Mitaka Series Release Notes

13.3.13

Known Issues

13.3.9

New Features

  • Added a new haproxy_extra_services var which will allow extra haproxy endpoint additions.
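
    A minimal sketch of an extra endpoint definition, assuming entries use the same structure as the default haproxy service definitions; the service name, backend group, port and options shown are purely illustrative:

    haproxy_extra_services:
      - service:
          haproxy_service_name: example_service
          haproxy_backend_nodes: "{{ groups['example_service_all'] | default([]) }}"
          haproxy_port: 8999
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk"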

  • The nova SSH public key distribution has been made much faster, especially when deploying against very large clusters. To support larger clusters the role has moved away from the "authorized_key" module and now generates a script to insert any keys that are missing from the authorized keys file. The script is saved on all nova compute nodes and can be found at /usr/local/bin/openstack-nova-key.sh. If there is ever a need to reinsert keys or fix issues on a given compute node, the script can be executed at any time without directly running the Ansible playbooks or roles.

Deprecation Notes

  • The haproxy_service_configs variable has been moved to haproxy_default_service_configs so that haproxy_service_configs can be modified and extended without overriding the entire default service dict.

13.3.8

Bug Fixes

  • SSLv3 is now disabled in the haproxy daemon configuration by default.

13.3.7

New Features

  • LXC containers will now generate a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This feature was implemented to resolve issues with dynamic mac addresses in containers generally experienced at scale with network intensive services.
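
    For example, to enable fixed MAC addresses a deployer could set the following in /etc/openstack_deploy/user_variables.yml:

    lxc_container_fixed_mac: true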

Bug Fixes

  • LXC containers will now have the ability to use a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This change will assist in resolving a long standing issue where network intensive services, such as neutron and rabbitmq, can enter a confused state for long periods of time and require rolling restarts or internal system resets to recover.

13.3.6

Upgrade Notes

  • When migrating from Liberty to Mitaka, neutron does not automatically set or migrate network MTU settings. Neutron has no migration to correctly set the MTU on existing networks, so an OSA MTU migration playbook has been created to work around this issue. The playbook sets the MTU on networks created before the upgrade by iterating over known values from the user_variables and facts. Should a known network name be encountered, the MTU will be set to the known value. If no MTU and no global override is present, the playbook falls back to 1500 for vlan/flat networks and 1450 for vxlan networks.

13.3.5

New Features

  • The os_nova role can now deploy a custom /etc/libvirt/qemu.conf file by defining qemu_conf_dict.
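
    A minimal sketch of such an override; the settings shown are standard libvirt qemu.conf options and are purely illustrative, and any valid qemu.conf key/value pairs may be supplied:

    qemu_conf_dict:
      max_files: 32768
      max_processes: 131072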

  • The openstack-ansible-galera_server role will now prevent deployers from changing the galera_cluster_name variable on clusters that already have a value set in a running galera cluster. You can set the new galera_force_change_cluster_name variable to True to force the galera_cluster_name variable to be changed. We recommend setting this by running the galera-install.yml playbook with -e galera_force_change_cluster_name=True, to avoid changing the galera_cluster_name variable unintentionally. Use with caution: changing the galera_cluster_name value can cause your cluster to fail, as the nodes will not join if restarted sequentially.

  • The LXC container creation process now has a configurable delay for the task which waits for the container to start. The variable lxc_container_ssh_delay can be set to change the default delay of five seconds.
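
    For example, to double the default wait on slow hardware:

    lxc_container_ssh_delay: 10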

Known Issues

  • It is not possible to override the ceph client credential for nova and cinder with ansible 1.9.4.

    See bug 1605302 for more details.

Upgrade Notes

  • New overrides are provided to allow for better customization of log file retention and rate limiting for UDP/TCP sockets:

    • rsyslog_server_logrotation_window defaults to 14 days

    • rsyslog_server_ratelimit_interval defaults to 0 seconds

    • rsyslog_server_ratelimit_burst defaults to 10000

  • The rsyslog.conf file now uses v7+ style configuration settings.

Bug Fixes

  • The pip_install_options variable is now honored during repo building. This variable allows deployers to specify trusted CA certificates by setting the variable to "--cert /etc/ssl/certs/ca-certificates.crt".
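
    For example, in /etc/openstack_deploy/user_variables.yml:

    pip_install_options: "--cert /etc/ssl/certs/ca-certificates.crt"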

  • The repo_build play now correctly evaluates environment variables configured in /etc/environment. This enables deployments in an environment with http proxies.

13.3.4

New Features

  • AIDE is configured to skip the entire /var directory when it does the database initialization and when it performs checks. This reduces disk I/O and allows these jobs to complete faster.

    This also allows the initialization to become a blocking process and Ansible will wait for the initialization to complete prior to running the next task.

  • Although the STIG requires martian packets to be logged, the logging is now disabled by default. The logs can quickly fill up a syslog server or make a physical console unusable.

    Deployers that need this logging enabled will need to set the following Ansible variable:

    security_sysctl_enable_martian_logging: yes
    

Upgrade Notes

  • All of the discretionary access control (DAC) auditing is now disabled by default. This reduces the amount of logs generated during deployments and minor upgrades. The following variables are now set to no:

    security_audit_DAC_chmod: no
    security_audit_DAC_chown: no
    security_audit_DAC_lchown: no
    security_audit_DAC_fchmod: no
    security_audit_DAC_fchmodat: no
    security_audit_DAC_fchown: no
    security_audit_DAC_fchownat: no
    security_audit_DAC_fremovexattr: no
    security_audit_DAC_lremovexattr: no
    security_audit_DAC_fsetxattr: no
    security_audit_DAC_lsetxattr: no
    security_audit_DAC_setxattr: no
    

Bug Fixes

  • The /run directory is excluded from AIDE checks since the files and directories there are only temporary and often change when services start and stop.

  • AIDE initialization is now always run on subsequent playbook runs when initialize_aide is set to yes. The initialization will be skipped if AIDE isn’t installed or if the AIDE database already exists.

    See bug 1616281 for more details.

  • The auditd rules for auditing V-38568 (filesystem mounts) were incorrectly labeled in the auditd logs with the key of export-V-38568. They are now correctly logged with the key filesystem_mount-V-38568.

13.3.3

New Features

  • The horizon_keystone_admin_roles variable is added to support the OPENSTACK_KEYSTONE_ADMIN_ROLES list in the horizon_local_settings.py file.
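
    For example, to expose additional admin-equivalent roles in Horizon (role names illustrative):

    horizon_keystone_admin_roles:
      - admin
      - cloud_admin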

  • The ability to support login user domain and login project domain has been added to the keystone module.

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    
  • LBaaS v2 panels in Horizon will automatically be enabled when LBaaS v2 is included in neutron_plugin_base.

  • Deployers can now configure tempest public and private networks by setting the following variables: tempest_private_net_provider_type to either vxlan or vlan, and tempest_public_net_provider_type to flat or vlan. Depending on the values chosen, deployers may also need to update other variables accordingly, mainly tempest_public_net_physical_type and tempest_public_net_seg_id. Please refer to http://docs.openstack.org/mitaka/networking-guide/intro-basic-networking.html for more neutron networking information.
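
    For example (values illustrative):

    tempest_private_net_provider_type: vxlan
    tempest_public_net_provider_type: flat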

Upgrade Notes

  • The fix for the broken bind mounts in the galera container (see bug 1609862 <https://launchpad.net/bugs/1609862> for details) will be applied to the LXC container configuration file, but a restart of each galera container is required to put the change into effect.

    Deployers can use the rolling restart functionality provided in the upgrade playbook within the main OpenStack-Ansible repository.

    Deployers can also shut down and power on galera containers one at a time manually if that method is preferred.

    This will also cause the old error logs in /var/log/mysql_logs in the container to become unavailable since the new bind mount is mounted on top of the existing logs directory within the container. If these logs are critical for a deployer to keep, the deployer should:

    1. Power off one Galera container

    2. Copy the logs from the container’s filesystem to /openstack/log/{{ inventory_hostname }} on the host filesystem

    3. Power on the Galera container

    4. Repeat for the other Galera containers

Bug Fixes

  • The bind mount for logs on the galera container was found to be broken in bug 1609862 <https://launchpad.net/bugs/1609862> and it has been fixed.

    NOTE: This fix is partially applied for existing OpenStack-Ansible deployments. See the upgrade section of the release notes for the work required to fully apply the fix.

  • The ability to support login user domain and login project domain has been added to the keystone module. This resolves https://bugs.launchpad.net/openstack-ansible/+bug/1574000

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    

13.3.2

New Features

  • A new variable has been added to allow a deployer to control the restart of containers via the handler. This new option is lxc_container_allow_restarts and has a default of true. If a deployer wishes to disable the auto-restart functionality they can set this value to false and automatic container restarts that are not absolutely required will be disabled.
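
    To disable the auto-restart functionality, set the following in /etc/openstack_deploy/user_variables.yml:

    lxc_container_allow_restarts: false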

Upgrade Notes

  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.

  • During upgrades, container and service restarts for the mariadb/galera cluster were being triggered multiple times, causing the cluster to become unstable and often unrecoverable. This situation has been improved immensely: restarts of the galera containers now only need to happen once, and are done in a controlled, predictable and repeatable way.

Bug Fixes

  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.

  • The --compact flag has been removed from xtrabackup options. This flag has been shown to cause crashes in some SST situations.

13.3.1

New Features

  • The py_pkgs lookup plugin now has strict ordering for the requirement files it discovers. These files are used to add additional requirements to the python packages discovered. The order is defined by the constant REQUIREMENTS_FILE_TYPES, which contains the following entries: 'test-requirements.txt', 'dev-requirements.txt', 'requirements.txt', 'global-requirements.txt', 'global-requirement-pins.txt'. The items in this list are arranged from least to most priority.

  • The os_horizon role now supports configuration of custom themes. Deployers can use the new horizon_custom_themes and horizon_default_theme variables to configure the dashboard with custom themes and default to a specific theme respectively.
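
    A minimal sketch, assuming a theme has already been placed where Horizon can find it; the theme name and the exact dictionary keys expected by horizon_custom_themes are illustrative and should be checked against the role defaults:

    horizon_custom_themes:
      - name: mytheme
        label: My Theme
    horizon_default_theme: mytheme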

  • The options of application logrotate configuration files are now configurable. rsyslog_client_log_rotate_options can be used to provide a list of directives, and rsyslog_client_log_rotate_scripts can be used to provide a list of postrotate, prerotate, firstaction, or lastaction scripts.
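
    For example, to pass standard logrotate directives through the new variable (directives shown are illustrative):

    rsyslog_client_log_rotate_options:
      - compress
      - copytruncate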

  • The os_swift role has 3 new variables that allow a deployer to change the hard, soft and fs.file-max limits. The hard and soft limits are added to the limits.conf file for the swift system user, and the fs.file-max settings are applied to storage hosts via kernel tuning. The new options are swift_hard_open_file_limits (default 10240), swift_soft_open_file_limits (default 4096) and swift_max_file_limits (default 24 times the value of swift_hard_open_file_limits).
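
    Expressed as overrides with their default values:

    # swift_max_file_limits defaults to 24 * swift_hard_open_file_limits
    swift_hard_open_file_limits: 10240
    swift_soft_open_file_limits: 4096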

  • The repo_build role now provides the ability to override the upper-constraints applied which are sourced from OpenStack and from the global-requirements-pins.txt file. The variable repo_build_upper_constraints_overrides can be populated with a list of upper constraints. This list will take the highest precedence in the constraints process, with the exception of the pins set in the git source SHAs.
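
    A minimal sketch; the pinned package and version are purely illustrative:

    repo_build_upper_constraints_overrides:
      - "requests==2.10.0"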

Known Issues

  • For OpenStack-Ansible Mitaka releases earlier than 13.3.1 the default container apt source used was http://mirror.rackspace.com/ubuntu. This mirror sometimes has broken package indexes or missing packages. The default package source has therefore been changed to use http://archive.ubuntu.com/ubuntu/ for packages and http://security.ubuntu.com/ubuntu for security packages.

Upgrade Notes

  • The default container apt sources have been changed from using http://mirror.rackspace.com/ubuntu to http://archive.ubuntu.com/ubuntu/ for packages and http://security.ubuntu.com/ubuntu for security packages. This is to resolve issues with unavailable packages during the install process due to incomplete mirror updates.

Critical Issues

  • Horizon deployments were broken due to an incorrect hostname setting being placed in the apache ServerName configuration. This caused Horizon to fail to start whenever debug was disabled.

13.3.0

New Features

  • A new option has been added to bootstrap-ansible.sh to set the role fetch mode. The environment variable ANSIBLE_ROLE_FETCH_MODE sets how role dependencies are resolved.

  • Horizon now has the ability to set arbitrary configuration options using global option horizon_config_overrides in YAML format. The overrides follow the same pattern found within the other OpenStack service overrides. General documentation on overrides can be found here.
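
    A minimal sketch of an override, assuming the keys map directly onto Horizon local_settings values; the setting shown is illustrative:

    horizon_config_overrides:
      SESSION_TIMEOUT: 1800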

Upgrade Notes

  • A new nova.conf entry, live_migration_uri, has been added. This entry defaults to a qemu+ssh:// URI, which uses the ssh keys that have already been distributed between all of the compute hosts.

  • Cleanup tasks are added to remove the nova console git directories /usr/share/novnc and /usr/share/spice-html5, prior to cloning these inside the nova vnc and spice console playbooks. This is necessary to guarantee that local modifications do not break git clone operations, especially during upgrades.

Bug Fixes

  • The standard collectstatic and compression process in the os_horizon role now happens after horizon customizations are installed, so that all static resources will be collected and compressed.

  • When upgrading it is possible for an old "neutron-ns-metadata-proxy" process to remain running in memory. If this happens the old version of the process can cause unexpected issues in a production environment. To fix this a task has been added to the os_neutron role that will execute a process lookup and kill any "neutron-ns-metadata-proxy" processes that are not running the current release tag. Once the old processes are removed, the running metadata agent will respawn everything needed within 60 seconds.

Other Notes

  • The run-playbooks.sh script has been refactored to run all playbooks using our core tool set and run order. The refactor work updates the old special case script to a tool that simply runs the integrated playbooks as they’ve been designed.

13.2.0

New Features

  • Apache MPM tunable support has been added to the os-keystone role in order to allow MPM thread tuning. Default values reflect the current Ubuntu default settings:

    keystone_httpd_mpm_backend: event
    keystone_httpd_mpm_start_servers: 2
    keystone_httpd_mpm_min_spare_threads: 25
    keystone_httpd_mpm_max_spare_threads: 75
    keystone_httpd_mpm_thread_limit: 64
    keystone_httpd_mpm_thread_child: 25
    keystone_httpd_mpm_max_requests: 150
    keystone_httpd_mpm_max_conn_child: 0
    

13.1.4

New Features

  • Deployers can now blacklist certain Nova extensions by providing a list of such extensions in the horizon_nova_extensions_blacklist variable, for example:

    horizon_nova_extensions_blacklist:
      - "SimpleTenantUsage"
    
  • The audit rules added by the security role now have key fields that make it easier to link the audit log entry to the audit rule that caused it to appear.

  • A conditional has been added to the _local_ip settings used in neutron_local_ip, removing the hard requirement for an overlay network to be set within a deployment. If no overlay network is set within the deployment, the local_ip will be set to the value of "ansible_ssh_host".

  • Added the horizon_apache_custom_log_format tunable to the os-horizon role for changing the CustomLog format. The default is "combined".

  • Added the keystone_apache_custom_log_format tunable for changing the CustomLog format. The default is "combined".
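
    For example, to switch both services to an Apache vhost_combined-style log format (the format string is illustrative):

    horizon_apache_custom_log_format: '%v:%p %h %l %u %t "%r" %>s %O "%{Referer}i" "%{User-Agent}i"'
    keystone_apache_custom_log_format: '%v:%p %h %l %u %t "%r" %>s %O "%{Referer}i" "%{User-Agent}i"'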

Bug Fixes

  • The role previously did not restart the audit daemon after generating a new rules file. The bug has been fixed and the audit daemon will be restarted after any audit rule changes.

  • When the security role was run in Ansible’s check mode and a tag was provided, the check_mode variable was not being set. Any tasks which depend on that variable would fail. This bug is fixed and the check_mode variable is now set properly on every playbook run.

  • Previously, the ansible_managed var was being used to insert a header into the swift.conf that contained date/time information. This meant that swift.conf across different nodes did not have the same MD5SUM, causing swift-recon --md5 to break. We now insert a piece of static text instead to resolve this issue.

13.1.3

Upgrade Notes

  • A new global variable has been created named openstack_domain. This variable has a default value of "openstack.local".

Bug Fixes

  • The security role previously set the permissions on all audit log files in /var/log/audit to 0400, but this prevents the audit daemon from writing to the active log file. This will prevent auditd from starting or restarting cleanly.

    The task now removes any permissions that are not allowed by the STIG. Any log files that meet or exceed the STIG requirements will not be modified.

  • The /var/lib/libvirt/qemu/save directory is now a symlink to {{ nova_system_home_folder }}/save. This resolves an issue where the default location used by the libvirt managed save command could cause the root partitions on compute nodes to become full when nova image-create is run on large instances.

13.1.2

New Features

  • The new LBaaS v2 dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_neutron_lbaas: True
    

    The tasks in the os_horizon role will determine which LBaaS version is in use (via neutron_plugin_base) and activate the correct panel for LBaaS v1 or v2.

  • Horizon’s IPv6 support is now configurable with Ansible variables. Deployers can enable IPv6 support in Horizon by setting the following variable:

    horizon_enable_ipv6: True
    

    Please note: Horizon will still display IPv6 addresses in various panels with IPv6 support disabled. However, it will not allow any direct management of IPv6 configuration.

  • The openstack-ansible-memcached_server role includes a new override, memcached_connections, which is used to configure the OS nofile limit. It is automatically calculated as the memcached connection limit plus an additional 1k. Without a proper nofile limit configuration, memcached can crash when supporting higher parallel TCP/memcache connection counts.

  • The fallocate_reserve option can now be set (in bytes) for Swift, to help prevent disks from filling up and to prevent a situation where Swift is unable to remove objects due to a lack of disk space. The fallocate_reserve value is set to a default of 10GB.
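
    A minimal sketch, assuming the role exposes this setting as swift_fallocate_reserve; the value shown is 10GB expressed in bytes:

    swift_fallocate_reserve: 10737418240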

  • An rsync module can be enabled per object server drive by setting swift_rsync_module_per_drive to True. This configures rsync and swift to use an individual rsync configuration per drive, which is required when disabling rsync to individual disks, for example in a disk-full scenario.
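
    To enable the per-drive rsync modules:

    swift_rsync_module_per_drive: True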

Upgrade Notes

  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. The default DHCP configuration to advertise an MTU to instances has therefore been removed from the variable neutron_dhcp_config.

  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. As such the neutron_network_device_mtu variable has been removed and the hard-coded values in the templates for advertise_mtu, path_mtu, and segment_mtu have been removed to allow upstream defaults to operate as intended.

  • A new nova admin endpoint will be registered with the suffix /v2.1/%(tenant_id)s. The nova admin endpoint with the suffix /v2/%(tenant_id)s may be manually removed.

  • The swift_max_rsync_connections default value has changed from 2 to 4 in order to match the OpenStack swift documented value.

Bug Fixes

  • The dictionary-based variables in defaults/main.yml are now individual variables. The dictionary-based variables could not be changed as the documentation instructed. Instead it was required to override the entire dictionary. Deployers must use the new variable names to enable or disable the security configuration changes applied by the security role. For more information, see Launchpad Bug 1577944.

  • Failed access logging is now disabled by default and can be enabled by changing security_audit_failed_access to yes. The rsyslog daemon checks for the existence of log files regularly and this audit rule was triggered very frequently, which led to very large audit logs.

  • The security role now handles ssh_config files that contain Match stanzas. A marker is added to the configuration file and any new configuration items will be added below that marker. In addition, the configuration file is validated for each change to the ssh configuration file.

  • The nova admin endpoint is now correctly registered as /v2.1/%(tenant_id)s instead of /v2/%(tenant_id)s.

  • The XFS filesystem is excluded from the daily mlocate crond job in order to conserve disk IO for large IOPS bursts due to updatedb/mlocate file indexing.

13.1.1

Bug Fixes

  • The check to validate whether an appropriate ssh public key is available to copy into the container cache has been corrected to check the deployment host, not the LXC host.

13.1.0

Known Issues

  • Paramiko version 2.0 requires the Python cryptography library. New system packages must be installed for this library. For OpenStack-Ansible versions <12.0.12, <11.2.15, <13.0.2 the system packages must be installed on the deployment host manually by executing apt-get install -y build-essential libssl-dev libffi-dev.

13.0.1

New Features

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the "_" in the inventory_hostname to "-". Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.
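
    For example, to use a custom container domain (domain name illustrative):

    lxc_container_domain: cloud.example.com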

  • The ability to support MultiStrOps has been added to the config_template action plugin. This change updates the parser to use the set() type to determine if values within a given key are to be rendered as MultiStrOps. If an override is used in an INI config file the set type is defined using the standard yaml construct of "?" as the item marker.

    # Example Override Entries
    Section:
      typical_list_things:
        - 1
        - 2
      multistrops_things:
        ? a
        ? b
    
    # Example Rendered Config:
    [Section]
    typical_list_things = 1,2
    multistrops_things = a
    multistrops_things = b
    
  • There is a new default configuration for keepalived, supporting more than 2 nodes.

  • In order to make use of the latest stable keepalived version, the variable keepalived_use_latest_stable must be set to True.
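
    For example, in /etc/openstack_deploy/user_variables.yml:

    keepalived_use_latest_stable: True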

  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.

  • Neutron VPN as a Service (VPNaaS) can now optionally be deployed and configured. Please see the OpenStack Networking Guide for details about what the service is and what it provides. See the VPNaaS Install Guide for implementation details.

Known Issues

  • In the latest stable version of keepalived there is a problem with the priority calculation when a deployer has more than five keepalived nodes. The problem causes the whole keepalived cluster to fail to work. To work around this issue it is recommended that deployers limit the number of keepalived nodes to no more than five or that the priority for each node is set as part of the configuration (cf. haproxy_keepalived_vars_file variable).

Upgrade Notes

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the "_" in the inventory_hostname to "-". Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.

  • The MariaDB wait_timeout setting is decreased to 1h to match the SQL Alchemy pool recycle timeout, in order to prevent unnecessary database session buildups.

  • There is a new default configuration for keepalived. When running the haproxy playbook, the configuration change will cause a keepalived restart unless the deployer has used a custom configuration file. The restart will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.

  • A new version of keepalived will be installed on the haproxy nodes if the variable keepalived_use_latest_stable is set to True and more than one haproxy node is configured. The update of the package will cause keepalived to restart and therefore will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.

  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.

  • Percona Xtrabackup has been removed from the Galera client role.

Deprecation Notes

  • The variables `galera_client_package_*` and `galera_client_apt_percona_xtrabackup_*` have been removed from the role as Xtrabackup is no longer deployed.

Security Issues

  • A sudoers entry has been added to the repo_servers in order to allow the nginx user to stop and start nginx via the init script. This is implemented in order to ensure that the repo sync process can shut off nginx while synchronising data from the master to the slaves.

  • Horizon disables password autocompletion in the browser by default, but deployers can now enable autocompletion by setting horizon_enable_password_autocomplete to True.
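
    To enable autocompletion:

    horizon_enable_password_autocomplete: True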

Bug Fixes

  • In order to ensure that the appropriate data is delivered to requesters from the repo servers, the slave repo_server web servers are taken offline during the synchronisation process. This ensures that the right data is always delivered to the requesters through the load balancer.

13.0.0

New Features

  • Ubuntu has 4 different 'components' - main, universe, multiverse and restricted:

    • Main: Officially supported software.

    • Restricted: Supported software that is not available under a completely free license.

    • Universe: Community maintained software, i.e. not officially supported software.

    • Multiverse: Software that is not free.

    The default apt sources configuration is now set to only include the main and universe components as those are the only required components for a functional deployment. If deployers wish to include other components then the variable lxc_container_template_apt_components may be set in /etc/openstack_deploy/user_variables.yml with the full list of desired components.
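
    For example, to include all four components, a deployer could set the following in /etc/openstack_deploy/user_variables.yml:

    lxc_container_template_apt_components:
      - main
      - universe
      - multiverse
      - restricted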

  • Ceilometer now uses the Keystone v3 endpoint. The 'identity_uri' directive has been removed since it is unused, and 'region_name' has been added. The directives under 'service_credentials' have been updated to support the keystoneauth library.

  • Added a function in dynamic_inventory.py to improve the identification of incorrect settings inside the user config files.

  • Deployers can optionally set a UID and/or GID for the nova user and group. This is helpful for environments with shared storage.

  • A new variable called lxc_container_cache_files has been implemented which contains a list of dictionaries that specify files on the deployment host which should be copied into the LXC container cache and what attributes to assign to the copied file.
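
    A minimal sketch of one such entry; the source path is hypothetical and the keys shown (src, dest, owner, group, mode) mirror typical Ansible copy attributes, so they should be checked against the role defaults:

    lxc_container_cache_files:
      - src: /etc/openstack_deploy/files/nsswitch.conf
        dest: /etc/nsswitch.conf
        owner: root
        group: root
        mode: "0644"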

  • The haproxy-install.yml playbook will now be run as a part of setup-infrastructure.yml.

  • Logs for haproxy can now be found in /openstack/log/<haproxy host name>-haproxy/ on the container host (if haproxy is in a container), or /var/log/haproxy/ if haproxy is installed directly on a host.

  • Horizon deployment now supports an operator provided customization module which can be configured using the horizon_customization_module variable. Please see the Horizon documentation for more information.

  • Keystone’s v3 API is now the default for all services.

  • TLS certificate chain verification during the download of LXC cached images can now be toggled using the configuration variable 'lxc_cache_validate_certs'. The default behavior is to validate the certificate chain.
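
    To disable certificate validation (not generally recommended):

    lxc_cache_validate_certs: false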

  • Keystone can now be configured for multiple LDAP or Active Directory identity back-ends. Configuration of this feature is documented in the Keystone Configuration section of the Install Guide.

  • LBaaS v2 is available for deployment in addition to LBaaS v1. Both versions are mutually exclusive and cannot be running at the same time. Deployers will need to re-create any existing load balancers if they switch between LBaaS versions. Switching to LBaaS v2 will stop any existing LBaaS v1 load balancers.

  • Neutron Firewall as a Service (FWaaS) can now optionally be deployed and configured. Please see the FWaaS Configuration Reference for details about what the service is and what it provides. See the FWaaS Install Guide for implementation details.

  • Deployers can set a default availability zone (AZ) for new instance builds which do not provide an AZ. The value is None by default, but it can be changed with the nova_default_schedule_zone Ansible variable.
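
    For example, to send instance builds that do not specify an AZ to a specific zone (zone name illustrative):

    nova_default_schedule_zone: zone1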

  • Two new variables (rabbitmq_async_threads and rabbitmq_process_limit) have been added to the openstack-ansible-rabbitmq_server role. The variable rabbitmq_async_threads limits the number of asynchronous threads for file and socket I/O. The variable rabbitmq_process_limit limits the overall number of supported processes inside the erlang VM.
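
    Expressed as overrides, using the values referenced in the upgrade notes below:

    rabbitmq_async_threads: 128
    rabbitmq_process_limit: 1048576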

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.
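
    To disable SSL for RabbitMQ communication, set the following in /etc/openstack_deploy/user_variables.yml:

    rabbit_use_ssl: false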

  • Repeatable deployments are now easier since the manifest files for OpenStack software use the exact content from an upstream repository. Specific commits or tags can be referenced within the manifest. The yaprt package is no longer used to build the repo.

  • Developers can specify additional python packages for the repo build process by creating YAML files within /etc/openstack_deploy/. Refer to the documentation on adding packages for more details.

Upgrade Notes

  • Ubuntu has 4 different 'components' - main, universe, multiverse and restricted:

    • Main: Officially supported software.

    • Restricted: Supported software that is not available under a completely free license.

    • Universe: Community maintained software, i.e. not officially supported software.

    • Multiverse: Software that is not free.

    The default apt sources configuration is now set to only include the main and universe components as those are the only required components for a functional deployment. If deployers wish to include other components then the variable lxc_container_template_apt_components may be set in /etc/openstack_deploy/user_variables.yml with the full list of desired components.

  • The pip_get_pip_options override has been removed from group_vars, resulting in the empty default being used. Previously this method was used to lock the pip install source appropriately, but this has been handled through the pip_lock_down role for several cycles. If a deployer has implemented an override in user_variables based on the previous group_vars settings then the settings should be reviewed. This override may now be used for catering to situations where the pip installation requires extra options set to allow installing through a proxy, or disabling certificate validation.

  • The dynamic_inventory.py script now takes a --config argument rather than a --file argument.

  • Deployers can optionally remove the Keystone v2 endpoints from the database. Those endpoints will not be removed by the upgrade process.

  • The distribution of the .my.cnf database access configuration file which contains sensitive root credentials has now been limited to only be distributed to containers and hosts which require it for troubleshooting purposes. It is recommended that this file be removed from all hosts and containers. The only containers that should have the file are the Utility container and the Galera containers. This may be done by executing ansible 'all:!galera:!utility' -m shell -a 'rm -f /root/.my.cnf' from the /opt/openstack-ansible/ directory.

  • The first tier of the keystone_ldap dictionary variable now relates to the Keystone Domain name. An existing keystone_ldap configuration entry can be converted by renaming the ldap key to the domain name 'Default'. Note that the domain name entry is case-sensitive.

  • The keystone_ldap_identity_driver variable has been removed. The driver for an LDAP back-end in Keystone now simply uses the value 'ldap'. There are no other back-end options for Keystone at this time.

  • Existing LBaaS v1 load balancers and agents will not be altered by the new OpenStack-Ansible release.

  • Database migration tasks have been added for the FWaaS neutron plugin.

  • Neutron notifications to nova will now use the internal API endpoint instead of the public endpoint.

  • The repo-clone-mirror.yml play has been removed as it is no longer used by the project.

  • A new database called nova_api has been created. This database has its own user credentials and nova-manage db sync process. For the database password there is a new variable entry in user_secrets.yml called nova_api_container_mysql_password.

  • The number of erlang asynchronous threads used by RabbitMQ has been increased from the default of 32 to 128 in order to speed up message processing.

  • The maximum erlang process limit for RabbitMQ has been set to 1048576 in order to prevent virtual machine lockups which are caused when this limit has been reached.

  • The deployment configuration file openstack_environment.yml has been removed and is no longer used in the dynamic inventory generation process. This file was previously rendered functionally irrelevant to the inventory generation process in the Liberty release.

  • The plugins folders have been renamed to the default names used in Ansible 2.x. This is part of the preparation for Ansible 2.x readiness. The renames are specifically actions > action, callbacks > callback, filters > filter, lookups > lookup.

  • The git source for python2_lxc was used in the past as the package was not available on pypi. Now that the package has been published the py_from_git role dependency has been removed from the lxc_hosts playbook, the role has been removed from the required roles list and the repo details have been removed from the repo_packages files as none of these details are required any more.

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.

  • The os_swift and os_swift_sync role have been merged into the single os_swift role. Two variables (swift_do_setup and swift_do_sync) have been implemented to action the install and synchronise code paths. The separate playbooks have been adjusted to make use of these variables to ensure that the behaviour is exactly the same as before.

  • The neutron_plugin_base variable has been modified to use the friendly names. Deployers should change any customisations to this variable to ensure that the customisation makes use of the short names instead of the full class path.

  • Database migration tasks have been added for the LBaaS neutron plugin.

  • The repo-store-source.yml playbook has been removed as it is no longer needed.

  • The variable neutron_service_names has been removed. A more efficient way of determining the list of Neutron services for service restarts has been implemented.

Deprecation Notes

  • The dynamic_inventory.py script now takes a --config argument rather than a --file argument.

  • The old class path names used within the neutron_plugin_base have been deprecated in favor of the friendly names. Support for the use of the class path plugins will be removed in the OpenStack Newton cycle.

Security Issues

  • A sudoers entry is added to the repo_servers to allow the nginx user to stop and start NGINX from the init script. This ensures that the repo sync process can shut off NGINX while synchronizing data from master to slaves.

  • The distribution of the .my.cnf database access configuration file which contains sensitive root credentials has now been limited to only be distributed to containers and hosts which require it for troubleshooting purposes.

  • When enabled, Neutron Firewall as a Service (FWaaS) provides projects the option to implement perimeter security (filtering at the router), adding to filtering at the instance interfaces which is provided by 'Security Groups'.

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.

Bug Fixes

  • Containers might fail to retrieve packages from the repo server when connecting to a slave repo server that has not finished synchronizing. For more information, see https://bugs.launchpad.net/openstack-ansible/+bug/1543146. This is addressed by adding pre and post hooks into lsyncd to connect to the slave repo servers and disable NGINX for the duration for the sync.

  • The addition of multi-domain LDAP configuration support left behind a configuration file for the default domain that causes problems with Keystone. This file will automatically be removed if the deployer is not using the Default domain with an LDAP back end. (Bug 1547542)

  • The python packages pip, setuptools and wheel are now all pinned on a per-tag basis. The pins are updated along with every OpenStack Service update. This is done to ensure a consistent build experience with the latest available packages at the time the tag is released. A deployer may override the pins by adding a list of required pins using the pip_packages variable in user_variables.yml.
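
    A minimal sketch of such an override; the versions shown are purely illustrative:

    pip_packages:
      - "pip==8.1.2"
      - "setuptools==21.2.2"
      - "wheel==0.29.0"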

Other Notes

  • Neutron notifications to nova will now use the internal API endpoint instead of the public endpoint.