Stein Series Release Notes

10.6.2-114

New Features

  • Adds support for IGMP snooping (Multicast) in the Neutron ML2/OVS driver.

  • Added the configuration option to set reserved_huge_pages. When NovaReservedHugePages is set, “reserved_huge_pages” is set to the value of NovaReservedHugePages. If NovaReservedHugePages is unset and OvsDpdkSocketMemory is set, the reserved_huge_pages value is calculated from KernelArgs and OvsDpdkSocketMemory. KernelArgs determines the default huge page size used (the default is 2048 kB), and OvsDpdkSocketMemory determines the number of huge pages to reserve.
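
    As a minimal sketch, assuming a ComputeOvsDpdk role and nova’s “node:N,size:S,count:C” format for reserved_huge_pages entries (the values are illustrative only):

    parameter_defaults:
      ComputeOvsDpdkParameters:
        NovaReservedHugePages:
          - "node:0,size:2048,count:64"
          - "node:1,size:2048,count:64"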

  • Added the Octavia anti-affinity parameters.

  • Added support for running the Octavia driver agent in a container. This enables features such as the OVN load balancer provider in Octavia, as well as other third-party providers.

  • Add new role parameters NovaCPUAllocationRatio, NovaRAMAllocationRatio and NovaDiskAllocationRatio, which allow configuring cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio. The default value is 0.0 for NovaCPUAllocationRatio, 1.0 for NovaRAMAllocationRatio and 0.0 for NovaDiskAllocationRatio.

    The default values for the CPU and Disk allocation ratios are taken as 0.0, as described in [1]. [1] https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/initial-allocation-ratios.html
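
    For instance, to override the ratios for the Compute role via role-specific parameters (values are illustrative):

    parameter_defaults:
      ComputeParameters:
        NovaCPUAllocationRatio: 4.0
        NovaRAMAllocationRatio: 1.0
        NovaDiskAllocationRatio: 1.5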

  • Add a role specific parameter, ContainerCpusetCpus, defaulting to ‘all’, which allows limiting the specific CPUs or cores a container can use. To disable it and rely on the container engine default, set it to ‘’.
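
    For example, to restrict containers on the Compute role to a subset of cores (an illustrative sketch):

    parameter_defaults:
      ComputeParameters:
        ContainerCpusetCpus: '2-7'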

  • Add the boolean parameter NovaSchedulerLimitTenantsToPlacementAggregate, which allows setting the scheduler/limit_tenants_to_placement_aggregate parameter to enable tenant isolation with placement. It ensures that hosts in a tenant-isolated host aggregate and availability zone are only available to a specific set of tenants. The default value for NovaSchedulerLimitTenantsToPlacementAggregate is false.

  • Add the boolean parameter NovaSchedulerPlacementAggregateRequiredForTenants, which allows setting the scheduler/placement_aggregate_required_for_tenants parameter. It controls whether a tenant with no aggregate affinity is allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, this should be False. If all tenants should be confined via aggregate, this should be True. The default value for NovaSchedulerPlacementAggregateRequiredForTenants is false.

  • Add the boolean parameter NovaSchedulerQueryPlacementForAvailabilityZone, which sets the scheduler/query_placement_for_availability_zone parameter. It allows the scheduler to look up a host aggregate whose availability zone metadata key matches the value in the incoming request, and to limit the placement results to that aggregate. The default value for NovaSchedulerQueryPlacementForAvailabilityZone is false. A combined example for these three scheduler options is shown below.
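
    A combined sketch of the three scheduler options above in a single environment file (values are illustrative, not defaults):

    parameter_defaults:
      NovaSchedulerLimitTenantsToPlacementAggregate: true
      NovaSchedulerPlacementAggregateRequiredForTenants: false
      NovaSchedulerQueryPlacementForAvailabilityZone: true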

  • HA services use a special container image name derived from the one configured in the Heat parameter, plus a fixed tag part, i.e. ‘<registry>/<namespace>/<servicename>:pcmklatest’. To implement rolling updates without service disruption, this ‘pcmklatest’ tag is adjusted automatically during minor updates every time a new image is pulled. A new Heat parameter, ClusterCommonTag, can now control the prefix part of the container image name. When set to true, the container name for HA services will look like ‘container-common-tag/<servicename>:pcmklatest’. This allows rolling updates of HA services even when the <namespace> changes in Heat.

  • Enabled additional healthchecks for Swift to monitor the account, container and object replicators, as well as the rsync process.

Bug Fixes

  • The parameter ControlPlaneSubnetCidr was missing in the network/ports/net_vip_map_external.j2.yaml and network/ports/net_vip_map_external_v6.j2.yaml template files. This caused deployment failures, since the VipMap resource passes this property. (See Bug: #1864912)

  • When deploying a spine-and-leaf (L3 routed architecture) with TLS enabled for internal endpoints, the deployment would fail because some roles are not connected to the network mapped to the service in ServiceNetMap. To fix this issue, a role specific parameter, {{role.name}}ServiceNetMap, is introduced (defaults to: {}). The role specific ServiceNetMap parameter allows the operator to override one or more service network mappings per role. For example:

    ComputeLeaf2ServiceNetMap:
      NovaLibvirtNetwork: internal_api_leaf2
    

    The role specific {{role.name}}ServiceNetMap override is merged with the global ServiceNetMap when it’s passed as a value to the {{role.name}}ServiceChain resources and the {{role.name}} resource groups, so that the correct network for this role is mapped to the service.

    Closes bug: 1904482.

  • Fixed an issue where Octavia controller services were not properly configured.

  • Fixes an issue where filtering of networks for kerberos service principals was too aggressive, causing deployment failure. See bug 1854846.

  • Fixed an issue where containers octavia_api and octavia_driver_agent would fail to start on node reboot.

  • Fix Swift ring synchronization to ensure every node on the overcloud has the same copy to start with. This is especially required when replacing nodes or using manually modified rings.

10.6.2

New Features

  • Added the “connection_logging” parameter for the Octavia service.

  • Three new parameter options have been added to the Octavia service: OctaviaConnectionMaxRetries, OctaviaBuildActiveRetries and OctaviaPortDetachTimeout.
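
    For example (the values below are purely illustrative, not the defaults):

    parameter_defaults:
      OctaviaConnectionMaxRetries: 300
      OctaviaBuildActiveRetries: 120
      OctaviaPortDetachTimeout: 300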

  • deep_compare is now enabled by default for stonith resources, allowing their properties to be updated via stack update. To disable it, set ‘tripleo::fencing::deep_compare: false’.

  • Added NeutronPermittedEthertypes to allow configuring additional ethertypes on neutron security groups for L2 agents that support it.

  • Added a new Heat parameter, OVNOpenflowProbeInterval, to set ovn_openflow_probe_interval, which is the inactivity probe interval, in seconds, of the OpenFlow connection to the Open vSwitch integration bridge. If the value is zero, the connection keepalive feature is disabled; if nonzero, it is forced to a value of at least 5s. By default this value is set to 60s.

  • Under pressure, the default monitor timeout value of 20 seconds is not enough to prevent unnecessary failovers of the ovn-dbs pacemaker resource. When spawning a few VMs at the same time, this could lead to unnecessary movements of the master DB, then re-connections of ovn-controllers (slaves are read-only), further peaks of load on the DBs, and in the end a snowball effect. This value is now configurable via OVNDBSPacemakerTimeout, which configures tripleo::profile::pacemaker::ovn_dbs_bundle (the default is 60s).

Bug Fixes

  • Restart certmonger after registering the system with IPA. This prevents cert requests from not completing correctly when doing a brownfield update.

  • If nova-api is delayed starting, then nova_wait_for_compute_service can time out. A deployment using a slow/busy remote container repository is particularly susceptible to this issue. To resolve this, nova_compute and nova_wait_for_compute_service have been postponed to step_5, and a task has been added to step_4 to ensure nova_api is active before proceeding. Resolves Bug 1842948.

Other Notes

  • Add the “port_forwarding” service plugin and L3 agent extension, enabled by default when the Neutron ML2 plugin with the OVS driver is used. A new config option, “NeutronL3AgentExtensions”, is also added. This new option allows setting the list of L3 agent extensions that the agent should use.
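
    For example, to set the extension list explicitly (a minimal sketch):

    parameter_defaults:
      NeutronL3AgentExtensions: ['port_forwarding']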

  • Add the “radvd_user” configuration parameter to the Neutron L3 container. This parameter defines the user passed to radvd. The default value is “root”.

10.6.1

New Features

  • ContainerImageRegistryLogin has been added to indicate whether login calls should be issued by the container engine on deployment. The default is set to false.

  • Values specified in ContainerImageRegistryCredentials will now be used to issue a login call when deploying the container engine on the hosts, if ContainerImageRegistryLogin is set to true.
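
    A sketch of the two parameters used together (the registry hostname and credentials are placeholders):

    parameter_defaults:
      ContainerImageRegistryLogin: true
      ContainerImageRegistryCredentials:
        registry.example.com:
          myuser: mypassword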

  • Created an ExtraKernelPackages parameter to allow users to install additional kernel-related packages prior to loading the kernel modules defined in ExtraKernelModules.

  • When running config-download manually, fact gathering at the play level can now be controlled with the gather_facts Ansible boolean variable.

  • Add ContainerNovaLibvirtUlimit to configure Ulimit for containerized Libvirt. Defaults to nofile=131072,nproc=126960.

  • Add the parameter NovaLibvirtMemStatsPeriodSeconds, which allows setting the libvirt/mem_stats_period_seconds parameter value, i.e. the number of seconds in the memory usage statistics period; a zero or negative value disables memory usage statistics. The default value for NovaLibvirtMemStatsPeriodSeconds is 10.

  • Adds the LibvirtLogFilters parameter to define a filter to select a different logging level for a given category of log outputs, as specified in https://libvirt.org/logging.html . Default: ‘1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util’

  • Adds the LibvirtTLSPriority parameter to override the compile-time default TLS priority string. Default: ‘NORMAL:-VERS-SSL3.0:-VERS-TLS-ALL:+VERS-TLS1.2’

  • Added the OVNRemoteProbeInterval parameter, which sets the inactivity probe interval of the JSON session from ovn-controller to the OVN SB database. By default this is 5s, which may not be sufficient on loaded systems or during high control-plane activity spikes, leading to unnecessary reconnections to the OVSDB server. The default is now extended to 1 min, and it is configurable via OVNRemoteProbeInterval.

  • Introduce a PacemakerTLSPriorities parameter, which will set the PCMK_tls_priorities config option in /etc/sysconfig/pacemaker and the PCMK_tls_priorities variable inside the bundle. This, when set, allows an operator to specify what kind of GNUTLS ciphers are desired for the pacemaker control port.

Bug Fixes

  • Enable the VFIO module on boot for SR-IOV deployments. Before this change, on SR-IOV capable deployments, vfio_iommu_type1 would not be loaded when rebooting a compute node, causing guest instances with a VF/PF to fail to start/spawn.

  • The passphrase for the config option ‘server_certs_key_passphrase’ is used as a Fernet key in Octavia and thus must be 32 bytes long. In the case of an operator-provided passphrase, TripleO will validate this.

  • Certain nova containers require more locked memory than the default limit of 16KiB. Increase the default memlock to 64MiB via DockerNovaComputeUlimit.

    As this is only a maximum limit and not a pre-allocation, this will not increase the memory requirements for all nova containers. To date the only container to require this is nova_cell_v2_discover_hosts, which is short-lived.

  • Recent changes, e.g. for edge scenarios, intentionally moved host discovery from the controller to the bootstrap compute node. The task is triggered by the deploy-identifier to make sure it gets run on any deploy, scale, etc. run. If a deploy run is triggered with the --skip-deploy-identifier flag, discovery is not triggered at all, causing failures in previously supported scenarios. This change moves the host discovery task to an Ansible deploy_steps_tasks, so that it gets triggered even if --skip-deploy-identifier is used or the compute bootstrap node is blacklisted.

  • Deployment with an NFS share enabled for nova ephemeral storage failed: Podman fails to relabel with NFS mounted in /var/lib/nova/instances, and the container fails to start with “operation not supported”. This change only sets the z flag for /var/lib/nova when NFS is not enabled for the compute.

10.6.0

New Features

  • The parameter {{role.name}}RemovalPoliciesMode can be set to ‘update’ to reset the existing blacklisted nodes in Heat. This helps re-use node indexes when required.

  • Allows a deployer to specify the IdM domain with --domain on the ipa-client-install invocation by providing the IdMDomain parameter.

  • Allows a deployer to direct ipa-client-install to skip NTP setup by specifying the IdMNoNtpSetup parameter. This is useful if ipa-client-install would otherwise clobber the NTP setup done by puppet.

  • Add the GlanceImageCacheDir parameter to set the base directory that the image cache uses. Add the GlanceImageCacheMaxSize parameter to set the upper limit on the cache size, in bytes, after which the cache-pruner cleans up the image cache. Add the GlanceImageCacheStallTime parameter to set the amount of time to let an image remain in the cache without being accessed.
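
    For example (illustrative values; the size is in bytes and the stall time in seconds):

    parameter_defaults:
      GlanceImageCacheDir: /var/lib/glance/image-cache
      GlanceImageCacheMaxSize: 10737418240
      GlanceImageCacheStallTime: 86400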

  • New parameters, NovaCronDBArchivedMaxDelay and CinderCronDbPurgeMaxDelay, are introduced to configure the max_delay parameter used to calculate a randomized sleep time before the db archive/purge. This avoids db collisions when performing db archive/purge operations on multiple controller nodes.

  • Introduce a new tag into roles that will create the external_bridge (usable only with multiple-nics).

  • The passphrase for the config option ‘server_certs_key_passphrase’, which was recently added to Octavia, will now be auto-generated by TripleO by adding OctaviaServerCertsKeyPassphrase to the list of parameters TripleO configures in Octavia.

  • To allow PAM to create home directories for users who do not have one, ipa-client-install needs an option. This change allows enabling it.

  • Add the parameter NovaLiveMigrationWaitForVIFPlug, which allows setting live_migration_wait_for_vif_plug, which in turn controls whether to wait for network-vif-plugged events before starting guest transfer. The default value for the parameter is true.

  • Configure the Neutron API for Nova placement. When the Neutron Routed Provider Networks feature is used in the overcloud, the Networking service will use those credentials to communicate with the Compute scheduler’s placement API.

  • The parameters NovaNfsEnabled, NovaNfsShare, NovaNfsOptions and NovaNfsVersion are changed to be role specific. This requires the use of host aggregates, because live migration of instances between different storage backends is not possible.
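
    A minimal sketch for a single role (the ComputeNfs role name, share and values are hypothetical):

    parameter_defaults:
      ComputeNfsParameters:
        NovaNfsEnabled: true
        NovaNfsShare: '192.168.122.1:/export/nova'
        NovaNfsVersion: '4.2'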

  • Add the role parameter NovaLibvirtNumPciePorts, which sets libvirt/num_pcie_ports to specify the number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) for a target instance; some are used by default, and the rest are available for hotplug use. When using the ‘q35’ machine type, by default only a single PCIe device can be hotplugged, and Nova currently sets ‘num_pcie_ports’ to “0” (which means it defaults to libvirt’s “1”), which is not sufficient for hotplug use. The default for NovaLibvirtNumPciePorts is 16.

  • Added OVN-DPDK support

  • Introduced two new numeric parameters, OvsRevalidatorCores and OvsHandlerCores, to set the values of n-revalidator-threads and n-handler-threads on Open vSwitch.

Upgrade Notes

  • During upgrade, users will need to create a custom roles_data.yaml and remove external_bridge from the tags to be sure that the bridge will not be added.

  • The new role variable update_serial is introduced, allowing parallel update execution. On the Controller role this variable defaults to 1, as pacemaker has to be taken down and up in a rolling fashion. Otherwise the default value is 25, as that is the default for parallel Ansible execution used by TripleO.
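
    For example, in a custom roles_data.yaml (a sketch showing only the relevant key):

    - name: Controller
      update_serial: 1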

Deprecation Notes

  • The only OVN tunnel encapsulation type that we support in OVN is Geneve, and this is set by default in the OVN puppet module, so there is no need to set it in TripleO.

Bug Fixes

  • When changing the name_lower of the InternalApi network and using the service_net_map_replace option in network data, the subnet referenced in VipSubnetMapDefaults did not take into account the custom lowercase name for the network, causing a deployment error. See bug: 1832461.

  • Fixes an issue where deployment would fail if a non-default name_lower is used in network data for one of the networks: External, InternalApi or StorageMgmt. (See bug: 1830852.)

  • Fixed service auth URL in Octavia to use the Keystone v3 internal endpoint.

  • With 405366fa32583e88c34417e5f46fa574ed8f4e98 the parameters RpcPort, RpcUserName, RpcPassword and RpcUseSSL were deprecated and nova::rabbitmq_port was removed. As a result the healthcheck was called with a null parameter and failed. We now get the global_config_settings from RabbitMQService and use oslo_messaging_rpc_port for the healthcheck.

  • Change-Id: I1a159a7c2ac286373df2b7c566426b37b7734961 moved the discovery to run on a single compute host, to avoid races between simultaneous nova-manage commands. This change makes sure we run the discovery on every deploy run, which is required for scale-up events.

Other Notes

  • The EndpointMap parameter is now required by the post_deploy templates. So if a user overrides OS::TripleO::NodeExtraConfigPost with another template, that template needs to have an EndpointMap parameter to work correctly.

10.5.0

New Features

  • Added the configuration option to disable Exact Match Cache (EMC)

  • A new parameter, CinderEtcdLocalConnect, is available for the CinderVolume service. When deploying the service A/A, the parameter can be set to true, which will configure cinder-volume to connect to Etcd locally through the node’s own IP instead of going through a VIP.

  • The Etcd service is added to the DistributedCompute and DistributedComputeHCI roles for Active/Active management of the CinderVolume service.

  • Added ability to rewrap project KEKs (key encryption keys) when doing an upgrade. This allows deployers to rewrap KEKs whenever they rotate the master KEK and HMAC keys when using the PKCS#11 plugin behind Barbican.

  • Also added some needed ordering for master key creation, sync and update when using a Thales HSM behind Barbican.

  • Podman is now the default ContainerCli, unless you deploy Pacemaker; then you must run Docker when deploying on CentOS 7.

  • A new option, host_routes, is now available for subnet definitions in undercloud.conf.

    • Host routes specified for the local_subnet will be added to the routing table on the Undercloud.

    • Host routes for all subnets are passed to tripleo-heat-templates, so that the host_routes property of the ctlplane subnets is set accordingly when installing the Undercloud.

  • ContainerHealthcheckDisabled is a new parameter which allows disabling the container healthcheck management in Paunch.

  • Adds the ability to set external_resource_network_id for the network, external_resource_vip_id for the network VIP, external_resource_subnet_id for the subnet(s), and external_resource_segment_id for the segment(s) to network_data.yaml. When setting these properties, the external_id attribute will be set on the corresponding Heat resources. This causes Heat to not re-create these resources and instead adopt them from outside the stack.
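
    A hedged network_data.yaml sketch (the UUID is a placeholder):

    - name: External
      vip: true
      external_resource_network_id: 5f2b1c9a-0d3e-4f5a-9c1b-7e8d6a4b2c10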

  • A new service, OS::TripleO::Services::NovaAZConfig, is available, which can be used to create a host aggregate and availability zone in Nova during the deployment. Compute nodes in the deployment will also be added to the zone. The zone name is set with the parameter value NovaComputeAvailabilityZone. If left unset, it will default to the root stack name. By default the service is mapped to None, but it can be enabled by including environments/nova-az-config.yaml.

  • The parameter NovaRbdPoolName is changed to be role specific. This requires the use of host aggregates, because live migration of instances between different storage backends is not possible.

  • The new parameter NovaNfsVersion allows configuring the NFS version used for nova storage (when NovaNfsEnabled is true). Since NFSv3 does not support full locking, an NFSv4 version needs to be used. To not break current installations, the default is the previously hard-coded version 4.

  • The new parameter OctaviaAmphoraImageFormat adds flexibility to select the amphora image format without forcing the use of the NovaEnableRbdBackend parameter.

  • When deploying with internal TLS, the Octavia API now runs as an Apache WSGI application improving support for IPv6 and performance.

  • The Ansible timezone module is now used to manage the system timezone for the deployed systems.

  • The get_attr function is now used to read the gateway_ip of a port’s subnet. The gateway_ip value is passed to nic config templates using the %network%InterfaceDefaultRoute parameter. (This parameter is only used if the network is present in the role’s default_route_networks.) Using get_attr ensures that the correct gateway IP address is used when networks have multiple subnets.

Upgrade Notes

  • Removes UpgradeRemoveUnusedPackages parameter and some service upgrade_tasks that use this parameter to remove any unused packages.

  • When deploying with internal TLS, previous versions configured a separate TLS proxy to provide a secure access point for the Octavia API. This is now implemented by running the Octavia API as an Apache WSGI application and the Octavia TLS Proxy will be removed during updates and upgrades.

Deprecation Notes

  • The nova-placement service is deprecated in Stein and will be replaced in Train by an extracted Placement API service.

  • As of Rocky [1], the nova-consoleauth service has been deprecated and cell databases are used for storing token authorizations. All new consoles will be supported by the database backend and existing consoles will be reset. Console proxies must be run per cell because the new console token authorizations are stored in cell databases.

    Let’s deprecate it in TripleO as well, so that it can be removed in a later release.

    [1] https://docs.openstack.org/releasenotes/nova/rocky.html

  • Managing timezone via puppet is now deprecated.

Bug Fixes

  • Fixes an issue that caused a subnet to be wrongly created on the Undercloud provisioning network based on environment default values. If the default ctlplane-subnet was renamed in undercloud.conf, the defaults for ctlplane-subnet in environments/undercloud.yaml were merged with the subnets defined in undercloud.conf. See bug 1820330.

  • ServiceNetMap now handles any network name when computing the default network for each service in ServiceNetMapDefaults.

  • With a large number of OSDs, where each OSD needs a connection, the default nofile limit (1024) of nova_compute is too small. This changes the default DockerNovaComputeUlimit to 131072, which is the same as for cinder.

  • With cells v2 multi-cell, each cell needs a novnc proxy, as the console token is stored in the cell conductor database. This change adds the NovaVncProxy service to the CellController role and configures the endpoint to the local public address of the cell.

  • If the nova-manage command was triggered on a host for the first time as root (usually on manual runs), nova-manage.log was created as the root user. On overcloud deploy runs, the nova-manage command runs as the nova user. In that situation the overcloud deploy fails, as the nova user cannot write to nova-manage.log. With this change we chown the log files on every overcloud deploy to fix the nova-manage.log file permissions.

Other Notes

  • Congress was removed, as it seems nobody used it. Therefore, we don’t need to keep supporting it.

10.4.0

New Features

  • Adds a specific upgrade hiera file. This is currently used to override variables during upgrade.

  • Introduce a new parameter, ContainerLogStdoutPath. It must be an absolute path to a directory where podman will output all containers’ stdout. The existence of the directory is ensured directly as a host_prep_task.

  • Support setting values for the cephfs_volume_mode manila parameter via the THT parameter ManilaCephFSCephVolumeMode. This controls the POSIX rwx mode of the cephfs volumes, snapshots, and groups of these that back corresponding manila resources. The default value for ManilaCephFSCephVolumeMode is ‘0755’, backwards-compatible with the mode these objects had before it was settable.

  • Adds a new GlobalConfigExtraMapData parameter that can be used to inject global_config_settings hieradata into the deployment. Any values generated in the stack will override those passed in by the parameter value.

  • Add neutron-plugin-ml2-mlnx-sdn-assist as a containerized Neutron Core service template to support Mellanox SDN ml2 plugin.

  • Adds functionality to enable/disable KSM on compute nodes. Especially in the NFV use case, one wants to disable the service. Because KSM has little benefit on overcloud nodes, it is disabled by default but can be set via NovaComputeEnableKsm.

  • Added a new Barbican option BarbicanPkcs11AlwaysSetCkaSensitive. The default value is true.

  • Allow the Neutron DHCP agent to use broadcast in DHCP replies.

  • Add the ability to configure the cinder-volume service to run in active-active (A/A) mode, using the cluster name specified by the new CinderVolumeCluster parameter. Note that A/A mode requires the backend driver to support running A/A. Cinder’s RBD driver supports A/A, but most other cinder drivers currently do not.
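
    For example, to run cinder-volume A/A under a cluster name (the name is illustrative):

    parameter_defaults:
      CinderVolumeCluster: cinder-aa-cluster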

  • ContainerImagePrepareDebug is a parameter that allows running the tripleo container image prepare command with --debug. It is set to ‘False’ by default for backward compatibility.

  • Docker is deprecated in Stein and will be removed in Train. It is being replaced by Podman and Buildah.

  • Deprecated services now live in deployment/deprecated directory.

  • The baremetal ML2 mechanism driver is enabled in the Networking service (neutron) in the overcloud by default when the Baremetal service (ironic) is enabled. Previously the user would have to enable this driver manually by overriding the NeutronMechanismDrivers parameter.

  • Add the new parameter ‘GlanceInjectMetadataProperties’ to add metadata properties to be injected into images. Add the new parameter ‘GlanceIgnoreUserRoles’ to specify the names of user roles to be ignored when injecting metadata properties into an image.

  • Add support for native TLS encryption on NBD for disk migration

    The NBD protocol previously ran in clear text, offering no security protection for the data transferred, unless it was tunnelled over some external transport like SSH. Such tunnelling is inefficient and inconvenient to manage. Support for TLS was added to the NBD clients and servers provided by QEMU. In the tls-everywhere use case we want to take advantage of this feature to create the certificates and configure qemu to use NBD TLS.

  • The RabbitMQ management plugin (rabbitmq_management) is now enabled. By default RabbitMQ management is available on port 15672 on the localhost (127.0.0.1) interface.

  • OVS and neutron now support endpoint creation on IPv6 networks. New network--v6-all.j2.yaml environment files are added to allow tenant networks to be created on IPv6 addresses. Note that these files are only to be used for new deployments, and not during update or upgrade. network_data.yaml files are also edited to reflect the same.

  • Add a container for the Swift container sharder service. This service is required for sharding containers. It is disabled by default and can be enabled by setting SwiftContainerSharderEnabled to true.

  • The Shared File Systems service (manila) API has been switched to running behind httpd, and it now supports configuring TLS options.

  • This patch switches the default mechanism driver for neutron from openvswitch to OVN. DVR is now enabled by default, which in the case of OVN means that FIP N/S traffic is distributed; E/W traffic is distributed in any case.

  • When deploying mistral-executor, create a tripleo-admin user on the undercloud for running external deploy tasks with ansible.

  • Add new CinderNetappPoolNameSearchPattern parameter, which controls which Netapp FlexVol volumes represent pools in Cinder.

Known Issues

  • Add OvnDbInternal to EndpointMap and use it for ovn_db_host

    OVN controller/metadata use the ovn_dbs_vip hiera key to configure the central OVN DB. This key is not available on split control plane or multi-cell setups, and therefore installation fails.

    With this change, a new entry named OvnDbInternal is created in the EndpointMap. It can then be exported from an overcloud stack and used as an input for the cell stack.

    The information from the EndpointMap is used for ovn-metadata and ovn-controller as the ovn_db_host information in puppet-tripleo

Upgrade Notes

  • Non-lifecycle stack actions like stack check and cancel update are now disabled for the undercloud. Stack check has yet to be migrated to the Heat convergence architecture, and cancel update is not recommended for the overcloud. Both are disabled by adding the required Heat policy for the undercloud. The ‘overcloud update abort’ wrapper for stack cancel update was dropped a few releases ago.

  • Installing haproxy services on baremetal is no longer supported.

  • Installing MySQL Server services on baremetal is no longer supported.

  • Installing Redis services on baremetal is no longer supported.

  • Installing sahara services on baremetal is no longer supported.

  • During an upgrade from ML2/OVS, remember to provide an environment file similar to environments/updates/update-from-ml2-ovs-from-rocky.yaml. Also remember to provide this file first, to avoid having custom modifications overwritten by the upgrade environment file. If you do not provide such a file during an upgrade from ML2/OVS, you will see an error and a notification about problems with mutually exclusive network drivers.

Deprecation Notes

  • The duplicate environment files environments/neutron-sriov.yaml and environments/neutron-ovs-dpdk.yaml are deprecated.

  • Xinetd tripleo service is no longer managed. The xinetd service hasn’t been managed since the switch to containers. OS::TripleO::Services::Xinetd is disabled by default and dropped from the roles. The OS::TripleO::Services::Xinetd will be removed in Train.

  • docker_puppet_tasks is deprecated in favor of container_puppet_tasks. docker_puppet_tasks is still working in Stein but will be removed in Train.

  • The NodeDataLookup parameter type was changed from string to json.

  • Removed ‘glance-registry’ related changes, since it has been deprecated in glance and is no longer used.

  • The TLS-related environment files in the environments/ directory were deleted. The ones in environments/ssl/ are preferred instead. Namely, the following files: enable-internal-tls.yaml, enable-tls.yaml, inject-trust-anchor-hiera.yaml, inject-trust-anchor.yaml, no-tls-endpoints-public-ip.yaml, tls-endpoints-public-dns.yaml, tls-endpoints-public-ip.yaml, tls-everywhere-endpoints-dns.yaml.

  • TripleO UI is deprecated in Stein and will be removed in Train.

  • The CinderNetappStoragePools parameter is deprecated in favor of the new CinderNetappPoolNameSearchPattern parameter. The previously deprecated CinderNetappEseriesHostType parameter has been removed.

  • The /var/lib/docker-puppet is deprecated and can now be found under /var/lib/container-puppet. We don’t have Docker anymore so we try to avoid confusion in the directories. The directory still exists but a readme file points to the right directory.

Bug Fixes

  • It is now possible for temporary containers inside THT to test if they are being run as part of a minor update, by checking if the TRIPLEO_MINOR_UPDATE environment variable is set to ‘true’ (said containers need to export it to the container explicitly); see <service>_restart_bundles for examples.

  • When setting up TLS everywhere, some deployers may not have their FreeIPA server in the ctlplane, causing the ipaclient registration to fail. We move this registration to host-prep tasks and invoke it using Ansible. At this point, all networks should be set up and the FreeIPA server should be accessible.

  • Bug 1784967: the invalid JSON in NodeDataLookup error message should be more helpful.

  • e0e885b8ca3332e0815c537a32c564cac81f7f7e moved the cellv2 discovery from control plane to compute services. In case the computes don’t have access to the external API, this task would fail. Switched nova_cell_v2_discover_host.py to use the internal API.

Other Notes

  • The parameter ConfigDebug now also controls the paunch logs verbosity.

  • Octavia may be deployed for a standalone cloud, which has no Nova services available for amphorae SSH key management. For that case, the parameter OctaviaAmphoraSshKeyFile must be defined by the user. Otherwise, it takes an empty value, with the usual meaning for overcloud deployments, and Nova will be used to create a key pair for Octavia instead.

  • The utility script tools/merge-new-params-nic-config-script.py previously used the Controller role by default if the --role-name argument was not specified. The argument (--role-name) no longer has a default. It is now mandatory to specify the role when merging new parameters into existing network configuration templates.

  • Remove the NeutronExternalNetworkBridge Heat parameter. The option external_network_bridge is deprecated and should not be used in Neutron.

10.3.0

New Features

  • Added code in the barbican-api.yaml template to allow barbican to be configured to run with either an ATOS or Thales HSM back-end. Also added environment files with all the required variables. The added code installs and configures the client software on the barbican nodes, generates the required keys for the PKCS#11 plugin, and configures barbican correctly. For the Thales case, it also contacts the RFS server to add the new clients to the HSM.

  • Add new CinderNfsSnapshotSupport parameter, which controls whether cinder’s NFS driver supports snapshots. The default value is True.

  • Composable Networks now support creating L3 routed networks. L3 networks use multiple L2 network segments and multiple ip subnets. In addition to the base subnet automatically created for any composable network, additional subnets can be defined under the subnets key for each network in the data file (network_data.yaml) used by composable networks. Please refer to the network_data_subnets_routed.yaml file for an example demonstrating how to define composable L3 routed networks.

  • For composable roles it is now possible to control which subnet in an L3 routed network will host network ports for the role. This is done by setting the subnet for each network in the role definition (roles_data.yaml). For example:

    - name: <role_name>
      networks:
        InternalApi:
          subnet: internal_api_leaf2
        Tenant:
          subnet: tenant_leaf2
        Storage:
          subnet: storage_leaf2
    
  • To enable control over which subnet is used for virtual IPs on L3 routed composable networks, the new parameter VipSubnetMap was added. It allows the user to override the subnet where the VIP port should be hosted. For example:

    parameter_defaults:
      VipSubnetMap:
        ctlplane: ctlplane-leaf1
        InternalApi: internal_api_leaf1
        Storage: storage_leaf1
        redis: internal_api_leaf1
    
  • New roles for DistributedCompute and DistributedComputeHCI are added. These roles match the existing Compute roles, but also include the CinderVolume service. The CinderVolume service is included using the BlockStorageCinderVolume service name so that it can be mapped independently from CinderVolume.

  • Add the new parameter ‘GlanceImageImportPlugins’ to enable plugins used by the image import process. Add the parameter ‘GlanceImageConversionOutputFormat’ to provide the desired output format for the image conversion plugin.

  • Allow outputting HAProxy logs to a dedicated file.

  • Adds a new HAProxySyslogFacility parameter.

  • Add the parameter NovaHWMachineType, which allows explicitly setting machine_type across all compute nodes during deployment, to allow migration compatibility from compute nodes with a higher host OS version to compute nodes with a lower host OS version.
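
    A hedged sketch, assuming the value takes arch=machine_type pairs (the machine type shown is illustrative):

    parameter_defaults:
      NovaHWMachineType: 'x86_64=q35'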

  • Adds support to configure disjoint address pools for Ironic Inspector.

    When Inspector is deployed as an HA service, disjoint address pools should be served by the DHCP instances to avoid address conflict issues. The disjoint address pools are configured by using the hostname (short form) as the key and then passing the list of ranges for each host. For example:

    parameter_defaults:
      IronicInspectorSubnets:
        overcloud-ironic-0:
          - ip_range: 192.168.24.100,192.168.24.119
          - ip_range: 192.168.25.100,192.168.25.119
            netmask: 255.255.255.0
            gateway: 192.168.25.254
            tag: subnet1
        overcloud-ironic-1:
          - ip_range: 192.168.24.120,192.168.24.139
          - ip_range: 192.168.25.120,192.168.25.139
            netmask: 255.255.255.0
            gateway: 192.168.25.254
            tag: subnet1
    
  • The network data for composable networks has been extended to enable configuration of the maximum transmission unit (MTU) that is guaranteed to pass through the data path of the segments in the network. The MTU property is set on the neutron networks in the undercloud. The MTU information is used in the nic-config templates, so that overcloud node networking is configured with the correct MTU settings.

  • Nova now allows the use of templated urls in the database and mq connections, which allows static configuration elements to be applied to the urls read from the database per node. This should be a simpler and less obscure method of configuring things like the per-node bind_address necessary for director’s HA arrangement. This patch addresses the templated DB urls as part 1. Nova support was added here - https://review.openstack.org/578163

  • Nova now allows the use of templated urls in the database and mq connections, which allows static configuration elements to be applied to the urls read from the database per node. This should be a simpler and less obscure method of configuring things like the per-node bind_address necessary for director’s HA arrangement. This patch addresses the templated transport urls as part 2. Nova support was added here - https://review.openstack.org/578163

  • The MTU defined for the Tenant network in network_data is now used to set neutron’s global_physnet_mtu unless the NeutronGlobalPhysnetMtu parameter is used to override the default. (Neutron uses the global_physnet_mtu value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value.)

  • Add a new TunedCustomProfile parameter, which may contain a string in INI format describing a custom tuned profile. Also provide a new environment file for users of hyperconverged Ceph deployments using the Ceph filestore storage backend. The tuned profile is based on heavy I/O load testing. The provided environment file creates /etc/tuned/ceph-filestore-osd-hci/tuned.conf and sets this tuned profile to be active. It is not intended for use with Ceph bluestore.
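
    A minimal sketch of a custom profile passed as an INI-formatted string (the profile contents are illustrative only):

    parameter_defaults:
      TunedCustomProfile: |
        [main]
        summary=An example custom profile
        [sysctl]
        vm.dirty_ratio=10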

Known Issues

  • Fix misnaming of service in firewall rule for Octavia Health Manager service.

Upgrade Notes

  • Cinder’s NFS driver does not support snapshots unless the feature is explicitly enabled (this policy was chosen to ensure compatibility with very old versions of libvirt). The CinderNfsSnapshotSupport default value is True, and so the new default behavior enables NFS snapshots. This change is safe because it just enables a capability (i.e. snapshots) that other cinder drivers generally provide.

  • Deployers that used a resource_registry override in their environment to add networks to roles, without also using a custom roles data file, must create a custom roles data file, add the additional network(s), and use this file when upgrading.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
    

    Warning

    Since resources are no longer added to the plan unless the network is specified in the role, the resource_registry override alone is no longer sufficient.

  • Deployments using custom names for subnets must also set the subnet to use for the roles used in the deployment. I.e. if the <NetworkName>SubnetName parameter was used to define a non-default subnet name for any network, the role definition (roles_data.yaml) and the VipSubnetMap parameter must use the same value.

    Warning

    The update will fail if <NetworkName>SubnetName was used to set a custom subnet name and the role definition and/or the VipSubnetMap is not set to match the custom subnet name.

  • Installing Aodh services on baremetal is no longer supported.

  • Installing glance on Baremetal is no longer supported

  • Installing Ironic on baremetal is no longer supported

  • Installing Keepalived service on baremetal is no longer supported.

  • Deploying keystone on baremetal is no longer supported.

  • Installing memcached services on baremetal is no longer supported.

  • Installing zaqar on baremetal is no longer supported

  • Tags are now used on the ctlplane network to store the list of cidrs associated with the subnets on the ctlplane network. Users of Deployed Server (pre-provisioned servers) need to update the port map (DeployedServerPortMap) to include the required data. For example:

    parameter_defaults:
      DeployedServerPortMap:
        controller0-ctlplane:
          fixed_ips:
            - ip_address: 192.168.24.9
          subnets:
            - cidr: 192.168.24.0/24
          network:
            tags:
              - 192.168.24.0/24
              - 192.168.25.0/24
        compute0-ctlplane:
          fixed_ips:
            - ip_address: 192.168.25.8
          subnets:
            - cidr: 192.168.25.0/24
          network:
            tags:
              - 192.168.24.0/24
              - 192.168.25.0/24
    
  • Prior to upgrading, any custom nic-config templates must have the MTU-associated parameters introduced in this release added. As an example, the following must be added to all nic-config templates when network isolation is used:

    ControlPlaneMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the network.
        (The parameter is automatically resolved from the ctlplane network's mtu attribute.)
      type: number
    StorageMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        Storage network.
      type: number
    StorageMgmtMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        StorageMgmt network.
      type: number
    InternalApiMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        InternalApi network.
      type: number
    TenantMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        Tenant network.
      type: number
    ExternalMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        External network.
        type: number
    ManagementMtu:
      default: 1500
      description: The maximum transmission unit (MTU) size(in bytes) that is
        guaranteed to pass through the data path of the segments in the
        Management network.
      type: number
    
  • The hiera bootstrap_nodeid_ip key has been replaced with per-service SERVICE_bootstrap_node_ip where SERVICE is the service_name from the composable service templates. If any out-of-tree services use this key they will need to adjust to the new interface on upgrade.

  • We no longer run the upgrade_tasks Ansible tasks that stop systemd services, since all services are now containerized. However, we decided to keep the tasks that remove the RPMs: in case some deployments didn’t clean them up in previous releases, they can still do it now. These tasks were useful in Rocky, when we converted the undercloud from baremetal to containers, but in Stein they are not useful anymore. They were actually breaking upgrades with Podman, as containers are now seen by systemd and these tasks conflict with the way containers are managed in Paunch.

Deprecation Notes

  • For deploying with hardware offloading, the “environments/ovs-hw-offload.yaml” file should be used alongside the neutron, opendaylight or ovn environment files; there is no need for separate files as before.

Critical Issues

  • Networks not specified for roles in roles data (roles_data.yaml) no longer have Heat resources created. It is now mandatory that custom roles are used when non-default networks are used for a role.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
    

    Note

    The resource_registry override was the only requirement prior to the introduction of Composable Networks in the Pike release.

    Since Pike a custom role would ideally be used when adding networks to roles, but documentation and other guides may not have been properly updated and only mention the resource_registry override.

Bug Fixes

  • The recommended API for checking when OpenDaylight is up and ready has changed. Use the new ODL Infrautils diagstatus REST API endpoint instead of the old netvirt:1 endpoint.

  • The NtpServer default set now includes multiple pool.ntp.org hosts to ensure that the time can be properly synced during the deployment. Having only a single timesource can lead to deployment failures if the time source is unavailable during the deployment. It is recommended that you either set multiple NtpServers or use the NtpPool configuration to ensure that enough time sources are available for the hosts. Note that the NtpPool configuration is only available when using chrony. See LP#1806521

  • Novajoin now logs to /var/log/container in the same way other TripleO container services do. See Bug: 1796658.

  • In other sections we already use the internal endpoints for authentication urls. With this change, the auth_uri in the neutron section is moved from KeystoneV3Admin to KeystoneV3Internal.

  • With tls-everywhere enabled, connecting to the keystone endpoint fails to retrieve the URL for the placement endpoint, as the certificate cannot be verified. While certificate verification is disabled when checking the placement endpoint later, it is not disabled when communicating with keystone. This change disables certificate verification for communication with keystone.

  • The /opt/opendaylight/data folder is mounted on the host. This folder contains information about installed features in ODL. Mounting this folder in the container makes ODL believe that features are installed, so it does not generate the data required for a proper boot. Thus this folder is no longer mounted to the host, so that ODL can boot properly on restart.

  • CephOSD/Compute nodes crash under memory pressure unless a custom tuned profile is used (bug 1800232).

Other Notes

  • HostPrepConfig has been removed. The resource isn’t used anymore. It was using the old way of running Ansible via Heat, which we no longer need with config-download being the default since Rocky.

  • MongoDB hasn’t been supported since Pike, it’s time to remove the deployment files. Starting in Stein, it’s not possible to deploy MongoDB anymore.

10.2.0

New Features

  • Add CinderStorageAvailabilityZone parameter that configures cinder’s DEFAULT/storage_availability_zone. The default value of ‘nova’ matches cinder’s own default value.

    Add several CinderXXXAvailabilityZone parameters, where XXX is any of the cinder volume service’s storage backends. The parameters are optional, and when set they override the “backend_availability_zone” for the corresponding backend.
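
    For example, assuming an RBD backend whose parameter follows the CinderXXXAvailabilityZone pattern described above (the names are illustrative):

    parameter_defaults:
      CinderStorageAvailabilityZone: nova
      CinderRbdAvailabilityZone: ceph-az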

  • Octavia default timeouts for backend member and frontend client can be set by params exposed in template:

    • OctaviaTimeoutClientData: Frontend client inactivity timeout

    • OctaviaTimeoutMemberConnect: Backend member connection timeout

    • OctaviaTimeoutMemberData: Backend member inactivity timeout

    • OctaviaTimeoutTcpInspect: Time to wait for TCP packets for content inspection

    The value for all of these options is expected to be in milliseconds.
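
    For example, setting all four timeouts in milliseconds (the values are illustrative):

    parameter_defaults:
      OctaviaTimeoutClientData: 50000
      OctaviaTimeoutMemberConnect: 5000
      OctaviaTimeoutMemberData: 50000
      OctaviaTimeoutTcpInspect: 0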

  • The default timesync service has changed from NTP to Chrony.

  • Added Dell EMC SC multipath support. This change adds support for cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer. Added a new parameter, CinderDellScMultipathXfer.

  • Add the GlanceCacheEnabled parameter, which will enable the glance image cache by setting the flavor value to ‘keystone+cachemanagement’ in glance-api.conf.

  • It is now possible to enable support for routed networks in the undercloud when the undercloud is updated or upgraded. To enable support for routed networks set enable_routed_networks to True in undercloud.conf and re-run the undercloud installer.

  • ContainerCli allows ‘docker’ (deprecated) and ‘podman’ for the Neutron L3/DHCP and OVN metadata rootwrap containers managed by the agents. The parameters OVNWrapperDebug and NeutronWrapperDebug (defaulting to False) allow logging debug messages for the wrapper scripts managing the rootwrap containers. This is also controlled by the global Debug setting.

Upgrade Notes

  • The swift worker count parameter defaults have been changed from ‘auto’ to 0. If not provided, the puppet module default will be used instead, and the number of server processes will be limited to ‘12’.

  • Octavia amphora images are now expected to be located in the directory /usr/share/openstack-octavia-amphora-images on the undercloud node, for uniformity across different OpenStack distributions.

Deprecation Notes

  • NTP timesync has been deprecated in favor of Chrony and will be removed in T.

  • The environments/docker.yaml is no longer necessary as the default registry points to containerized services too. The environment file is now deprecated (and emptied) and will be removed in the future.

  • The Fluentd service is deprecated and it will be removed in future releases. It will be replaced by rsyslog. Rsyslog is not integrated yet, so Fluentd will be an option as long as rsyslog is not integrated.

  • The Sensu service will be removed in future releases.

  • The dynamic tripleo firewall_rules, haproxy_endpoints and haproxy_userlists that are configured with dots are deprecated with the update to puppet 5. They will no longer work and must be switched to the colon notation to continue to function. For example, tripleo.core.firewall_rules must be converted to tripleo::core::firewall_rules. Similarly, the haproxy endpoints and userlists that are dynamic using dots must also be converted to use colons.

  • Ensure Octavia amphora image files are placed in directory /usr/share/openstack-octavia-amphora-images on the undercloud node.

  • The parameter DockerAdditionalSockets is deprecated. No sockets are expected to be bind-mounted for podman, so the parameter only works for the docker runtime.

Bug Fixes

  • When masquerading was enabled on the Undercloud, the networks 192.168.24.0/24 and 10.0.0.0/24 were always masqueraded. (See bug: 1794729.)

  • Add customized libvirt-guests unit file to properly shutdown instances

    If resume_guests_state_on_host_boot is set in nova.conf, instances need to be shut down using libvirt-guests after the nova_compute container is shut down. Therefore we need a customized libvirt-guests unit file which 1) removes the dependency on (non-container) libvirt, so that it does not get started as a dependency and make the nova_libvirt container fail, 2) adds a dependency on docker-related services, so that a shutdown of the nova_compute container is possible on system reboot, 3) stops the nova_compute container, and 4) shuts down the VMs.

    This is a missing part of Bug 1778216.

  • The Nova metadata API now runs via httpd WSGI in its own service. Therefore we can clean up the ports opened by the nova api service.

  • Fix an issue where Octavia amphora images were not accessible during overcloud deployment.

  • An empty /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/karaf directory on the host empties /opt/opendaylight/etc/opendaylight/karaf inside the ODL container because of the mount. This leads to deployment failure on redeploy. Delete the empty karaf directory on the host before redeploying.

  • The previous installation method for the undercloud installed some extra OpenStack clients during the installation. Since we did not have an equivalent way in the containerized version of the undercloud, we’ve added a new TripleO ‘service’ to install all of the OpenStack clients on a system. OS::TripleO::Services::OpenStackClients has been added which can be added to a custom role to install the clients. By default, only the Undercloud and Standalone roles will have this available.

  • The deployed-server get-occ-config.sh script now allows $SSH_OPTIONS to be overridden.

  • Neutron/OVN rootwrap containers are managed by the agents and will no longer be deleted when the parent container restarts.

10.1.0

New Features

  • Add support for ODL deployment on IPv6 networks.

  • Adds the possibility to set ‘neutron::agents::ml2::ovs::tunnel_csum’ via NeutronOVSTunnelCsum in Heat templates. This parameter sets or unsets the tunnel header checksum on outgoing IP packets carrying GRE/VXLAN tunnels in the OVS agent.

  • Add nova file_backed_memory and memory_backing_dir support for qemu.conf

    The libvirt driver now allows utilizing file backed memory for qemu/KVM virtual machines, via a new configuration attribute [libvirt]/file_backed_memory, defaulting to 0 (disabled).

    [libvirt]/file_backed_memory specifies the available capacity in MiB for file backed memory, at the directory configured for memory_backing_dir in libvirt’s qemu.conf. When enabled, the libvirt driver will report the configured value for the total memory capacity of the node, and will report used memory as the sum of all configured guest memory.

    Running Nova with file_backed_memory requires libvirt version 4.0.0 and qemu version 2.6.0.

  • Add a provision to set Java options, like heap size configuration, in ODL.

  • Add support for libvirt volume_use_multipath, the ability to use a multipath connection for iSCSI or FC volumes. Volumes can be connected in libvirt as multipath devices. Adds the new parameter “NovaLibvirtVolumeUseMultipath”.

Upgrade Notes

  • The Dell EMC SC configuration option excluded_domain_ip has been deprecated and will be removed in a future release. Deployments should now migrate to the option excluded_domain_ips for equivalent functionality.

  • The online part of the service upgrades (online data migrations) is now run using:

    openstack overcloud external-upgrade run --tags online_upgrade
    

    or per-service like:

    openstack overcloud external-upgrade run --tags online_upgrade_nova
    openstack overcloud external-upgrade run --tags online_upgrade_cinder
    openstack overcloud external-upgrade run --tags online_upgrade_ironic
    

    Consult the upgrade documentation regarding the full upgrade workflow.

  • The environment file puppet-pacemaker.yaml has been removed, make sure that you no longer reference it. The docker-ha.yaml file should have already been used in place of puppet-pacemaker.yaml during upgrade from Ocata to Pike. The environment file puppet-pacemaker-no-restart.yaml has been removed too, it was only used in conjunction with puppet-pacemaker.yaml.

  • The environment file deployed-server-pacemaker-environment.yaml has been removed, make sure that you no longer reference it. Its current contents result in no tangible difference from the default resource registry state, so removing the file should not change the overcloud.

  • Remove the zaqar websocket service when upgrading from a non-containerized environment.

Deprecation Notes

  • All references to the logging_source output in the services templates have been removed, since it’s been unused for a couple of releases now.

Bug Fixes

  • Fixed an issue where if Octavia API or Glance API were deployed away from the controller node with internal TLS, the service principals wouldn’t be created.

  • Nova Scheduler added worker support in Rocky. Added NovaSchedulerWorkers to allow it to be configurable.

  • Make sure all Swift services are disabled after upgrading to a containerized undercloud.

  • With TripleO we have configured a separate DB for placement on the undercloud and overcloud since the beginning. But the placement_database config options were reverted with https://review.openstack.org/#/c/442762/1, which means that so far, even if the config option was set, it was not used. With Rocky the options were introduced again, which is not a problem for a freshly installed environment, but is for upgrades from Queens to Rocky. We should use the same DB for both fresh deployments on Rocky and upgrades to Rocky, before we switch to the new DB as part of the extraction of placement.

  • SELinux can be configured on the Standalone deployment by setting SELinuxMode.

10.0.0

New Features

  • Allow plugins that support it to create VLAN transparent networks. The vlan_transparent option determines whether plugins that support it may create VLAN transparent networks.

  • We now provide an example set of environment files that can be used to deploy a single all-in-one standalone cloud node via the ‘openstack overcloud deploy’ and ‘openstack tripleo deploy’ (experimental) commands. For the overcloud deployment, use environments/standalone/standalone-overcloud.yaml. For the tripleo deploy deployment, use environments/standalone/standalone-tripleo.yaml.

  • Now it’s possible to define the number of API and RPC workers separately for neutron-api service. This is good for certain network backends such as OVN that don’t require RPC communication.

  • Usage of eventlet for all the WSGI-run nova services is deprecated, including nova-api and nova-metadata-api. See https://review.openstack.org/#/c/549510/ for more details. With this change we move nova-metadata to run via httpd WSGI.

  • Add OctaviaEventStreamDriver parameter to specify which driver to use for syncing Octavia and Neutron LBaaS databases.

Deprecation Notes

  • The environments/standalone.yaml has been deprecated and should be replaced with environments/standalone/standalone-tripleo.yaml when using the ‘openstack tripleo deploy’ command.

  • All references to the logging_group output in the services templates have been removed, since it’s been unused for a couple of releases now.

Bug Fixes

  • An issue causing undercloud installer re-runs (or updates) to fail, because VIPs were lost when the networking configuration was changed, has been fixed. See Bug: 1791238.

  • Fixes an issue in the legacy port_from_pool templates for predictable IP addressing. Prior to this fix, using these templates would fail with the following error: Referenced Attribute (%network_name%%Port host_routes) is incorrect. (Bug: 1792968.)

  • Ping the default gateways before the controllers in the validation script. In certain situations when using IPv6, it is necessary to establish connectivity to the router before other hosts.

  • The baremetal API version is no longer hardcoded in stackrc. This allows easy access to new features in ironicclient as they are introduced. If you need to use a fixed API version, set the OS_BAREMETAL_API_VERSION environment variable.

Other Notes

  • A new parameter called ‘RabbitAdditionalErlArgs’, specifying additional arguments for the Erlang VM, has been added. It now defaults to “‘+sbwt none’” (http://erlang.org/doc/man/erl.html#+sbwt). This threshold determines how long schedulers should busy-wait when running out of work before going to sleep. By setting it to none, we let the Erlang threads go to sleep right away when they do not have any work to do.

  • The common tasks in deploy-steps-tasks.yaml that are common to all roles are now tagged with one of: host_config, container_config, container_config_tasks, container_config_scripts, or container_startup_configs.

  • The step plays in deploy-steps.j2 (which generates the deploy_steps_tasks.yaml playbook) are now tagged with step[1-5] so that they can run individually if needed.