Prepare the target hosts

[Figure: installation workflow, target hosts stage]

Configuring the operating system

This section describes the installation and configuration of operating systems for the target hosts, as well as deploying SSH keys and configuring storage.

Installing the operating system

Install one of the following supported operating systems on the target host:

  • Ubuntu server 18.04 (Bionic Beaver) LTS 64-bit

  • Ubuntu server 20.04 (Focal Fossa) LTS 64-bit

  • Debian 10 64-bit

  • CentOS 8 64-bit

  • openSUSE 15.X 64-bit

Configure at least one network interface to access the Internet or suitable local repositories.

We recommend adding the Secure Shell (SSH) server packages to the installation on target hosts that do not have local (console) access.

Note

We also recommend setting your locale to en_US.UTF-8. Other locales might work, but they are not tested or supported.
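
On systemd-based hosts, one way to set the locale is with localectl, for example:

    # localectl set-locale LANG=en_US.UTF-8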

Configure Ubuntu

  1. Update the package source lists:

    # apt update
    
  2. Upgrade the system packages and kernel:

    # apt dist-upgrade
    
  3. Reboot the host.

  4. Ensure that the kernel version is 3.13.0-34-generic or later:

    # uname -r
    
  5. Install additional software packages:

    # apt install bridge-utils debootstrap ifenslave ifenslave-2.6 \
      lsof lvm2 chrony openssh-server sudo tcpdump vlan python3
    
  6. Install the kernel extra package if one exists for your kernel version:

    # apt install linux-image-extra-$(uname -r)
    
  7. Add the appropriate kernel modules to the /etc/modules file to enable VLAN and bond interfaces:

    # echo 'bonding' >> /etc/modules
    # echo '8021q' >> /etc/modules
    
  8. Configure Network Time Protocol (NTP) in /etc/chrony/chrony.conf to synchronize with a suitable time source (see the example configuration after this list) and restart the service:

    # service chrony restart
    
  9. Reboot the host to activate the changes and use the new kernel.
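
The contents of /etc/chrony/chrony.conf depend on your environment. As a minimal sketch (file contents, not a command), a single pool directive pointing at the public NTP pool is enough for many deployments; replace it with your preferred time sources:

    pool pool.ntp.org iburst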

Configure CentOS

  1. Upgrade the system packages and kernel:

    # dnf upgrade
    
  2. Disable SELinux. Edit /etc/sysconfig/selinux and change SELINUX=enforcing to SELINUX=disabled.

    Note

    SELinux enabled is not currently supported in OpenStack-Ansible for CentOS/RHEL due to a lack of maintainers for the feature.

  3. Reboot the host.

  4. Ensure that the kernel version is 3.10 or later:

    # uname -r
    
  5. Install additional software packages:

    # dnf install iputils lsof lvm2 chrony \
      openssh-server sudo tcpdump python3
    
  6. Add the appropriate kernel modules to a file in /etc/modules-load.d to enable VLAN and bond interfaces (see the verification example after this list):

    # echo 'bonding' >> /etc/modules-load.d/openstack-ansible.conf
    # echo '8021q' >> /etc/modules-load.d/openstack-ansible.conf
    
  7. Configure Network Time Protocol (NTP) in /etc/chrony.conf to synchronize with a suitable time source and start the service:

    # systemctl enable chronyd.service
    # systemctl start chronyd.service
    
  8. (Optional) Reduce the kernel log level by changing the printk value in your sysctls:

    # echo "kernel.printk='4 1 7 4'" >> /etc/sysctl.conf
    
  9. Reboot the host to activate the changes and use the new kernel.
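
To confirm that the bonding and 8021q modules load from the new modules-load.d file, you can apply the configuration and inspect the running kernel, for example:

    # systemctl restart systemd-modules-load.service
    # lsmod | grep -E 'bonding|8021q'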

Configure openSUSE

  1. Upgrade the system packages and kernel:

    # zypper up
    
  2. Reboot the host.

  3. Ensure that the kernel version is 4.4 or later:

    # uname -r
    
  4. Install additional software packages:

    # zypper install bridge-utils iputils lsof lvm2 \
      chrony openssh sudo tcpdump python3
    
  5. Add the appropriate kernel modules to the /etc/modules-load.d file to enable VLAN and bond interfaces:

    # echo 'bonding' >> /etc/modules-load.d/openstack-ansible.conf
    # echo '8021q' >> /etc/modules-load.d/openstack-ansible.conf
    
  6. Configure Network Time Protocol (NTP) in /etc/chrony.conf to synchronize with a suitable time source and start the service:

    # systemctl enable chronyd.service
    # systemctl start chronyd.service
    
  7. Reboot the host to activate the changes and use the new kernel.

Configure SSH keys

Ansible uses SSH to connect from the deployment host to the target hosts.

  1. Copy the contents of the public key file on the deployment host to the /root/.ssh/authorized_keys file on each target host.

  2. Test public key authentication from the deployment host by using SSH to connect to each target host. If SSH provides a shell without prompting for a password, key-based authentication is working, as shown in the example below.
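
Assuming a key pair already exists on the deployment host and the target host is reachable as infra1 (a placeholder host name), both steps can be performed from the deployment host, for example:

    # ssh-copy-id -i /root/.ssh/id_rsa.pub root@infra1
    # ssh root@infra1 hostname

If the second command prints the target host name without prompting for a password, key-based authentication is working.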

For more information about how to generate an SSH key pair, as well as best practices, see GitHub’s documentation about generating SSH keys.

Important

OpenStack-Ansible deployments require the presence of a /root/.ssh/id_rsa.pub file on the deployment host. The contents of this file are inserted into an authorized_keys file for the containers, which is a necessary step for the Ansible playbooks. You can override this behavior by setting the lxc_container_ssh_key variable to the public key for the container.
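
If you need the override, it is a single variable in your user_variables.yml (typically /etc/openstack_deploy/user_variables.yml); the key material below is a placeholder, not a real key:

    lxc_container_ssh_key: "ssh-rsa AAAA...your-public-key... root@deployment-host"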

Configuring the storage

Logical Volume Manager (LVM) enables a single device to be split into multiple logical volumes that appear as physical storage devices to the operating system. The Block Storage (cinder) service and the LXC containers that optionally run the OpenStack infrastructure can use LVM for their data storage.

Note

OpenStack-Ansible automatically configures LVM on the nodes, and overrides any existing LVM configuration. If you had a customized LVM configuration, edit the generated configuration file as needed.

  1. To use the optional Block Storage (cinder) service, create an LVM volume group named cinder-volumes on the storage host. Specify a metadata size of 2048 when creating the physical volume. For example:

    # pvcreate --metadatasize 2048 physical_volume_device_path
    # vgcreate cinder-volumes physical_volume_device_path
    
  2. Optionally, create an LVM volume group named lxc for container file systems and set lxc_container_backing_store: lvm in user_variables.yml if you want to use LXC with LVM (see the sketch after this list). If the lxc volume group does not exist, containers are installed on the file system under /var/lib/lxc by default.
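
As a sketch of the optional lxc volume group, assuming a spare device at /dev/sdb (a placeholder device path) is dedicated to container storage:

    # pvcreate /dev/sdb
    # vgcreate lxc /dev/sdb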

Configuring the network

OpenStack-Ansible uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers. Target hosts need to be configured with the following network bridges:

+-------------+------------------------+--------------------------------------+
| Bridge name | Best configured on     | With a static IP                     |
+=============+========================+======================================+
| br-mgmt     | On every node          | Always                               |
+-------------+------------------------+--------------------------------------+
| br-storage  | On every storage node  | When component is deployed on metal  |
|             | On every compute node  | Always                               |
+-------------+------------------------+--------------------------------------+
| br-vxlan    | On every network node  | When component is deployed on metal  |
|             | On every compute node  | Always                               |
+-------------+------------------------+--------------------------------------+
| br-vlan     | On every network node  | Never                                |
|             | On every compute node  | Never                                |
+-------------+------------------------+--------------------------------------+

For a detailed reference of how the host and container networking is implemented, refer to OpenStack-Ansible Reference Architecture, section Container Networking.

For use case examples, refer to User Guides.
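
How the bridges themselves are created on the host depends on your distribution's networking tools and is covered in the references above. As an illustrative sketch only, a br-mgmt bridge on an Ubuntu host using netplan might look like the following; the bond0.10 VLAN subinterface and the 172.29.236.0/22 management network are assumptions, not requirements:

    # /etc/netplan/50-openstack-ansible.yaml (illustration only; assumes bond0 is
    # already defined in the bonds section of your netplan configuration)
    network:
      version: 2
      vlans:
        bond0.10:
          id: 10
          link: bond0
      bridges:
        br-mgmt:
          interfaces: [bond0.10]
          addresses: [172.29.236.11/22]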

Host network bridges information

  • LXC internal: lxcbr0

    The lxcbr0 bridge is required for LXC, but OpenStack-Ansible configures it automatically. It provides external (typically Internet) connectivity to containers with dnsmasq (DHCP/DNS) + NAT.

    This bridge does not directly attach to any physical or logical interfaces on the host because iptables handles connectivity. It attaches to eth0 in each container.

    The container network that the bridge attaches to is configurable in the openstack_user_config.yml file in the provider_networks dictionary.

  • Container management: br-mgmt

    The br-mgmt bridge provides management of and communication between the infrastructure and OpenStack services.

    The bridge attaches to a physical or logical interface, typically a bond0 VLAN subinterface. It also attaches to eth1 in each container.

    The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.

  • Storage: br-storage

    The br-storage bridge provides segregated access between OpenStack services and Block Storage devices.

    The bridge attaches to a physical or logical interface, typically a bond0 VLAN subinterface. It also attaches to eth2 in each associated container.

    The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.

  • OpenStack Networking tunnel: br-vxlan

    The br-vxlan bridge is required if the environment is configured to allow projects to create virtual networks using VXLAN. It provides the interface for virtual (VXLAN) tunnel networks.

    The bridge attaches to a physical or logical interface, typically a bond1 VLAN subinterface. It also attaches to eth10 in each associated container.

    The container network interface it attaches to is configurable in the openstack_user_config.yml file.

  • OpenStack Networking provider: br-vlan

    The br-vlan bridge provides the infrastructure for VLAN tagged or flat (no VLAN tag) networks.

    The bridge attaches to a physical or logical interface, typically bond1. It attaches to eth11 for VLAN type networks in each associated container. It is not assigned an IP address because it handles only layer 2 connectivity.

    The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.
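
As a sketch of how these bridge-to-interface mappings are expressed, a management network entry in the provider_networks dictionary of openstack_user_config.yml could look like the following; treat the keys and values as an example to adapt, not a complete reference:

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-mgmt"
            container_type: "veth"
            container_interface: "eth1"
            ip_from_q: "container"
            type: "raw"
            group_binds:
              - all_containers
              - hosts
            is_container_address: true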