Kuryr installation as a Kubernetes network addon

Building images

First you should build the kuryr-controller and kuryr-cni Docker images and push them to a registry accessible from the whole cluster.

To create the controller image on the local machine:

$ docker build -t kuryr/controller -f controller.Dockerfile .

To create the CNI daemonset image on the local machine:

$ ./tools/build_cni_daemonset_image [<cni_bin_dir>] [<cni_conf_dir>] [<enable_cni_daemon>]
  • cni_bin_dir - host directory where CNI binaries are located, defaults to /opt/cni/bin.
  • cni_conf_dir - host directory where CNI configuration is located, defaults to /etc/cni/net.d.
  • enable_cni_daemon - set to True if you want the CNI Docker image to run the CNI daemon by default. Defaults to False.
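For example, to build the image with the default host paths but with the CNI daemon enabled:

```shell
$ ./tools/build_cni_daemonset_image /opt/cni/bin /etc/cni/net.d True
```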

Note

You can override those build variables by passing env variables when running the Docker image. Supported variables are CNI_CONFIG_DIR_PATH, CNI_BIN_DIR_PATH and CNI_DAEMON.
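A sketch of such an override; the kuryr/cni image tag is an assumption here, use whatever tag you built the CNI image with:

```shell
$ docker run -e CNI_BIN_DIR_PATH=/usr/libexec/cni \
             -e CNI_CONFIG_DIR_PATH=/etc/cni/net.d \
             -e CNI_DAEMON=True \
             kuryr/cni
```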

Alternatively, you can remove imagePullPolicy: Never from the kuryr-controller Deployment and kuryr-cni DaemonSet definitions to use the pre-built controller and CNI images from Docker Hub. Those definitions will be generated in the next step.

Generating Kuryr resource definitions for Kubernetes

kuryr-kubernetes includes a tool that generates resource definitions that can be used to deploy Kuryr on Kubernetes. The script is located at tools/generate_k8s_resource_definitions.sh and takes up to three arguments:

$ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>]
  • output_dir - directory where to put yaml files with definitions.
  • controller_conf_path - path to custom kuryr-controller configuration file.
  • cni_conf_path - path to custom kuryr-cni configuration file (defaults to controller_conf_path).

If no path to a config file is provided, the script automatically generates a minimal configuration. However, some of the options need to be filled in by the user. You can do that either by editing the file after the ConfigMap definition is generated, or by providing your options as environment variables before running the script. The available variables are:

  • $KURYR_K8S_API_ROOT - [kubernetes]api_root (default: https://127.0.0.1:6443)
  • $KURYR_K8S_AUTH_URL - [neutron]auth_url (default: http://127.0.0.1/identity)
  • $KURYR_K8S_USERNAME - [neutron]username (default: admin)
  • $KURYR_K8S_PASSWORD - [neutron]password (default: password)
  • $KURYR_K8S_USER_DOMAIN_NAME - [neutron]user_domain_name (default: Default)
  • $KURYR_K8S_KURYR_PROJECT_ID - [neutron]kuryr_project_id
  • $KURYR_K8S_PROJECT_DOMAIN_NAME - [neutron]project_domain_name (default: Default)
  • $KURYR_K8S_PROJECT_ID - [neutron]k8s_project_id
  • $KURYR_K8S_POD_SUBNET_ID - [neutron_defaults]pod_subnet_id
  • $KURYR_K8S_POD_SG - [neutron_defaults]pod_sg
  • $KURYR_K8S_SERVICE_SUBNET_ID - [neutron_defaults]service_subnet_id
  • $KURYR_K8S_WORKER_NODES_SUBNET - [pod_vif_nested]worker_nodes_subnet
  • $KURYR_K8S_BINDING_DRIVER - [binding]driver (default: kuryr.lib.binding.drivers.vlan)
  • $KURYR_K8S_BINDING_IFACE - [binding]link_iface (default: eth0)
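As an illustration, the Neutron credentials and default resources could be supplied through the variables above before running the script; all values below are placeholders:

```shell
$ export KURYR_K8S_AUTH_URL="http://keystone.example.com/identity"
$ export KURYR_K8S_USERNAME="kuryr"
$ export KURYR_K8S_PASSWORD="secret"
$ export KURYR_K8S_POD_SUBNET_ID="<pod-subnet-uuid>"
$ export KURYR_K8S_SERVICE_SUBNET_ID="<service-subnet-uuid>"
$ ./tools/generate_k8s_resource_definitions /tmp
```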

Note

kuryr-daemon will be started in the CNI container. It uses os-vif and oslo.privsep to do the pod wiring tasks. By default it calls sudo to raise privileges, even if the container itself is privileged or sudo is missing from the container OS (e.g. the default CentOS 7 image). To prevent that, make sure the following options are set in the kuryr.conf used by kuryr-daemon:

[vif_plug_ovs_privileged]
helper_command=privsep-helper
[vif_plug_linux_bridge_privileged]
helper_command=privsep-helper

Those options prevent oslo.privsep from calling sudo. If you rely on the aforementioned script to generate the config files, they will be added automatically.

When using the ports pool functionality, we may want to keep the kuryr-controller not ready until the pools are populated with the existing ports. To achieve this, a readiness probe must be added to the kuryr-controller deployment. To add the readiness probe, in addition to the above environment variables or the kuryr-controller configuration file, an extra environment variable must be set:

  • $KURYR_USE_PORTS_POOLS - True (default: False)
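For instance, assuming the defaults for everything else:

```shell
$ KURYR_USE_PORTS_POOLS=True ./tools/generate_k8s_resource_definitions /tmp
```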

Example run:

$ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp

This should generate 4 files in your <output_dir>:

  • config_map.yml
  • service_account.yml
  • controller_deployment.yml
  • cni_ds.yml

Deploying Kuryr resources on Kubernetes

To deploy the files on your Kubernetes cluster run:

$ kubectl apply -f config_map.yml -n kube-system
$ kubectl apply -f service_account.yml -n kube-system
$ kubectl apply -f controller_deployment.yml -n kube-system
$ kubectl apply -f cni_ds.yml -n kube-system
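To verify that the rollout succeeded, a sketch; the Deployment name kuryr-controller matches what the generated definitions create:

```shell
$ kubectl -n kube-system rollout status deployment/kuryr-controller
$ kubectl -n kube-system get pods
```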

After successful completion:

  • kuryr-controller Deployment object, with a single replica, will get created in the kube-system namespace.
  • kuryr-cni gets installed as a DaemonSet object on all the nodes in the kube-system namespace.

To see kuryr-controller logs:

$ kubectl logs <pod-name>

NOTE: kuryr-cni produces no logs of its own; to debug failures, check the kubelet logs instead.
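One way to look up the controller pod name before fetching its logs (a sketch; the grep pattern relies on the pod name containing kuryr-controller, as produced by the generated Deployment):

```shell
$ POD=$(kubectl -n kube-system get pods -o name | grep kuryr-controller)
$ kubectl -n kube-system logs "$POD"
```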

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.