Kuryr Kubernetes Services Integration Design

Purpose

The purpose of this document is to present how Kubernetes Services are supported by the Kuryr integration and to capture the design decisions currently taken by the Kuryr team.

Overview

A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Service is a Kubernetes-managed API object. For Kubernetes-native applications, Kubernetes offers an Endpoints API that is updated whenever the set of Pods in a Service changes. For detailed information, please refer to the Kubernetes Service documentation. Kubernetes implements Services with the kube-proxy component that runs on each node.

Proposed Solution

A Kubernetes Service is, in essence, a load balancer across the Pods that match its selector. Kuryr supports Kubernetes Services by using the Neutron LBaaS service. The initial implementation is based on the OpenStack LBaaSv2 API, and is therefore compatible with any LBaaSv2 API provider.

In order to be compatible with Kubernetes networking, Kuryr-Kubernetes makes sure that Service load balancers have access to the Pods' Neutron ports. This may be affected once Kubernetes Network Policies are supported. Oslo versioned objects are used to keep translation details in the annotations of Kubernetes entities. This allows future changes to remain backward compatible.

Data Model Translation

A Kubernetes Service is mapped to an LBaaSv2 Load Balancer with associated Listeners and Pools. Service Endpoints are mapped to Load Balancer Pool members.
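The mapping above can be sketched as a small translation function. This is an illustrative sketch only: the dataclasses and the translate() helper are hypothetical simplifications, not the actual kuryr-kubernetes code, which uses Oslo versioned objects and the real LBaaSv2/Octavia API.

```python
from dataclasses import dataclass, field


@dataclass
class Member:
    """A Pool member, built from one Service endpoint address."""
    address: str
    port: int


@dataclass
class Pool:
    protocol: str
    members: list = field(default_factory=list)


@dataclass
class Listener:
    protocol: str
    port: int
    pool: Pool = None


@dataclass
class LoadBalancer:
    name: str
    vip: str
    listeners: list = field(default_factory=list)


def translate(service, endpoint_addresses):
    """Map a Service and its endpoint addresses to an LBaaSv2-style model.

    One Listener/Pool pair is created per Service port; each endpoint
    address becomes a Pool member on the port's targetPort.
    """
    lb = LoadBalancer(name=service["name"], vip=service["clusterIP"])
    for sp in service["ports"]:
        pool = Pool(protocol=sp["protocol"],
                    members=[Member(addr, sp["targetPort"])
                             for addr in endpoint_addresses])
        lb.listeners.append(Listener(sp["protocol"], sp["port"], pool))
    return lb
```

For example, a Service with clusterIP 10.0.0.10 exposing TCP port 80 to targetPort 8080 on two Pods yields one Load Balancer with one Listener, one Pool and two members.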

Kuryr Controller Impact

Three Kubernetes Event Handlers are added to the Controller pipeline.

  • ServiceHandler manages Kubernetes Service events. Based on the Service spec and metadata, it creates a KuryrLoadBalancer CRD, or updates the spec part of an existing CRD, with the details to be used for translation to the LBaaSv2 model, such as tenant-id, subnet-id, IP address and security groups.

  • EndpointsHandler is responsible for adding Endpoints subsets to the KuryrLoadBalancer CRD. If the Endpoints object is created before the Service, this handler creates the CRD with the Endpoints subsets; otherwise the existing CRD is updated.

  • KuryrLoadBalancerHandler manages KuryrLoadBalancer CRD events once the CRD has been created and filled with spec data. This handler is responsible for creating the needed Octavia resources according to the CRD spec and for updating the status field with information about the generated resources, such as LoadBalancer, LoadBalancerListener, LoadBalancerPool and LoadBalancerMembers.
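The interplay between the first two handlers can be sketched as follows. The class and method names here are hypothetical placeholders for illustration, not the real kuryr-kubernetes handlers; the in-memory store stands in for the Kubernetes API hosting the CRDs. The sketch shows the ordering property described above: the CRD is created by whichever handler runs first.

```python
class KuryrLoadBalancerStore:
    """Stand-in for the Kubernetes API that stores KuryrLoadBalancer CRDs."""

    def __init__(self):
        self.crds = {}

    def ensure(self, name):
        # Create an empty CRD on first access, regardless of which
        # handler (Service or Endpoints) observes its object first.
        return self.crds.setdefault(name, {"spec": {}, "status": {}})


class ServiceHandler:
    """Fills the CRD spec with translation details from the Service."""

    def __init__(self, store):
        self.store = store

    def on_present(self, service):
        crd = self.store.ensure(service["metadata"]["name"])
        crd["spec"].update(service["spec"])


class EndpointsHandler:
    """Adds the Endpoints subsets to the CRD spec."""

    def __init__(self, store):
        self.store = store

    def on_present(self, endpoints):
        crd = self.store.ensure(endpoints["metadata"]["name"])
        crd["spec"]["subsets"] = endpoints["subsets"]
```

A third handler (KuryrLoadBalancerHandler in the real pipeline) would then watch the store, provision the Octavia resources for CRDs whose spec is complete, and record them in the status field.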

These handlers use the Project, Subnet and SecurityGroup service drivers to get the details needed for service mapping.

In order to prevent Kubernetes objects from being deleted before the OpenStack resources are cleaned up, finalizers are used. Finalizers block deletion of the Service, Endpoints and KuryrLoadBalancer objects until Kuryr deletes the associated OpenStack load balancers. After that, the finalizers are removed, allowing the Kubernetes API to delete the objects.

An LBaaS driver is added to manage service translation to the LBaaSv2-like API. It abstracts all the details of translating a Service to a Load Balancer. LBaaSv2Driver implements this interface by mapping to Neutron LBaaSv2 constructs.
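The finalizer protocol described above can be sketched as below. This is a hedged illustration, assuming a hypothetical finalizer name and cleanup callback; the real finalizer string and cleanup path live in kuryr-kubernetes.

```python
# Hypothetical finalizer name, for illustration only.
KURYR_FINALIZER = "kuryr.openstack.org/service-finalizer"


def on_delete(obj, delete_openstack_lb):
    """Process a deletion event for a finalized Kubernetes object.

    While the Kuryr finalizer is present, Kubernetes keeps the object
    alive. Kuryr first cleans up the associated OpenStack load balancer,
    then removes its finalizer so the API server can finish the delete.
    """
    finalizers = obj["metadata"].get("finalizers", [])
    if KURYR_FINALIZER in finalizers:
        delete_openstack_lb(obj)            # clean up Octavia resources first
        finalizers.remove(KURYR_FINALIZER)  # then unblock deletion
    # True once no finalizers remain, i.e. Kubernetes may delete the object.
    return not finalizers
```

The design choice here is ordering: the OpenStack cleanup always runs before the finalizer is removed, so a crash between the two steps leaves the finalizer in place and the deletion is retried rather than leaking load balancers.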