Configure IPsec for Selected Inter-host Pod-to-pod Traffic using IPsec Policies
Procedure
Create the IPsec policy.
An IPsec policy can be defined in a yaml file. The following is an example of a yaml file that defines IPsec policies to protect the kube-dns service in the kube-system namespace, for traffic on the serving ports TCP 53, TCP 9153, and UDP 53, and to protect the cm-cert-manager service in the cert-manager namespace, for traffic on the serving port TCP 9402:

apiVersion: starlingx.io/v1
kind: IPsecPolicy
metadata:
  labels:
    app.kubernetes.io/name: ipsec-policy-manager-operator
    app.kubernetes.io/managed-by: kustomize
  name: ipsecpolicy-kube-dns-sample
spec:
  policies:
  - name: kube-dns
    servicename: kube-dns
    servicens: kube-system
    serviceports: udp/53,tcp/53,tcp/9153
  - name: cert-manager
    servicename: cm-cert-manager
    servicens: cert-manager
    serviceports: tcp/9402
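The serviceports field packs one or more protocol/port pairs into a comma-separated list of <proto>/<port> entries. As an illustration only (this helper is not part of StarlingX), a minimal Python sketch that validates the format:

```python
import re

def parse_serviceports(spec: str):
    """Parse a serviceports string such as 'udp/53,tcp/53,tcp/9153'
    into a list of (protocol, port) tuples, rejecting malformed entries.
    Illustrative helper only; not a StarlingX API."""
    pairs = []
    for item in spec.split(","):
        m = re.fullmatch(r"(tcp|udp)/(\d{1,5})", item.strip())
        if not m:
            raise ValueError(f"bad serviceports entry: {item!r}")
        proto, port = m.group(1), int(m.group(2))
        if not 0 < port <= 65535:
            raise ValueError(f"port out of range: {port}")
        pairs.append((proto, port))
    return pairs

ports = parse_serviceports("udp/53,tcp/53,tcp/9153")
# → [('udp', 53), ('tcp', 53), ('tcp', 9153)]
```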
Create the IPsec policy by applying it with the following command:
~(keystone_admin)$ kubectl apply -f <policy yaml file>
After the IPsec policy is created, IPsec will be reconfigured to establish tunnels to protect the traffic for the service and ports specified in the policy.
The IPsec policy can be checked by running the following command:
~(keystone_admin)$ kubectl get ipsecpolicies
Check the IPsec tunnels for the protected services.
After the policies are created, platform IPsec will be reconfigured and IPsec SAs in tunnel mode will be established among hosts. Inter-host pod-to-pod traffic will then go through these IPsec tunnels.
The IPsec SAs on controller-0 are as follows:
[sysadmin@controller-0 ~(keystone_admin)]$ sudo swanctl --list-sa
Password:
k8s-node-controller-1: #2740, ESTABLISHED, IKEv2, 69aa6987b09b40eb_i* 9058f37222979fcd_r
  local  'CN=ipsec-controller-0' @ 192.168.206.2[4500]
  remote 'CN=ipsec-controller-1' @ 192.168.206.3[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 34s ago, rekeying in 2705s, reauth in 722s
  udp_kube-dns_egress: #232, reqid 2, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 879s ago, rekeying in 2558s, expires in 3081s
    in  c81f7511,  13558 bytes,    82 packets,    14s ago
    out ca8bfcd7,   7500 bytes,    82 packets,    14s ago
    local  172.16.192.64/26[udp]
    remote 172.16.166.178/32[udp/domain]
  udp_kube-dns_ingress: #233, reqid 3, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 724s ago, rekeying in 2657s, expires in 3236s
    in  c25b22fc,   1800 bytes,    24 packets,    19s ago
    out c10f8157,   3456 bytes,    24 packets,    19s ago
    local  172.16.192.115/32[udp/domain]
    remote 172.16.166.128/26[udp]
k8s-node-worker-0: #2736, ESTABLISHED, IKEv2, 4c65d0b8a7510d28_i b03433939e605003_r*
  local  'CN=ipsec-controller-0' @ 192.168.206.2[4500]
  remote 'CN=ipsec-worker-0' @ 192.168.206.66[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 1217s ago, rekeying in 1379s, reauth in 12434s
  udp_kube-dns_ingress: #231, reqid 11, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1217s ago, rekeying in 2215s, expires in 2743s
    in  c30805c9,      0 bytes,     0 packets
    out ce3a2304,      0 bytes,     0 packets
    local  172.16.192.115/32[udp/domain]
    remote 172.16.43.0/26[udp]
In the above output:

- DNS traffic (that is, DNS queries and responses) between any pods running on controller-0 and UDP port 53 of kube-dns service pods running on controller-1 goes through IPsec SA k8s-node-controller-1.udp_kube-dns_egress.
- DNS traffic between any pods running on controller-1 and UDP port 53 of kube-dns service pods running on controller-0 goes through IPsec SA k8s-node-controller-1.udp_kube-dns_ingress.
- DNS traffic between any pods running on worker-0 and UDP port 53 of kube-dns service pods running on controller-0 goes through IPsec SA k8s-node-worker-0.udp_kube-dns_ingress.
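The child SA names in these listings follow the pattern k8s-node-<peer-host>.<proto>_<service>_<direction>. As a hypothetical helper (not part of StarlingX or strongSwan), this sketch splits such a name into its components:

```python
def parse_sa_name(name: str) -> dict:
    """Split a child-SA name like 'k8s-node-controller-1.udp_kube-dns_egress'
    into peer host, protocol, service, and direction.
    Illustrative only; relies on Kubernetes service names never
    containing underscores, so '_' safely delimits the three fields."""
    conn, child = name.split(".", 1)
    peer = conn.removeprefix("k8s-node-")
    proto, service, direction = child.split("_")
    return {"peer": peer, "proto": proto,
            "service": service, "direction": direction}

parse_sa_name("k8s-node-controller-1.udp_kube-dns_egress")
# → {'peer': 'controller-1', 'proto': 'udp',
#    'service': 'kube-dns', 'direction': 'egress'}
```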
Similarly, the IPsec SAs on controller-1 are as follows:
sysadmin@controller-1:~$ sudo swanctl --list-sa
Password:
k8s-node-controller-0: #174, ESTABLISHED, IKEv2, 659eb03ee57aa0f2_i bfe4fda94539ae8d_r*
  local  'CN=ipsec-controller-1' @ 192.168.206.3[4500]
  remote 'CN=ipsec-controller-0' @ 192.168.206.2[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 1323s ago, rekeying in 2181s, reauth in 12993s
  udp_kube-dns_ingress: #205, reqid 3, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1323s ago, rekeying in 2016s, expires in 2637s
    in  cb0b1818,  11976 bytes,   128 packets,    23s ago
    out ca5bd520,  21544 bytes,   128 packets,    23s ago
    local  172.16.166.178/32[udp/domain]
    remote 172.16.192.64/26[udp]
  udp_kube-dns_egress: #206, reqid 2, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1298s ago, rekeying in 1967s, expires in 2662s
    in  c11916c1,   6912 bytes,    48 packets,    31s ago
    out c1f452cf,   3600 bytes,    48 packets,    31s ago
    local  172.16.166.128/26[udp]
    remote 172.16.192.115/32[udp/domain]
k8s-node-worker-0: #173, ESTABLISHED, IKEv2, 7d2e6d327cb71be2_i 4b0aefe2ed367e48_r*
  local  'CN=ipsec-controller-1' @ 192.168.206.3[4500]
  remote 'CN=ipsec-worker-0' @ 192.168.206.66[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 1393s ago, rekeying in 1557s, reauth in 11731s
  udp_kube-dns_ingress: #204, reqid 10, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1393s ago, rekeying in 2024s, expires in 2567s
    in  c171851d,      0 bytes,     0 packets
    out c04ec9b1,      0 bytes,     0 packets
    local  172.16.166.178/32[udp/domain]
    remote 172.16.43.0/26[udp]
And the IPsec SAs on worker-0 are as follows:
sysadmin@worker-0:~$ sudo swanctl --list-sa
Password:
k8s-node-controller-0: #143, ESTABLISHED, IKEv2, b20390e5aa9880c8_i* 43a431724a5da2b0_r
  local  'CN=ipsec-worker-0' @ 192.168.206.66[4500]
  remote 'CN=ipsec-controller-0' @ 192.168.206.2[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 1470s ago, rekeying in 854s, reauth in 10706s
  udp_kube-dns_egress: #128, reqid 3, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1471s ago, rekeying in 1954s, expires in 2490s
    in  c2db53f4,      0 bytes,     0 packets
    out cf6f85e6,   6148 bytes,   100 packets,    92s ago
    local  172.16.43.0/26[udp]
    remote 172.16.192.115/32[udp/domain]
k8s-node-controller-1: #142, ESTABLISHED, IKEv2, 7d2e6d327cb71be2_i* 4b0aefe2ed367e48_r
  local  'CN=ipsec-worker-0' @ 192.168.206.66[4500]
  remote 'CN=ipsec-controller-1' @ 192.168.206.3[4500]
  AES_CBC-128/HMAC_SHA2_256_128/PRF_AES128_XCBC/MODP_3072
  established 1574s ago, rekeying in 1451s, reauth in 10110s
  udp_kube-dns_egress: #127, reqid 8, INSTALLED, TUNNEL, ESP:AES_GCM_16-128
    installed 1574s ago, rekeying in 1772s, expires in 2386s
    in  c04ec9b1,      0 bytes,     0 packets
    out c171851d,   8957 bytes,   143 packets,    31s ago
    local  172.16.43.0/26[udp]
    remote 172.16.166.178/32[udp/domain]
Note

The IPsec child SAs are configured with start_action = trap. This means that the IPsec tunnels are only established when there is matching traffic. In the above example, policies are defined and created for kube-dns on UDP port 53, TCP port 53, and TCP port 9153, and for cert-manager on TCP port 9402. However, because only traffic matching the kube-dns policy on UDP port 53 currently exists, IPsec tunnels are established only for kube-dns on UDP port 53.

The IPsec policies can be created before or after a service is deployed. The policies can be created as part of the service deployment.
Update the IPsec policy.
The existing IPsec policies can be updated by running the following command:
~(keystone_admin)$ kubectl edit ipsecpolicies <ipsec policy>
The existing policy can also be changed by updating the original yaml file, then re-applying it.
The following updated yaml file removes UDP port 53 from the policy. When applied, only TCP traffic on ports 53 and 9153 is protected by IPsec.
apiVersion: starlingx.io/v1
kind: IPsecPolicy
metadata:
  labels:
    app.kubernetes.io/name: ipsec-policy-manager-operator
    app.kubernetes.io/managed-by: kustomize
  name: ipsecpolicy-kube-dns-sample
spec:
  policies:
  - name: kube-dns
    servicename: kube-dns
    servicens: kube-system
    serviceports: tcp/53,tcp/9153
  - name: cert-manager
    servicename: cm-cert-manager
    servicens: cert-manager
    serviceports: tcp/9402
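To see concretely which SAs an update tears down, the old and new serviceports lists for a policy can be diffed. This is an illustrative sketch only, using the port strings from the two example files above:

```python
# Illustrative only: diff the kube-dns serviceports before and after the update.
old_ports = set("udp/53,tcp/53,tcp/9153".split(","))
new_ports = set("tcp/53,tcp/9153".split(","))

removed = old_ports - new_ports  # SAs for these ports are removed on apply
added = new_ports - old_ports    # new trap policies are installed for these

print(sorted(removed))  # ['udp/53']
print(sorted(added))    # []
```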
Check the IPsec tunnels after the IPsec policies of services are updated.
After the policies are updated, platform IPsec will be reconfigured and the IPsec SAs in tunnel mode for the removed services’ ports will be removed. Inter-host pod-to-pod traffic on the removed ports will no longer go through IPsec.
The IPsec SAs and tunnels can be checked by running the following command:
~(keystone_admin)$ sudo swanctl --list-sa
Remove the IPsec policy.
Existing IPsec policies can be removed by running the following command:
~(keystone_admin)$ kubectl delete ipsecpolicies <ipsec policy>
After the IPsec policy is removed, the service’s traffic on the specified ports is no longer protected by IPsec.