BGP VPN Interconnection Service Overview

BGP-based IP VPNs are widely used in the industry, especially by enterprises. This project aims to support interconnection between L3VPNs and Neutron resources, i.e. Networks, Routers and Ports.

A typical use case is the following: a tenant already has a BGP IP VPN (a set of external sites) set up outside the datacenter, and wants to be able to trigger the establishment of connectivity between VMs and these VPN sites.

Another similar need is when E-VPN is used to provide an Ethernet interconnect between multiple sites.

This service plugin exposes an API to interconnect OpenStack Neutron ports, typically VMs, via the Networks and Routers they are connected to, with an L3VPN network as defined by [RFC4364] (BGP/MPLS IP Virtual Private Networks). The framework is generic enough to also support E-VPN [RFC7432], which inherits the same protocol architecture as BGP/MPLS IP VPNs.
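As a rough illustration, here is a sketch of what a BGPVPN resource carries. The field names follow the BGPVPN API; the values are invented examples, and the dict is only a model, not actual plugin code.

```python
# Illustrative shape of a BGPVPN resource (values are made-up examples;
# field names follow the BGPVPN API).
bgpvpn_example = {
    "name": "vpn-red",
    "type": "l3",                  # "l3" for BGP/MPLS IP VPN, "l2" for E-VPN
    "route_targets": ["64512:1"],  # admin-only; used for both import and export
    "networks": [],                # filled in by tenant associations
    "routers": [],                 # likewise
}

assert bgpvpn_example["type"] in ("l3", "l2")
```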

Reminder on BGP VPNs and Route Targets

BGP-based VPNs allow a network operator to offer a VPN service to a VPN customer, delivering isolated connectivity between multiple sites of this customer.

Unlike, for instance, IPsec or SSL-based VPNs, these VPNs are typically not built over the Internet, are most often not encrypted, and their creation is not in the hands of the end user.

Here is a reminder of how connectivity is defined between the sites (VRFs) of a VPN.

In BGP-based VPNs, a set of identifiers called Route Targets are associated with a VPN and, in the typical case, identify it; they can also be used to build other VPN topologies such as hub-and-spoke.

Each VRF (Virtual Routing and Forwarding) in a PE (Provider Edge router) imports/exports routes from/to Route Targets. If a VRF imports from a Route Target, BGP IP VPN routes tagged with this Route Target will be imported into this VRF. If a VRF exports to a Route Target, the routes in the VRF will be associated with this Route Target and announced via BGP.
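The import/export semantics above can be sketched in a few lines of Python. This is a toy model of the mechanism, not code from any real BGP implementation; the class and attribute names are illustrative.

```python
# Toy model of BGP VPN route-target import/export semantics.
# Names (Route, Vrf) are illustrative only.

class Route:
    def __init__(self, prefix, route_targets):
        self.prefix = prefix
        self.route_targets = set(route_targets)

class Vrf:
    def __init__(self, name, import_rts, export_rts):
        self.name = name
        self.import_rts = set(import_rts)
        self.export_rts = set(export_rts)
        self.rib = []  # routes known to this VRF

    def export_route(self, prefix):
        # Routes announced from this VRF are tagged with its export targets.
        return Route(prefix, self.export_rts)

    def maybe_import(self, route):
        # A route is imported if it carries at least one of the
        # VRF's import targets.
        if route.route_targets & self.import_rts:
            self.rib.append(route)
            return True
        return False

# Two VRFs sharing Route Target 64512:1 belong to the same VPN and
# exchange routes; a VRF with a different RT does not see them.
site_a = Vrf("site-a", import_rts={"64512:1"}, export_rts={"64512:1"})
site_b = Vrf("site-b", import_rts={"64512:1"}, export_rts={"64512:1"})
other = Vrf("other", import_rts={"64512:99"}, export_rts={"64512:99"})

advert = site_a.export_route("10.1.0.0/24")
assert site_b.maybe_import(advert) is True   # same VPN: imported
assert other.maybe_import(advert) is False   # different RT: ignored
```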

Mapping between PEs/CEs and Neutron constructs

As outlined in the overview, how PEs, CEs (Customer Edge routers) and VRFs map to Neutron constructs will depend on the backend driver used for this service plugin.

For instance, with the current bagpipe driver, the PE and VRF functions are implemented on compute nodes and the VMs are acting as CEs. This PE function will BGP-peer with edge IP/MPLS routers, BGP Route Reflectors or other PEs.

Bagpipe BGP, which implements this function, could also be instantiated on network nodes at the l3agent level, with a BGP speaker on each l3agent; router namespaces could then be considered as CEs.

Other backends might want to consider the router as a CE and drive an external PE to peer with the service provider PE, based on information received with this API. It’s up to the backend to manage the connection between the CE and the cloud provider PE.

Another typical option is where the driver delegates the work to an SDN controller which drives a BGP implementation advertising/consuming the relevant BGP routes and remotely drives the vswitches to setup the datapath accordingly.

API and Workflows

BGP VPNs are deployed and managed by the operator, in particular to manage the Route Target identifiers that control the isolation between the different VPNs. Because of this, BGP VPN parameters cannot be chosen by tenants, but only by the admin. In addition, network operators may prefer not to expose actual Route Target values to users.

The operation left in the hands of a tenant is the association of a BGPVPN resource that it owns with its Neutron Networks or Routers.

So there are two workflows: one for the admin and one for the tenant.

  • Admin/Operator Workflow: Creation of a BGPVPN
    • the cloud/network admin creates a BGPVPN for a tenant based on contract and OSS information about the VPN for this tenant
    • at this stage, the list of associated Networks and Routers can be empty
  • Tenant Workflow: Association of a BGPVPN to Networks and/or Routers, on-demand
    • the tenant lists the BGPVPNs that it can use
    • the tenant associates a BGPVPN with one or more Networks or Routers.
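The two workflows above can be sketched as a minimal in-memory service. This is a simulation of the division of responsibilities (route targets set by the admin, associations made by the owning tenant), not the actual plugin code; all class and method names are hypothetical.

```python
# Minimal in-memory sketch of the admin and tenant workflows.
# BgpvpnService and its methods are illustrative names only.
import uuid

class BgpvpnService:
    def __init__(self):
        self.bgpvpns = {}

    # Admin workflow: only the operator chooses the Route Targets.
    def create_bgpvpn(self, tenant_id, route_targets, name=""):
        vpn_id = str(uuid.uuid4())
        self.bgpvpns[vpn_id] = {
            "id": vpn_id,
            "tenant_id": tenant_id,     # tenant designated as owner
            "name": name,
            "route_targets": route_targets,  # not exposed to the tenant
            "networks": [],
            "routers": [],
        }
        return vpn_id

    # Tenant workflow: list owned BGPVPNs, then associate resources.
    def list_bgpvpns(self, tenant_id):
        return [v["id"] for v in self.bgpvpns.values()
                if v["tenant_id"] == tenant_id]

    def associate_network(self, tenant_id, vpn_id, network_id):
        vpn = self.bgpvpns[vpn_id]
        if vpn["tenant_id"] != tenant_id:
            raise PermissionError("not the owner of this BGPVPN")
        vpn["networks"].append(network_id)

# Admin creates BGPVPN Y for tenant X; tenant X then associates Network Z.
svc = BgpvpnService()
vpn_y = svc.create_bgpvpn("tenant-x", ["64512:1"], name="vpn-y")
assert svc.list_bgpvpns("tenant-x") == [vpn_y]
svc.associate_network("tenant-x", vpn_y, "network-z")
assert svc.bgpvpns[vpn_y]["networks"] == ["network-z"]
```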

Sequence diagram summarizing these two workflows:

[Sequence diagram: the OpenStack admin POSTs to the Neutron BGPVPN API to create a BGPVPN resource Y corresponding to a BGP VPN (driver/backend exchanges vary between backends), then updates it to set tenant X as its owner. Tenant X learns via GET that it owns BGPVPN Y, then associates BGPVPN Y with Network Z; the backend is now ready to interconnect them. MP-BGP VPNv4 routes toward Network Z are exported to BGP VPN Y, MP-BGP VPNv4 routes from BGP VPN Y prefixes are imported, and the forwarding plane is set up (e.g. MPLS/GRE). BGP peerings and sessions live in parallel to the BGPVPN service plugin.]

Component architecture overview

This diagram gives an overview of the architecture:

[Diagram: the admin and tenant use the BGPVPN API of the BGPVPN service plugin (backed by a DB). A driver, which can be e.g. an 'SDN' solution, drives a backend controlling vswitches and/or routers and MP-BGP speakers; these peer over MP-BGP with external BGP peers (MPLS routers), with MPLS or another encapsulation in the datapath.]

This second diagram depicts how the bagpipe reference driver implements its backend:

[Diagram: the BGPVPN API and Neutron DB feed the bagpipe driver in the BGPVPN service plugin, which uses RPCs toward each compute node. On each compute node, the OpenVSwitch agent with its BGPVPN extension drives OVS br-int/br-tun and OVS br-mpls together with bagpipe-bgp; bagpipe-bgp peers over MP-BGP with external BGP peers (MPLS routers), carrying packets over MPLS or another encapsulation.]


[RFC4364] BGP/MPLS IP Virtual Private Networks (IP VPNs)
[RFC7432] BGP MPLS-Based Ethernet VPN (Ethernet VPNs, a.k.a. E-VPN)
Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.