networking-bagpipe

Driver and agent code to use the bagpipe-bgp lightweight implementation of BGP-based VPNs as a backend for Neutron, either for the BGPVPN Interconnection service or for Neutron ML2.

Overview

This package includes:

  • a Neutron ML2 mechanism driver (‘bagpipe’)

  • compute node agent code (for the linuxbridge agent used with the ‘bagpipe’ ML2 driver, and as an extension of the OVS agent for BGPVPN Interconnection)

BGP-based VPNs

BGP-based VPNs rely on extensions to the BGP routing protocol and typically MPLS or VXLAN encapsulation to provide multi-site isolated networks. The specification for BGP/MPLS IPVPNs is RFC4364 and the specification for E-VPN is RFC7432.

Neutron ML2 mechanism driver

The bagpipe mechanism driver allocates a BGP VPN identifier (called a “route target”) for each Neutron network, and sets up an E-VPN instance for each network.

When a Neutron port goes up, the agent on the corresponding compute node provides this VPN identifier to the locally running bagpipe-bgp, to trigger the attachment of the VM tap interface to the E-VPN instance.

Once E-VPN routes are exchanged, bagpipe-bgp sets up VXLAN forwarding state in the linuxbridge.
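
For illustration only, the agent drives the locally running bagpipe-bgp through its local REST API; a hypothetical sketch of such an E-VPN attach request is shown below (port, endpoint and JSON field names are assumptions to be checked against the bagpipe-bgp documentation):

    # hypothetical sketch of the attach request the agent sends to the local
    # bagpipe-bgp REST API (endpoint, port and fields are assumptions)
    curl -s -X POST http://127.0.0.1:8082/attach_localport \
         -H 'Content-Type: application/json' \
         -d '{"vpn_type": "evpn",
              "vpn_instance_id": "NETWORK_UUID",
              "import_rt": ["64512:100"],
              "export_rt": ["64512:100"],
              "mac_address": "fa:16:3e:01:02:03",
              "ip_address": "10.0.0.3/24",
              "gateway_ip": "10.0.0.1",
              "local_port": {"linuxif": "tap01020304-ab"}}'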

Neutron BGPVPN Interconnection

The compute node agent code extends the OVS agent of the Neutron ML2 openvswitch driver.

It allows the establishment of interconnections between Neutron networks and BGP/MPLS IP VPNs, using the BGPVPN Interconnection service plugin (networking-bgpvpn) with its bagpipe driver.
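
As an illustration, an interconnection is then typically set up by creating a BGPVPN resource carrying the VPN's route targets and associating a Neutron network with it; the commands below are only a hypothetical sketch, to be checked against the BGPVPN documentation:

    # create a BGPVPN resource referring to the VPN's route target (admin operation)
    neutron bgpvpn-create --name corporate-vpn --route-targets 64512:200

    # interconnect a Neutron network with that VPN
    neutron bgpvpn-net-assoc-create corporate-vpn --network private-net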

How to use?

How to use the ML2 driver in devstack?

  • install devstack (either a stable branch, e.g. stable/kilo, or master)

  • enable the devstack plugin by adding this to local.conf:

    • to use branch stable/X (e.g. stable/mitaka):

      enable_plugin networking-bagpipe https://git.openstack.org/openstack/networking-bagpipe.git stable/X
      
    • to use the development branch:

      enable_plugin networking-bagpipe https://git.openstack.org/openstack/networking-bagpipe.git master
      
  • use the following options in devstack local.conf:

    Q_PLUGIN=ml2
    Q_AGENT=bagpipe-linuxbridge
    Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,vxlan,route_target
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=bagpipe
    
    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [ml2]
    tenant_network_types=route_target
    
    [ml2_type_route_target]
    # E-VPN route target ranges
    rt_nn_ranges = 100:119,500:519
    
    [ml2_bagpipe]
    # Data Center AS number
    as_number = 64512
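    # with these values, the driver would allocate, for instance, a route
    # target like 64512:100 (as_number:nn, with nn taken from rt_nn_ranges above)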
    
  • configure bagpipe-bgp on each compute node

    • (note that with devstack, bagpipe-bgp is installed automatically as a git submodule of networking-bagpipe)

    • the following is needed in local.conf to configure bagpipe-bgp and start it in devstack:

      BAGPIPE_DATAPLANE_DRIVER_EVPN=linux_vxlan.LinuxVXLANDataplaneDriver
      
      enable_service b-bgp
      
    • you also need each bagpipe-bgp to peer with a BGP Route Reflector:

      • in local.conf:

        # IP of your route reflector or BGP router, or fakeRR:
        BAGPIPE_BGP_PEERS=1.2.3.4
        
      • for two compute nodes, you can use the FakeRR provided in bagpipe-bgp

      • for more than two compute nodes, you can use GoBGP (sample configuration) or a commercial E-VPN implementation (e.g. vendors participating in EANTC interop testing on E-VPN)
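
        As a very rough sketch, a gobgpd.conf acting as an E-VPN route reflector could look like the following (addresses, AS number and option names are assumptions to be checked against the GoBGP documentation and the sample configuration mentioned above):

          [global.config]
            as = 64512
            router-id = "192.0.2.1"

          [[neighbors]]
            [neighbors.config]
              neighbor-address = "192.0.2.10"   # a compute node running bagpipe-bgp
              peer-as = 64512
            [neighbors.route-reflector.config]
              route-reflector-client = true
              route-reflector-cluster-id = "192.0.2.1"
            [[neighbors.afi-safis]]
              [neighbors.afi-safis.config]
                afi-safi-name = "l2vpn-evpn"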

How to use the networking-bgpvpn driver in devstack?

Information on how to use the bagpipe driver for networking-bgpvpn is provided in the BGPVPN documentation.
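
As a minimal sketch, assuming the devstack plugin and the driver selection variable documented by networking-bgpvpn, this typically amounts to something like the following in local.conf:

    enable_plugin networking-bgpvpn https://git.openstack.org/openstack/networking-bgpvpn.git master

    # select the bagpipe driver; the variable name and driver class path below
    # are assumptions, to be verified against the networking-bgpvpn documentation
    NETWORKING_BGPVPN_DRIVER="BGPVPN:BaGPipe:networking_bgpvpn.neutron.services.service_drivers.bagpipe.bagpipe.BaGPipeBGPVPNDriver:default"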