VPP L2 Switching
Introduction
VPP L2 switching implements the typical packet forwarding function based on 48-bit destination MAC addresses. Packet forwarding information is stored in the l2fib table. The following L2 features are supported:
Forwarding
MAC Learning
Flooding
The l2fib table starts out empty. Static table entries can be added manually. Additionally, the VPP switch can dynamically learn table entries while it switches frames.
When the VPP switch receives a frame, it first records the source MAC address and input interface in the l2fib. This is how VPP performs MAC learning. Next, VPP determines which interface(s) to transmit the frame on by looking up the frame's destination MAC address in the l2fib. If no entry matches the destination MAC address, VPP floods the frame out of every other interface in the same bridge domain.
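For example, a static entry can be added and the table inspected from the VPP CLI. These are the same commands used in the walkthrough later in this guide; the bridge-domain ID 10 and interface name Ethernet1 are placeholders:

vppctl l2fib add 00:00:0a:81:00:02 10 Ethernet1 static
vppctl show l2fib all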
This guide explains in detail how to use VPP-based L2 switching with either memif or NIC interfaces. Other interfaces supported by VPP (e.g., veth) should follow a similar setup, but are not covered in this guide. Users can execute the bundled scripts in the dataplane-stack repo to quickly establish the L2 switching cases, or run the use cases manually by following the detailed step-by-step guidelines.
Memif Connection
Shared memory packet interface (memif) is a software-emulated Ethernet interface that provides high-performance packet transmit and receive between VPP and a user application, or between multiple VPP instances.
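As a minimal sketch, a native memif pair is created by pointing two VPP instances at the same socket file, one in master role and one in slave (client) role. The socket path and CLI socket names below are illustrative; the concrete commands for this use case appear in the manual walkthrough:

sudo ./vppctl -s <server_cli_sock> create memif socket id 1 filename /tmp/memif_example
sudo ./vppctl -s <server_cli_sock> create int memif id 0 socket-id 1 master
sudo ./vppctl -s <client_cli_sock> create memif socket id 1 filename /tmp/memif_example
sudo ./vppctl -s <client_cli_sock> create int memif id 0 socket-id 1 slave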
In this setup, two pairs of memif interfaces are configured to connect the VPP L2 switch instance and a VPP-based traffic generator. On the VPP switch side, DPDK zero-copy memif interfaces are used for testing the VPP + DPDK stack. On the VPP traffic generator side, VPP's native memif interfaces are used for performance reasons.
Note
This setup requires at least three isolated cores for VPP workers. Cores 2 - 4 are assumed to be isolated in this guide.
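One way to verify core isolation, assuming it is configured via the isolcpus kernel parameter, is to read the kernel's isolated-CPU list:

cat /sys/devices/system/cpu/isolated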
Automated Execution
Quickly set up the VPP switch and traffic generator and test the L2 switching use case:
cd $NW_DS_WORKSPACE/dataplane-stack
./usecase/l2_switching/run_vpp_tg.sh -c 1,2,3
./usecase/l2_switching/run_vpp_sw.sh -m -c 1,4
Note
Use -h to check the scripts' supported options. The VPP traffic generator instance has to be started first since it plays the memif server role.
After several seconds, examine the VPP switch's memif interface rx/tx counters and packet processing runtime:
./usecase/l2_switching/traffic_monitor.sh
Below is key output:
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
Ethernet0 1 up 9000/0/0/0 rx packets 14321664
rx bytes 916586496
Ethernet1 2 up 9000/0/0/0 tx packets 14321920
tx bytes 916602880
---------------
Thread 1 vpp_wk_0 (lcore 4)
Time 1.1, 10 sec internal node vector rate 256.00 loops/sec 47268.46
vector rates in 1.2394e7, out 1.2394e7, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 54453 13939968 0 5.40e-2 256.00
Ethernet1-tx active 54453 13939968 0 7.03e-1 256.00
dpdk-input polling 54453 13939968 0 4.28e-1 256.00
ethernet-input active 54453 13939968 0 2.33e-1 256.00
l2-fwd active 54453 13939968 0 1.66e-1 256.00
l2-input active 54453 13939968 0 1.54e-1 256.00
l2-learn active 54453 13939968 0 2.12e-1 256.00
l2-output active 54453 13939968 0 6.29e-2 256.00
unix-epoll-input polling 53 0 0 2.14e1 0.00
Note
VPP Ethernet0 is the aliased name of the input memif interface in the example.
VPP Ethernet1 is the aliased name of the output memif interface in the example.
vector rates provide insight into the packet processing throughput of a specific node or function in VPP; in the output above, "vector rates in 1.2394e7" means the worker is receiving roughly 12.4 million packets per second.
Vectors/Call measures packet processing efficiency in VPP as the average number of packets handled per function call for a specific node; 256.00 is VPP's maximum frame size, indicating the worker is fully loaded.
Stop VPP:
./usecase/l2_switching/stop.sh
Manual Execution
Users can also set up the VPP switch and traffic generator and test the L2 switching case step by step.
VPP Traffic Generator Setup
Declare variables to hold the runtime directory and CLI socket for VPP traffic generator:
export runtime_dir_tg="/run/vpp/tg"
export sockfile_tg="${runtime_dir_tg}/cli_tg.sock"
Run a VPP instance as software traffic generator on cores 1-3:
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix {runtime-dir ${runtime_dir_tg} cli-listen ${sockfile_tg}} cpu {main-core 1 corelist-workers 2-3} plugins {plugin dpdk_plugin.so {disable}}
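To confirm the instance launched and its CLI socket is reachable, query it with a standard VPP CLI command such as show version:

sudo ./vppctl -s ${sockfile_tg} show version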
Create VPP memif interfaces and a traffic flow with destination MAC address 00:00:0a:81:00:02:
sudo ./vppctl -s ${sockfile_tg} create memif socket id 1 filename /tmp/memif_dut_1
sudo ./vppctl -s ${sockfile_tg} create int memif id 1 socket-id 1 rx-queues 1 tx-queues 1 master
sudo ./vppctl -s ${sockfile_tg} create memif socket id 2 filename /tmp/memif_dut_2
sudo ./vppctl -s ${sockfile_tg} create int memif id 1 socket-id 2 rx-queues 1 tx-queues 1 master
sudo ./vppctl -s ${sockfile_tg} set interface mac address memif1/1 02:fe:a4:26:ca:ac
sudo ./vppctl -s ${sockfile_tg} set interface mac address memif2/1 02:fe:51:75:42:ed
sudo ./vppctl -s ${sockfile_tg} set int state memif1/1 up
sudo ./vppctl -s ${sockfile_tg} set int state memif2/1 up
sudo ./vppctl -s ${sockfile_tg} packet-generator new "{ \
name tg0 \
limit -1 \
size 60-60 \
worker 0 \
node memif1/1-output \
data { \
IP4: 00:00:0a:81:00:01 -> 00:00:0a:81:00:02 \
UDP: 192.81.0.1 -> 192.81.0.2 \
UDP: 1234 -> 2345 \
incrementing 8 \
} \
}"
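Optionally, verify that the stream was created; show packet-generator is a standard VPP CLI command that lists the configured streams:

sudo ./vppctl -s ${sockfile_tg} show packet-generator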
VPP Switch Setup
Declare variables to hold the runtime directory and CLI socket for VPP switch:
export runtime_dir_sw="/run/vpp/sw"
export sockfile_sw="${runtime_dir_sw}/cli_sw.sock"
Run another VPP instance as an L2 switch on cores 1 and 4, using DPDK zero-copy memif interfaces:
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix {runtime-dir ${runtime_dir_sw} cli-listen ${sockfile_sw}} cpu {main-core 1 corelist-workers 4} dpdk { no-pci single-file-segments dev default {num-tx-queues 1 num-rx-queues 1 } vdev net_memif0,role=client,id=1,socket-abstract=no,socket=/tmp/memif_dut_1,mac=02:fe:a4:26:ca:f2,zero-copy=yes vdev net_memif1,role=client,id=1,socket-abstract=no,socket=/tmp/memif_dut_2,mac=02:fe:51:75:42:42,zero-copy=yes }
For more VPP configuration parameters, refer to VPP configuration reference.
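To confirm that the two DPDK memif interfaces were created with their aliased names Ethernet0 and Ethernet1, list the interfaces (show interface is the same command used later in this guide):

sudo ./vppctl -s ${sockfile_sw} show interface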
Configure DPDK memif interfaces and associate interfaces with a bridge domain:
sudo ./vppctl -s ${sockfile_sw} set int state Ethernet0 up
sudo ./vppctl -s ${sockfile_sw} set int state Ethernet1 up
sudo ./vppctl -s ${sockfile_sw} set interface l2 bridge Ethernet0 10
sudo ./vppctl -s ${sockfile_sw} set interface l2 bridge Ethernet1 10
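To verify that both interfaces joined bridge domain 10, inspect the bridge domain; show bridge-domain is a standard VPP CLI command:

sudo ./vppctl -s ${sockfile_sw} show bridge-domain 10 detail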
Add a static entry with MAC address 00:00:0a:81:00:02 and interface Ethernet1 to the l2fib table:
sudo ./vppctl -s ${sockfile_sw} l2fib add 00:00:0a:81:00:02 10 Ethernet1 static
To display the entries of the l2fib table, use the command sudo ./vppctl -s ${sockfile_sw} show l2fib all.
Here is a sample output for the static l2fib entry added previously:
Mac-Address BD-Idx If-Idx BSN-ISN Age(min) static filter bvi Interface-Name
00:00:0a:81:00:02 1 2 0/0 no * - - Ethernet1
L2FIB total/learned entries: 1/0 Last scan time: 0.0000e0sec Learn limit: 16777216
For more detailed usage of the VPP commands used above, and to explore more of VPP's available commands, please review the VPP CLI reference.
Test
Start the VPP traffic generator instance sending traffic to the VPP switch instance:
sudo ./vppctl -s ${sockfile_tg} packet-generator enable-stream tg0
The VPP switch instance will then forward those packets out on the output interface.
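The stream can later be paused without restarting VPP; packet-generator disable-stream is the counterpart of enable-stream:

sudo ./vppctl -s ${sockfile_tg} packet-generator disable-stream tg0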
To display the VPP switch interface rx/tx counters, first clear the interface counters with sudo ./vppctl -s ${sockfile_sw} clear interfaces.
After several seconds, run the command sudo ./vppctl -s ${sockfile_sw} show interface.
Here is a sample output:
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
Ethernet0 1 up 9000/0/0/0 rx packets 50938112
rx bytes 3260039168
Ethernet1 2 up 9000/0/0/0 tx packets 50938112
tx bytes 3260039168
To display the packet processing runtime, first clear the runtime statistics with sudo ./vppctl -s ${sockfile_sw} clear runtime.
After several seconds, run the command sudo ./vppctl -s ${sockfile_sw} show runtime.
Below is key output:
---------------
Thread 1 vpp_wk_0 (lcore 4)
Time 2.4, 10 sec internal node vector rate 256.00 loops/sec 46835.29
vector rates in 1.2221e7, out 1.2221e7, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 115248 29503488 0 5.49e-2 256.00
Ethernet1-tx active 115248 29503488 0 7.09e-1 256.00
dpdk-input polling 115248 29503488 0 4.32e-1 256.00
ethernet-input active 115248 29503488 0 2.25e-1 256.00
l2-fwd active 115248 29503488 0 1.66e-1 256.00
l2-input active 115248 29503488 0 1.76e-1 256.00
l2-learn active 115248 29503488 0 2.14e-1 256.00
l2-output active 115248 29503488 0 6.41e-2 256.00
unix-epoll-input polling 112 0 0 2.47e1 0.00
Stop
Kill VPP instances:
sudo pkill -9 vpp
Ethernet Connection
In this L2 switching scenario, the DUT and the traffic generator run on separate hardware platforms connected by Ethernet adapters and cables. The traffic generator can be software based, e.g., VPP/TRex/TrafficGen running on regular servers, or a hardware platform, e.g., IXIA/Spirent SmartBits.
Find out which DUT interfaces are connected to the traffic generator. Running sudo ethtool --identify <interface_name> will typically blink a light on the NIC to help identify the physical port associated with the interface.
Get interface names and PCIe addresses from the lshw command:
sudo lshw -c net -businfo
The output will look similar to:
Bus info Device Class Description
====================================================
pci@0000:07:00.0 eth0 network RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
pci@0001:01:00.0 enP1p1s0f0 network MT27800 Family [ConnectX-5]
pci@0001:01:00.1 enP1p1s0f1 network MT27800 Family [ConnectX-5]
Of the two interfaces connected to the traffic generator, arbitrarily choose one to be the input interface and the other to be the output interface. In this setup example, enP1p1s0f0 at PCIe address 0001:01:00.0 is the input interface, and enP1p1s0f1 at PCIe address 0001:01:00.1 is the output interface.
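Before handing the ports over to VPP, it can also help to confirm that link is up on each one; ethtool is assumed to be installed, and the interface name is the example's:

sudo ethtool enP1p1s0f0 | grep "Link detected"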
Automated Execution
Quickly set up VPP switch with input/output interface PCIe addresses on specified cores:
cd $NW_DS_WORKSPACE/dataplane-stack
./usecase/l2_switching/run_vpp_sw.sh -p 0001:01:00.0,0001:01:00.1 -c 1,2
Note
Replace the sample addresses in the above command with the desired PCIe addresses on the DUT.
Configure the traffic generator to send packets to the VPP input interface with a destination MAC address of 00:00:0a:81:00:02; the VPP switch will then forward those packets out on the VPP output interface.
After several seconds, examine the VPP switch's DPDK interface rx/tx counters and packet processing runtime:
./usecase/l2_switching/traffic_monitor.sh
Below is key output:
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
local0 0 down 0/0/0/0
eth0 1 up 9000/0/0/0 rx packets 25261056
rx bytes 37891584000
eth1 2 up 9000/0/0/0 tx packets 25261056
tx bytes 37891584000
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 32.4, 10 sec internal node vector rate 15.94 loops/sec 1170803.77
vector rates in 5.7792e6, out 5.7792e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
dpdk-input polling 40083994 187040880 0 1.63e0 4.67
eth1-output active 11711018 187040880 0 1.28e-1 15.97
eth1-tx active 11711018 187040880 0 5.01e-1 15.97
ethernet-input active 11711018 187040880 0 6.72e-1 15.97
l2-fwd active 11711018 187040880 0 2.99e-1 15.97
l2-input active 11711018 187040880 0 2.67e-1 15.97
l2-learn active 11711018 187040880 0 4.63e-1 15.97
l2-output active 11711018 187040880 0 1.79e-1 15.97
unix-epoll-input polling 39107 0 0 7.89e0 0.00
Note
VPP eth0 is the aliased name of the input interface, which is at PCIe address 0001:01:00.0 in the example.
VPP eth1 is the aliased name of the output interface, which is at PCIe address 0001:01:00.1 in the example.
Stop VPP switch:
./usecase/l2_switching/stop.sh
Manual Execution
Users can also set up the VPP switch and test the L2 switching case step by step.
VPP Switch Setup
Declare a variable to hold the CLI socket for VPP switch:
export sockfile_sw="/run/vpp/cli_sw.sock"
Run a VPP instance as an L2 switch on cores 1 and 2 with the input/output interfaces' PCIe addresses:
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix {cli-listen ${sockfile_sw}} cpu {main-core 1 corelist-workers 2} dpdk {dev 0001:01:00.0 {name eth0} dev 0001:01:00.1 {name eth1}}
Note
Replace the sample addresses in the above command with the desired PCIe addresses on the DUT.
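To confirm that VPP has claimed the two NICs at the given PCIe addresses, list the hardware interfaces; show hardware-interfaces is a standard VPP CLI command that includes PCI details:

sudo ./vppctl -s ${sockfile_sw} show hardware-interfaces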
Bring two Ethernet interfaces in VPP switch up and associate them with a bridge domain:
sudo ./vppctl -s ${sockfile_sw} set interface state eth0 up
sudo ./vppctl -s ${sockfile_sw} set interface state eth1 up
sudo ./vppctl -s ${sockfile_sw} set interface l2 bridge eth0 10
sudo ./vppctl -s ${sockfile_sw} set interface l2 bridge eth1 10
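To check that both interfaces are now in L2 bridged mode rather than the default L3 mode, show mode is a standard VPP CLI command:

sudo ./vppctl -s ${sockfile_sw} show mode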
Add a static entry with MAC address 00:00:0a:81:00:02 and interface eth1 to the l2fib table:
sudo ./vppctl -s ${sockfile_sw} l2fib add 00:00:0a:81:00:02 10 eth1 static
To display the entries of the l2fib table, use the command sudo ./vppctl -s ${sockfile_sw} show l2fib all.
Here is a sample output for the static l2fib entry added previously:
Mac-Address BD-Idx If-Idx BSN-ISN Age(min) static filter bvi Interface-Name
00:00:0a:81:00:02 1 2 0/0 no * - - eth1
L2FIB total/learned entries: 1/0 Last scan time: 0.0000e0sec Learn limit: 16777216
For more detailed usage of the dpdk configuration section used above, refer to the VPP configuration reference.
Test
Configure the traffic generator to send packets to VPP input interface eth0 with a destination MAC address of 00:00:0a:81:00:02; the VPP switch will then forward those packets out on VPP output interface eth1.
To display the VPP switch interface rx/tx counters, first clear the interface counters with sudo ./vppctl -s ${sockfile_sw} clear interfaces.
After several seconds, run the command sudo ./vppctl -s ${sockfile_sw} show interface.
Here is a sample output:
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
local0 0 down 0/0/0/0
eth0 1 up 9000/0/0/0 rx packets 25261056
rx bytes 37891584000
eth1 2 up 9000/0/0/0 tx packets 25261056
tx bytes 37891584000
To display the packet processing runtime, first clear the runtime statistics with sudo ./vppctl -s ${sockfile_sw} clear runtime.
After several seconds, run the command sudo ./vppctl -s ${sockfile_sw} show runtime.
Below is key output:
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 31.7, 10 sec internal node vector rate 15.96 loops/sec 1174650.79
vector rates in 5.7792e6, out 5.7792e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
dpdk-input polling 39295300 183270628 0 1.63e0 4.66
eth1-output active 11472326 183270628 0 1.28e-1 15.98
eth1-tx active 11472326 183270628 0 5.01e-1 15.98
ethernet-input active 11472326 183270628 0 6.72e-1 15.98
l2-fwd active 11472326 183270628 0 2.99e-1 15.98
l2-input active 11472326 183270628 0 2.67e-1 15.98
l2-learn active 11472326 183270628 0 4.63e-1 15.98
l2-output active 11472326 183270628 0 1.79e-1 15.98
unix-epoll-input polling 38337 0 0 7.90e0 0.00
Stop
Kill VPP switch:
sudo pkill -9 vpp