VPP IPv4 L3fwd¶
Introduction¶
VPP IPv4 L3fwd implements the typical routing function based on 32-bit IPv4 addresses. It forwards packets using the Longest Prefix Match (LPM) algorithm based on the mtrie forwarding table.
This guide explains in detail how to run the VPP-based IPv4 forwarding use cases.
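As a quick illustration of longest prefix match, when two routes cover the same destination, the more specific prefix wins. The prefixes and next hops below are hypothetical and only meant to show the behavior:
# Hypothetical routes for illustration; eth0 and eth1 are interfaces configured later in this guide
vpp# ip route add 10.1.0.0/16 via 1.1.1.1 eth0
vpp# ip route add 10.1.1.0/24 via 3.3.3.3 eth1
# A packet destined to 10.1.1.5 matches both prefixes; the longer /24 match is chosen, so it is forwarded via 3.3.3.3 on eth1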
Test Setup¶
This guide assumes the following setup:
+------------------+                             +-------------------+
|                  |                             |                   |
|     Traffic      |                        +----|        DUT        |
|    Generator     | Ethernet Connection(s) | N  |                   |
|                  |<---------------------->| I  |                   |
|                  |                        | C  |                   |
|                  |                        +----|                   |
+------------------+                             +-------------------+
As shown, the Device Under Test (DUT) should have at least one NIC connected to the traffic generator. Any traffic generator can be used.
Run¶
Find out which interface is connected to the traffic generator. Running sudo ethtool --identify <interface> will typically blink a light on the NIC to help identify the physical port associated with the interface. Then get the PCIe address for the interface by running sudo lshw -c net -businfo.
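For example, to blink the port LED for a fixed number of seconds (the interface name and duration below are illustrative):
# Blink the LED on enP1p1s0f0 for 10 seconds to locate the physical port
sudo ethtool --identify enP1p1s0f0 10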
In this example output, if the interface enP1p1s0f0 is connected to the traffic generator, the corresponding PCIe address is 0001:01:00.0:
$ sudo lshw -c net -businfo
Bus info          Device      Class    Description
===================================================
pci@0000:07:00.0  eth0        network  RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
pci@0001:01:00.0  enP1p1s0f0  network  MT27800 Family [ConnectX-5]
pci@0001:01:00.1  enP1p1s0f1  network  MT27800 Family [ConnectX-5]
Start VPP with the interactive command line argument, and alias the interface name to eth0 for short. For more argument parameters, refer to the VPP configuration reference:
cd <nw_ds_workspace>/dataplane-stack
sudo ./components/vpp/build-root/install-vpp-native/vpp/bin/vpp unix {interactive} dpdk { dev 0001:01:00.0 { name eth0 } }
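Once VPP is running, a quick sanity check is to list the interfaces from the VPP command line; assuming the DPDK device was picked up correctly, eth0 should appear alongside the default local0 interface:
# List interfaces known to VPP
vpp# show interface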
Typically, VPP is configured for either 1 packet flow or 10k packet flows. Both cases start with the following common VPP configuration commands:
# Same for different packet flow setups
vpp# set interface ip address eth0 1.1.1.2/30
vpp# set ip neighbor eth0 1.1.1.1 02:00:00:00:00:00
vpp# set interface state eth0 up
For more detailed usage of the above commands, refer to the VPP CLI reference.
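As a quick check of the configuration above, the assigned address, static neighbor, and interface state can be displayed from the VPP command line (exact command names may vary slightly between VPP releases):
# Verify the address, neighbor, and link state configured above
vpp# show interface address
vpp# show ip neighbors
vpp# show interface eth0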
For the 1 packet flow case:
# Add only one route entry here
vpp# ip route add 10.0.0.1/32 count 1 via 1.1.1.1 eth0
For the 10k packet flows case:
# Add 10k route entries here
vpp# ip route add 10.0.0.1/32 count 10000 via 1.1.1.1 eth0
Refer to the VPP ip route reference for more ip route options.
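To confirm how many routes were actually installed, the FIB can be summarized by prefix length; for the 10k flows case the /32 count should reflect the added entries (assuming show ip fib summary is available in the VPP build in use):
# Summarize the FIB by prefix length
vpp# show ip fib summary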
To explore more of VPP's accepted commands, review the VPP CLI reference.
Test¶
To display the current set of routes, use the command show ip fib. Here is a sample output for the added routes:
vpp# show ip fib 10.0.0.1/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] epoch:0 flags:none locks:[adjacency:1, default-route:1, ]
10.0.0.1/32 fib:0 index:17 locks:2
  CLI refs:1 src-flags:added,contributing,active,
    path-list:[22] locks:20000 flags:shared,popular, uPRF-list:22 len:1 itfs:[1, ]
      path:[26] pl-index:22 ip4 weight=1 pref=0 attached-nexthop: oper-flags:resolved,
        1.1.1.1 eth0
      [@0]: ipv4 via 1.1.1.1 eth0: mtu:9000 next:3 flags:[] 02000000000098039b6b62680800
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:22 to:[0:0]]
    [0] [@5]: ipv4 via 1.1.1.1 eth0: mtu:9000 next:3 flags:[] 02000000000098039b6b62680800
Check the packet flow with IP destination 10.0.0.1/32: the next hop is resolved, and packets will be forwarded to 1.1.1.1 via eth0.
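To confirm the forwarding path once traffic is flowing, VPP's packet tracer can capture a small number of packets at the dpdk-input node and show the graph nodes they traverse; a brief sketch:
# Capture the next 10 packets received by the dpdk-input node
vpp# trace add dpdk-input 10
# After sending traffic, inspect the captured packets and the nodes they passed through
vpp# show trace
# Clear the trace buffer when finished
vpp# clear trace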
To configure the traffic generator with the correct destination MAC address, get the VPP interface MAC address via show hardware-interfaces verbose:
vpp# show hardware-interfaces verbose
              Name                Idx   Link  Hardware
eth0                               1     up   eth0
  Link speed: 40 Gbps
  RX Queues:
    queue thread         mode
    0     vpp_wk_0 (1)   polling
    1     vpp_wk_0 (1)   polling
  Ethernet address 02:fe:40:5e:73:e3
  netdev enP1p1s0f0 pci-addr 0001:01:00.0
For the 1 packet flow case, configure the traffic generator to send packets with a destination MAC address of 02:fe:40:5e:73:e3 and a destination IP address of 10.0.0.1; VPP will then forward those packets out on eth0.
For the 10000 packet flows case, configure the traffic generator to send packets with a destination MAC address of 02:fe:40:5e:73:e3 and destination IP addresses starting from 10.0.0.1/32 and incrementing by 1 for 10000 increments. VPP will forward these packets out on eth0.
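While the traffic generator is sending, interface and node counters give a quick sanity check that packets are being forwarded as expected; for example:
# Per-interface rx/tx packet counters
vpp# show interface
# Per-node statistics, including the ip4-lookup and ip4-rewrite nodes
vpp# show runtime
# Any drops or errors encountered so far
vpp# show errors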
Suggested Experiments¶
Add another interface¶
To add another interface in VPP, first obtain its PCIe address. For example, enP1p1s0f1 in the lshw output sample has a PCIe address of 0001:01:00.1.
Start VPP with the interactive command line argument, and add the additional interface as follows:
cd <nw_ds_workspace>/dataplane-stack
sudo ./components/vpp/build-root/install-vpp-native/vpp/bin/vpp unix {interactive} dpdk { dev 0001:01:00.0 { name eth0 } dev 0001:01:00.1 { name eth1 }}
Configure the new interface in the VPP command line, using a different interface name, IP address, and neighbor:
vpp# set interface ip address eth1 3.3.3.2/30
vpp# set ip neighbor eth1 3.3.3.3 02:00:00:00:00:01
vpp# set interface state eth1 up
New routes can be added via this interface afterwards:
vpp# ip route add 30.0.0.0/32 count 1 via 3.3.3.3 eth1
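The new route can be inspected the same way as before; for example:
# Verify the route via eth1 is installed and resolved
vpp# show ip fib 30.0.0.0/32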
Start with configuration file¶
To start VPP with a startup configuration file, refer to VPP starts with configuration file.
Create a very simple startup.conf file:
cd <nw_ds_workspace>/dataplane-stack
cat <<EOF > startup.conf
unix {
interactive
}
EOF
Instruct VPP to load this file with the -c option. For example:
sudo ./components/vpp/build-root/install-vpp-native/vpp/bin/vpp -c startup.conf
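The command line arguments used earlier in this guide can also be expressed in the startup file; a sketch combining the unix and dpdk sections shown above (using the same PCIe address as in the earlier example):
unix {
interactive
}
dpdk {
dev 0001:01:00.0 {
name eth0
}
}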
Add CPU cores to worker threads¶
To add more CPU cores to the VPP data plane for better performance, refer to the VPP configuration cpu section:
cpu {
main-core 1
corelist-workers 2-3,18-19
}
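After starting VPP with worker cores configured, thread placement and receive-queue assignment can be checked from the VPP command line; for example:
# List the main and worker threads and the cores they run on
vpp# show threads
# Show which worker thread polls each interface receive queue
vpp# show interface rx-placement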
Change number of descriptors in receive ring and transmit ring¶
Changing the number of descriptors in the receive and transmit rings can impact performance. The default is 1024; refer to the VPP configuration num-rx-desc and num-tx-desc parameters:
dpdk {
dev default {
num-rx-desc 512
num-tx-desc 512
}
}
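Descriptor counts can also be set for a specific device instead of all devices; a sketch using the PCIe address from this guide:
dpdk {
dev 0001:01:00.0 {
name eth0
num-rx-desc 512
num-tx-desc 512
}
}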
Use faster DPDK vector PMDs¶
It is possible to use faster DPDK vector PMDs by disabling multi-segment buffers and UDP/TCP TX checksum offload. This improves performance but does not support jumbo MTU. To utilize the DPDK vector PMDs, refer to the VPP configuration no-multi-seg parameter:
dpdk {
no-multi-seg
no-tx-checksum-offload
}
Use other types of device drivers¶
Besides the Mellanox ConnectX-5, VPP supports NICs from other vendors as well. VPP integrates with NICs using the following two methods: