VPP IPSec
Introduction
IPSec (Internet Protocol Security) is a set of protocols and algorithms used to secure and protect communication over the internet or any public network. IPSec provides the following main security services:
Confidentiality
Integrity
Authentication
Anti-replay
IPSec uses the following protocols to perform various functions:
AH: The authentication header (AH) protocol adds a header containing sender authentication data and protects either the entire IP packet or only the payload of the IP packet.
ESP: The encapsulating security payload (ESP) protocol performs encryption on the entire IP packet or only the payload of the IP packet.
IKE: Internet key exchange (IKEv1 and IKEv2) is a protocol that establishes a secure connection between two devices on the internet, which involves negotiating encryption keys and algorithms.
The IPSec protocols AH and ESP can be implemented in two different modes with different degrees of protection:
Transport: only the payload of the IP packet is encrypted or authenticated. The routing is unmodified, so transport mode is used for end-to-end secure communication.
Tunnel: the entire IP packet is encrypted and authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used for gateway-to-gateway secure communication.
The VPP-based IPSec solution provides two configurations for selecting traffic to secure: policy and protection.
Policy: references a policy to determine which type of IP traffic needs to be secured using IPSec and how to secure that traffic.
Protection: references a route rule to determine which traffic needs to be secured using IPSec based on the destination IP address.
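As a rough illustration of the difference, the two styles map to different VPP CLI commands. The sketch below is hypothetical: the interface names, SA IDs, SPIs, keys, and addresses are placeholders, not the values used by this guide's scripts.

```shell
# Policy: classify outbound traffic against an SPD and protect matches with an SA.
sudo vppctl ipsec sa add 10 spi 1001 esp crypto-alg aes-gcm-128 \
    crypto-key 4a506a794f574265564551694d653768
sudo vppctl ipsec spd add 1
sudo vppctl set interface ipsec spd Ethernet0 1
sudo vppctl ipsec policy add spd 1 priority 10 outbound action protect sa 10 \
    local-ip-range 192.81.0.0 - 192.81.255.255 \
    remote-ip-range 192.82.0.0 - 192.82.255.255

# Protection: protect an IP-in-IP tunnel and steer traffic into it with a route.
sudo vppctl create ipip tunnel src 192.161.0.1 dst 192.162.0.1
sudo vppctl ipsec sa add 20 spi 2001 esp crypto-alg aes-gcm-128 \
    crypto-key 4a506a794f574265564551694d653768
sudo vppctl ipsec sa add 21 spi 2002 esp crypto-alg aes-gcm-128 \
    crypto-key 4a506a794f574265564551694d653768
sudo vppctl ipsec tunnel protect ipip0 sa-out 20 sa-in 21
sudo vppctl ip route add 192.82.0.0/16 via ipip0
```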
This guide explains in detail how to create an IPSec session between two VPP instances using either memif interfaces or Ethernet interfaces. It establishes an IPSec session in ESP tunnel mode and covers both the policy and protection configurations. By following the guidance and leveraging the knowledge it delivers, users can easily work out the configuration for ESP transport mode, as well as the AH tunnel and transport modes.
Memif Connection
Shared memory packet interface (memif) is a software emulated Ethernet interface, which provides high performance packet transmit and receive between VPP and user application or multiple VPP instances.
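For reference, a native memif interface can be created with a few vppctl commands. This is a minimal sketch; the socket path, IDs, and IP address are illustrative, and the provided scripts handle this automatically.

```shell
# Create a memif socket and a server-role interface, then bring it up.
sudo vppctl create memif socket id 1 filename /tmp/memif_demo
sudo vppctl create interface memif id 1 socket-id 1 master
sudo vppctl set interface state memif1/1 up
sudo vppctl set interface ip address memif1/1 10.10.0.1/24
```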
In this setup, there are two VPP instances named Local and Remote, connected with two pairs of memif interfaces.
On the Local VPP instance, DPDK zero-copy memif interfaces are used for testing the VPP + DPDK stack.
On the Remote VPP instance, VPP's native memif interfaces are used for performance reasons.
The Local instance receives unencrypted packets from one DPDK memif interface and forwards them in encrypted form through another DPDK memif interface.
The Remote instance is configured as a traffic generator for unencrypted packets, while also being capable of receiving encrypted packets from one memif interface and decrypting them.
Here is the topology:
Users can quickly run the VPP instances and set up the IPSec session through the provided scripts located at $NW_DS_WORKSPACE/dataplane-stack/usecase/ipsec.
Overview
In the memif connection scenario, the main operations of each script are as follows:
run_vpp_remote.sh
Run the Remote VPP instance on the specified CPU cores
Create two VPP memif interfaces in the server role
Bring interfaces up and set their IP addresses
Configure a software traffic generator
run_vpp_local.sh
Run the Local VPP instance on the specified CPU cores
Create two DPDK memif interfaces in the client role with zero-copy enabled
Bring interfaces up and set their IP addresses
ipsec_remote_setup.sh
Set up IPSec on the Remote VPP instance.
With the policy configuration:
Set the crypto engine as specified
Create a Security Policy Database (SPD)
Enable the SPD on the interface sending packets from local to remote
Create a Security Association (SA), a set of security parameters
Add an SPD entry
Add an IP route entry
Start sending traffic
With the protection configuration:
Set the crypto engine as specified
Create an IP-in-IP tunnel
Create a Security Association (SA), a set of security parameters
Add the SA to the IP-in-IP tunnel
Add an IP route entry
Start sending traffic
ipsec_local_setup.sh
Set up IPSec on the Local VPP instance.
With the policy configuration:
Set the crypto engine as specified
Create a Security Policy Database (SPD)
Enable the SPD on the interface sending packets from local to remote
Create a Security Association (SA), a set of security parameters
Add an SPD entry
Add IP route entries
With the protection configuration:
Set the crypto engine as specified
Create an IP-in-IP tunnel
Create a Security Association (SA), a set of security parameters
Add the SA to the IP-in-IP tunnel
Add IP route entries
traffic_monitor.sh
Monitor IPSec throughput with the VPP show runtime command
stop.sh
Stop both Local and Remote VPP instances
Execution
Before setup, declare a variable to select VPP's IPSec configuration (choose one of the following):
export IPSEC_CONFIG=policy
export IPSEC_CONFIG=protection
Note
The setup below requires at least three isolated cores for the VPP workers. Cores 2-4 are assumed to be isolated in this guide.
Quickly set up the Local and Remote VPP instances and establish the IPSec session:
cd $NW_DS_WORKSPACE/dataplane-stack
./usecase/ipsec/run_vpp_remote.sh -m -c 1,2,3
./usecase/ipsec/run_vpp_local.sh -m -c 1,4
./usecase/ipsec/ipsec_remote_setup.sh -m -e native -a aes-gcm-128 --config ${IPSEC_CONFIG}
./usecase/ipsec/ipsec_local_setup.sh -m -e native -a aes-gcm-128 --config ${IPSEC_CONFIG}
This setup will run the Local and Remote VPP instances on the specified CPU cores and configure the memif interfaces.
Furthermore, it will establish an IPSec session through the memif interfaces using the native crypto engine, the aes-gcm-128 cipher algorithm, and the specified configuration.
Note
The Remote VPP instance has to be started first since it is in the memif server role.
Use -h to check the scripts' supported options.
The Local and Remote VPP instances must use the same IPSec configuration.
After setup, the Remote VPP instance will start sending traffic to the Local instance. The Local VPP instance will then encapsulate the received packets and forward them to the Remote VPP instance.
Now, examine the IPSec packet processing runtime.
On the Local VPP instance:
./usecase/ipsec/traffic_monitor.sh -m -i local
The runtime of the IPSec encrypt worker thread will look like the following (the first output corresponds to the policy configuration, the second to the protection configuration):
---------------
Thread 1 vpp_wk_0 (lcore 4)
Time 3.0, 10 sec internal node vector rate 256.00 loops/sec 15579.95
vector rates in 3.9887e6, out 3.9887e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 46966 12023296 0 5.81e-2 256.00
Ethernet1-tx active 46966 12023296 0 7.32e-1 256.00
dpdk-input polling 46966 12023296 0 4.51e-1 256.00
esp4-encrypt active 46966 12023296 0 2.57e0 256.00
ethernet-input active 46966 12023296 0 2.29e-1 256.00
interface-output active 46966 12023296 0 6.83e-2 256.00
ip4-input active 46966 12023296 0 1.88e-1 256.00
ip4-load-balance active 46966 12023296 0 1.28e-1 256.00
ip4-lookup active 46966 12023296 0 1.59e-1 256.00
ip4-rewrite active 93932 24046592 0 2.90e-1 256.00
ipsec4-output-feature active 93932 24046592 0 5.48e-1 256.00
unix-epoll-input polling 45 0 0 3.85e1 0.00
---------------
Thread 1 vpp_wk_0 (lcore 4)
Time 3.0, 10 sec internal node vector rate 256.00 loops/sec 17669.54
vector rates in 4.5869e6, out 4.5869e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 54005 13825280 0 6.07e-2 256.00
Ethernet1-tx active 54005 13825280 0 7.41e-1 256.00
adj-midchain-tx active 54005 13825280 0 1.26e-1 256.00
dpdk-input polling 54005 13825280 0 4.51e-1 256.00
esp4-encrypt-tun active 54005 13825280 0 2.58e0 256.00
ethernet-input active 54005 13825280 0 2.23e-1 256.00
ip4-input active 54005 13825280 0 1.89e-1 256.00
ip4-lookup active 54005 13825280 0 1.53e-1 256.00
ip4-midchain active 54005 13825280 0 7.07e-1 256.00
ip4-rewrite active 54005 13825280 0 2.16e-1 256.00
unix-epoll-input polling 53 0 0 3.02e1 0.00
On the Remote VPP instance:
./usecase/ipsec/traffic_monitor.sh -m -i remote
The runtime of the IPSec decrypt worker thread will look like the following (the first output corresponds to the policy configuration, the second to the protection configuration):
---------------
Thread 2 vpp_wk_1 (lcore 3)
Time 3.0, 10 sec internal node vector rate 256.00 loops/sec 3370221.41
vector rates in 3.9840e6, out 0.0000e0, drop 3.9840e6, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
drop active 46904 12007424 0 1.42e-1 256.00
error-drop active 46904 12007424 0 5.66e-2 256.00
esp4-decrypt active 46904 12007424 0 2.41e0 256.00
ethernet-input active 46904 12007424 0 1.75e-1 256.00
ip4-drop active 46904 12007424 0 2.92e-2 256.00
ip4-input active 46904 12007424 0 2.63e-1 256.00
ip4-input-no-checksum active 46904 12007424 0 2.28e-1 256.00
ip4-lookup active 46904 12007424 0 1.57e-1 256.00
ipsec4-input-feature active 93808 24014848 0 4.09e-1 256.00
memif-input polling 10308218 12007424 0 1.38e0 1.16
unix-epoll-input polling 10057 0 0 1.37e1 0.00
---------------
Thread 2 vpp_wk_1 (lcore 3)
Time 3.0, 10 sec internal node vector rate 256.00 loops/sec 2064070.44
vector rates in 4.5923e6, out 0.0000e0, drop 4.5923e6, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
drop active 54064 13840384 0 1.43e-1 256.00
error-drop active 54064 13840384 0 5.61e-2 256.00
esp4-decrypt-tun active 54064 13840384 0 2.43e0 256.00
ethernet-input active 54064 13840384 0 1.76e-1 256.00
ip4-drop active 54064 13840384 0 2.92e-2 256.00
ip4-input active 54064 13840384 0 1.87e-1 256.00
ip4-input-no-checksum active 54064 13840384 0 1.56e-1 256.00
ip4-lookup active 108128 27680768 0 1.76e-1 256.00
ip4-receive active 54064 13840384 0 2.11e-1 256.00
ipsec4-tun-input active 54064 13840384 0 1.98e-1 256.00
memif-input polling 6380277 13840384 0 1.18e0 2.17
unix-epoll-input polling 6224 0 0 1.73e1 0.00
Note
vector rates provide insight into the packet processing throughput of a specific node or function in VPP.
Vectors/Call measures packet processing efficiency in VPP as the average number of packets processed per call of a specific node or function.
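A node's packet rate can be approximated from these columns by dividing its Vectors count by the reported Time; for example, using the esp4-encrypt numbers from the sample output above:

```shell
# Approximate a node's packet rate (pps) from 'show runtime' columns.
vectors=12023296   # 'Vectors' column for esp4-encrypt
interval=3.0       # 'Time' value from the thread header
awk -v v="$vectors" -v t="$interval" 'BEGIN { printf "%.2e pps\n", v / t }'
```

The result, about 4.0e6 pps, is consistent with the vector rates line in the same output.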
Stop both Local and Remote VPP instances:
./usecase/ipsec/stop.sh
For more detailed usage of the VPP CLI commands used in the scripts, refer to the following links:
Ethernet Connection
In this IPSec scenario, the Local and Remote VPP instances run on separate hardware platforms and are connected together via Ethernet adapters and cables.
The traffic generator could be software-based, e.g., VPP/TRex/TrafficGen running on regular servers, or hardware-based, e.g., IXIA/Spirent SmartBits.
Before starting, all the required interfaces, illustrated above, need to be identified. These are the interfaces connecting the Local machine to the traffic generator and to Remote, which in turn has an interface connecting back to Local.
The interface names and PCIe addresses can be fetched by running sudo lshw -c net -businfo.
Note
You can also physically identify the port associated with the interface by running sudo ethtool --identify <interface_name>, which typically blinks a light on the NIC.
On Local, the output will look like:
Bus info Device Class Description
====================================================
pci@0001:01:00.0 enP1p1s0f0 network MT27800 Family [ConnectX-5]
pci@0001:01:00.1 enP1p1s0f1 network MT27800 Family [ConnectX-5]
Whereas on Remote, it will look like:
Bus info Device Class Description
========================================================
pci@0000:01:00.0 enp1s0f0np0 network MT28800 Family [ConnectX-5 Ex]
In this example setup, enP1p1s0f0 at PCIe address 0001:01:00.0 on Local is connected to the traffic generator, while enP1p1s0f1 at PCIe address 0001:01:00.1 on Local and enp1s0f0np0 at PCIe address 0000:01:00.0 on Remote are interconnected.
Users can quickly run the VPP instances and set up the IPSec session through the provided scripts located at $NW_DS_WORKSPACE/dataplane-stack/usecase/ipsec.
Overview
In an Ethernet connection scenario, the main operations of each script are as follows:
run_vpp_remote.sh
Run the Remote VPP instance on the specified CPU cores
Attach one DPDK interface at the specified PCIe address to VPP
Bring the interface up and set its IP address
run_vpp_local.sh
Run the Local VPP instance on the specified CPU cores
Attach two DPDK interfaces at the specified PCIe addresses to VPP
Bring the interfaces up and set their IP addresses
ipsec_remote_setup.sh
Set up IPSec on the Remote VPP instance.
With the policy configuration:
Set the crypto engine as specified
Create a Security Policy Database (SPD)
Enable the SPD on the interface sending packets from local to remote
Create a Security Association (SA), a set of security parameters
Add an SPD entry
Add the IP route entries
With the protection configuration:
Set the crypto engine as specified
Create an IP-in-IP tunnel
Create a Security Association (SA), a set of security parameters
Add the SA to the IP-in-IP tunnel
Add the IP route entries
ipsec_local_setup.sh
Set up IPSec on the Local VPP instance.
With the policy configuration:
Set the crypto engine as specified
Create a Security Policy Database (SPD)
Enable the SPD on the interface sending packets from local to remote
Create a Security Association (SA), a set of security parameters
Add an SPD entry
Add the IP route entries
With the protection configuration:
Set the crypto engine as specified
Create an IP-in-IP tunnel
Create a Security Association (SA), a set of security parameters
Add the SA to the IP-in-IP tunnel
Add the IP route entries
stop.sh
Stop both Local and Remote VPP instances
traffic_monitor.sh
Monitor IPSec throughput with the VPP show runtime command
Execution
On the Local machine, declare a variable to select VPP's IPSec configuration (choose one of the following):
export IPSEC_CONFIG=policy
export IPSEC_CONFIG=protection
Note
The setup below requires at least one isolated core for VPP workers on every machine. Core 2 is assumed to be isolated on the Local machine.
Quickly set up the Local VPP instance and establish the IPSec session:
cd $NW_DS_WORKSPACE/dataplane-stack
./usecase/ipsec/run_vpp_local.sh -p 0001:01:00.0,0001:01:00.1 -c 1,2
./usecase/ipsec/ipsec_local_setup.sh -p -e native -a aes-gcm-128 --config ${IPSEC_CONFIG}
This setup will run the Local VPP instance on the specified CPU cores and configure two physical NICs.
Furthermore, it will establish an IPSec session through the physical NICs using the native crypto engine, the aes-gcm-128 cipher algorithm, and the specified configuration.
On the Remote machine, declare a variable to select VPP's IPSec configuration (choose one of the following):
export IPSEC_CONFIG=policy
export IPSEC_CONFIG=protection
Note
The setup below assumes that core 2 is isolated on the Remote machine.
The Local and Remote VPP instances must use the same IPSec configuration.
Quickly set up the Remote VPP instance and establish the IPSec session:
cd $NW_DS_WORKSPACE/dataplane-stack
./usecase/ipsec/run_vpp_remote.sh -p 0000:01:00.0 -c 1,2
./usecase/ipsec/ipsec_remote_setup.sh -p -e native -a aes-gcm-128 --config ${IPSEC_CONFIG}
This setup will run the Remote VPP instance on the specified CPU cores and configure one physical NIC.
Furthermore, it will establish an IPSec session through the physical NIC using the native crypto engine, the aes-gcm-128 cipher algorithm, and the specified configuration.
Note
Replace the sample addresses in the above commands with your PCIe addresses for the Local and Remote machines.
On the Local machine, enP1p1s0f0 is connected to the traffic generator. Get the enP1p1s0f0 MAC address via ip link show enP1p1s0f0:
10: enP1p1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8996 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether b8:ce:f6:10:e4:6c brd ff:ff:ff:ff:ff:ff
In this case, configure the traffic generator to send packets to the Local VPP instance with a destination MAC address of b8:ce:f6:10:e4:6c, a source IP address of 192.81.0.1, and a destination IP address of 192.82.0.1. The Local VPP instance will then encapsulate the received packets in a new IP header with destination IP address 192.162.0.1, encrypt the original packets, and forward the encrypted ones through the IPSec tunnel to the Remote VPP instance.
Now, monitor the IPSec throughput with traffic_monitor.sh. On the Local VPP instance, run:
./usecase/ipsec/traffic_monitor.sh -p -i local
The runtime output of the IPSec encrypt worker thread will look like the following (the first output corresponds to the policy configuration, the second to the protection configuration):
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 3.0, 10 sec internal node vector rate 134.15 loops/sec 30524.29
vector rates in 4.1284e6, out 4.1284e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 92733 12440744 0 5.14e-2 134.16
Ethernet1-tx active 92733 12440744 0 4.09e-1 134.16
dpdk-input polling 92733 12440744 0 8.66e-1 134.16
esp4-encrypt active 92733 12440744 0 2.18e0 134.16
ethernet-input active 92733 12440744 0 4.58e-1 134.16
interface-output active 92733 12440744 0 7.09e-2 134.16
ip4-input-no-checksum active 92733 12440744 0 1.53e-1 134.16
ip4-load-balance active 92733 12440744 0 1.27e-1 134.16
ip4-lookup active 92733 12440744 0 1.65e-1 134.16
ip4-rewrite active 185466 24881488 0 2.49e-1 134.16
ipsec4-output-feature active 185466 24881488 0 5.35e-1 134.16
unix-epoll-input polling 91 0 0 2.72e1 0.00
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 3.0, 10 sec internal node vector rate 134.11 loops/sec 35762.10
vector rates in 4.8512e6, out 4.8512e6, drop 0.0000e0, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
Ethernet1-output active 109020 14619307 0 5.28e-2 134.09
Ethernet1-tx active 109020 14619307 0 4.10e-1 134.09
adj-midchain-tx active 109020 14619307 0 1.26e-1 134.09
dpdk-input polling 109020 14619307 0 8.64e-1 134.09
esp4-encrypt-tun active 109020 14619307 0 2.39e0 134.09
ethernet-input active 109020 14619307 0 4.57e-1 134.09
ip4-input-no-checksum active 109020 14619307 0 1.53e-1 134.09
ip4-lookup active 109020 14619307 0 1.57e-1 134.09
ip4-midchain active 109020 14619307 0 3.18e-1 134.09
ip4-rewrite active 109020 14619307 0 2.15e-1 134.09
unix-epoll-input polling 106 0 0 2.68e1 0.00
On the Remote VPP instance, run:
./usecase/ipsec/traffic_monitor.sh -p -i remote
The runtime output of the IPSec decrypt worker thread will look like the following (the first output corresponds to the policy configuration, the second to the protection configuration):
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 3.0, 10 sec internal node vector rate 108.15 loops/sec 32832.26
vector rates in 3.5439e6, out 0.0000e0, drop 3.5439e6, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
dpdk-input polling 104394 10682800 0 9.71e-1 102.33
drop active 98780 10682800 0 1.55e-1 108.15
error-drop active 98780 10682800 0 7.27e-2 108.15
esp4-decrypt active 98780 10682800 0 4.14e0 108.15
ethernet-input active 98780 10682800 0 4.93e-1 108.15
ip4-drop active 98780 10682800 0 3.63e-2 108.15
ip4-input-no-checksum active 197560 21365600 0 2.27e-1 108.15
ip4-lookup active 98780 10682800 0 1.67e-1 108.15
ipsec4-input-feature active 197560 21365600 0 2.78e-1 108.15
unix-epoll-input polling 101 0 0 2.99e1 0.00
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 3.0, 10 sec internal node vector rate 115.77 loops/sec 30719.77
vector rates in 3.5959e6, out 0.0000e0, drop 3.5959e6, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
dpdk-input polling 93819 10837964 0 9.71e-1 115.52
drop active 93637 10837964 0 1.53e-1 115.74
error-drop active 93637 10837964 0 6.94e-2 115.74
esp4-decrypt-tun active 93637 10837964 0 4.09e0 115.74
ethernet-input active 93637 10837964 0 4.87e-1 115.74
ip4-drop active 93637 10837964 0 3.57e-2 115.74
ip4-input-no-checksum active 187274 21675928 0 1.59e-1 115.74
ip4-lookup active 187274 21675928 0 1.89e-1 115.74
ip4-receive active 93637 10837964 0 2.25e-1 115.74
ipsec4-tun-input active 93637 10837964 0 2.13e-1 115.74
unix-epoll-input polling 92 0 0 3.46e1 0.00
Note
vector rates provide insight into the packet processing throughput of a specific node or function in VPP.
Vectors/Call measures packet processing efficiency in VPP as the average number of packets processed per call of a specific node or function.
Kill the VPP instances on both Local and Remote machines:
./usecase/ipsec/stop.sh
Suggested Experiments
Performance Improvement
SPD Acceleration
Use the Security Policy Database (SPD) acceleration options in the startup configuration to improve IPSec performance in policy mode with multiple tunnels.
ipsec {
ipv6-outbound-spd-fast-path on
ipv4-outbound-spd-fast-path on
ipv6-inbound-spd-fast-path on
ipv4-inbound-spd-fast-path on
spd-fast-path-num-buckets 256
ipv4-outbound-spd-flow-cache on
ipv4-outbound-spd-hash-buckets 4194304
ipv4-inbound-spd-flow-cache on
ipv4-inbound-spd-hash-buckets 4194304
}
For example, run a VPP instance with the above startup configuration from the command line (remember to adjust MAIN_CORE and WORKER_CORES to suit your machine):
export VPP_RUNTIME_DIR="/run/vpp/local"
export SOCKFILE="${VPP_RUNTIME_DIR}/cli_local.sock"
export VPP_LOCAL_PIDFILE="${VPP_RUNTIME_DIR}/vpp_local.pid"
export MEMIF_SOCKET1="/tmp/memif_ipsec_1"
export MEMIF_SOCKET2="/tmp/memif_ipsec_2"
export MAIN_CORE=1
export WORKER_CORES=4
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix { runtime-dir ${VPP_RUNTIME_DIR} cli-listen ${SOCKFILE} pidfile ${VPP_LOCAL_PIDFILE} } cpu { main-core ${MAIN_CORE} corelist-workers ${WORKER_CORES} } plugins { plugin default { disable } plugin dpdk_plugin.so { enable } plugin crypto_native_plugin.so {enable} plugin crypto_openssl_plugin.so {enable} } dpdk { no-pci single-file-segments dev default {num-tx-queues 1 num-rx-queues 1 } vdev net_memif0,role=client,id=1,socket-abstract=no,socket=${MEMIF_SOCKET1},mac=02:fe:a4:26:ca:ac,zero-copy=yes vdev net_memif1,role=client,id=1,socket-abstract=no,socket=${MEMIF_SOCKET2},mac=02:fe:a4:26:ca:ad,zero-copy=yes } ipsec { ipv6-outbound-spd-fast-path on ipv4-outbound-spd-fast-path on ipv6-inbound-spd-fast-path on ipv4-inbound-spd-fast-path on spd-fast-path-num-buckets $((1<<8)) ipv4-outbound-spd-flow-cache on ipv4-outbound-spd-hash-buckets $((1<<22)) ipv4-inbound-spd-flow-cache on ipv4-inbound-spd-hash-buckets $((1<<22)) }
Configure the DPDK memif interfaces:
sudo ./vppctl -s "${SOCKFILE}" set interface state Ethernet0 up
sudo ./vppctl -s "${SOCKFILE}" set interface state Ethernet1 up
sudo ./vppctl -s "${SOCKFILE}" set interface ip address Ethernet0 10.11.0.1/16
sudo ./vppctl -s "${SOCKFILE}" set interface ip address Ethernet1 10.12.0.1/16
This needs to be done in conjunction with running the remaining scripts. The manual launch above replaces the run_vpp_local.sh script.
Crypto Engine
VPP supports multiple software-implemented crypto engines, i.e., the native, IPSec-MB, and OpenSSL based engines. List the available crypto engines with:
sudo ./vppctl -s ${SOCKFILE} show crypto engines
Name Prio Description
ipsecmb 80 Intel(R) Multi-Buffer Crypto for IPsec Library 1.3.0
native 100 Native ISA Optimized Crypto
openssl 50 OpenSSL
sw_scheduler 100 SW Scheduler Async Engine
dpdk_cryptodev 100 DPDK Cryptodev Engine
Show the per-algorithm crypto handlers (the active handler is marked with an asterisk) with:
sudo ./vppctl -s ${SOCKFILE} show crypto handlers
Algo Type Simple Chained
(nil)
des-cbc encrypt openssl* openssl*
decrypt openssl* openssl*
3des-cbc encrypt openssl* openssl*
decrypt openssl* openssl*
aes-128-cbc encrypt ipsecmb native* openssl openssl*
decrypt ipsecmb native* openssl openssl*
aes-192-cbc encrypt ipsecmb native* openssl openssl*
decrypt ipsecmb native* openssl openssl*
aes-256-cbc encrypt ipsecmb native* openssl openssl*
decrypt ipsecmb native* openssl openssl*
aes-128-ctr encrypt ipsecmb* openssl openssl*
decrypt ipsecmb* openssl openssl*
aes-192-ctr encrypt ipsecmb* openssl openssl*
decrypt ipsecmb* openssl openssl*
aes-256-ctr encrypt ipsecmb* openssl openssl*
decrypt ipsecmb* openssl openssl*
aes-128-gcm aead-encrypt ipsecmb native* openssl ipsecmb* openssl
aead-decrypt ipsecmb native* openssl ipsecmb* openssl
aes-192-gcm aead-encrypt ipsecmb native* openssl ipsecmb* openssl
aead-decrypt ipsecmb native* openssl ipsecmb* openssl
aes-256-gcm aead-encrypt ipsecmb native* openssl ipsecmb* openssl
aead-decrypt ipsecmb native* openssl ipsecmb* openssl
chacha20-poly1305 aead-encrypt ipsecmb* openssl ipsecmb* openssl
aead-decrypt ipsecmb* openssl ipsecmb* openssl
hmac-md5 hmac openssl* openssl*
hmac-sha-1 hmac ipsecmb* openssl openssl*
hmac-sha-224 hmac ipsecmb* openssl openssl*
hmac-sha-256 hmac ipsecmb* openssl openssl*
hmac-sha-384 hmac ipsecmb* openssl openssl*
hmac-sha-512 hmac ipsecmb* openssl openssl*
sha-1 hash openssl* openssl*
sha-224 hash openssl* openssl*
sha-256 hash openssl* openssl*
sha-384 hash openssl* openssl*
sha-512 hash openssl* openssl*
Users can specify the crypto engine for a given cipher algorithm to get better performance. Normally, the native crypto engine delivers better performance than IPSec-MB and OpenSSL, while OpenSSL provides the fullest support for the various algorithms.
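If a different engine works better for a particular algorithm on your hardware, the active handler can be re-pinned per algorithm with set crypto handler. The engine and algorithm below are examples taken from the handler list above, not a recommendation:

```shell
# Switch the aes-128-gcm handler to the ipsecmb engine,
# then confirm the change (ipsecmb should now carry the '*' marker).
sudo ./vppctl -s ${SOCKFILE} set crypto handler aes-128-gcm ipsecmb
sudo ./vppctl -s ${SOCKFILE} show crypto handlers
```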
Internet Key Exchange
Internet key exchange (IKE) is a protocol that establishes a secure connection between two devices on the internet. Both devices set up security association (SA), which involves negotiating encryption keys and algorithms to transmit and receive subsequent data packets. This section describes how to initiate an IKEv2 session between two VPP instances using memif interfaces.
Responder
Run the responder VPP instance:
export ike_res_runtime_dir="/run/vpp/ike_res"
export sockfile_responder="${ike_res_runtime_dir}/cli_responder.sock"
export memif_socket_ike="/tmp/vpp_ipsec_ike"
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix { runtime-dir ${ike_res_runtime_dir} cli-listen ${sockfile_responder}} cpu { main-core 1 corelist-workers 2 } dpdk { no-pci }
Create the VPP memif interface:
sudo ./vppctl -s ${sockfile_responder} create memif socket id 1 filename ${memif_socket_ike}
sudo ./vppctl -s ${sockfile_responder} create int memif id 1 socket-id 1 rx-queues 1 tx-queues 1 master
Configure the VPP memif interface:
sudo ./vppctl -s ${sockfile_responder} set interface state memif1/1 up
sudo ./vppctl -s ${sockfile_responder} set interface ip address memif1/1 192.161.0.1/16
sudo ./vppctl -s ${sockfile_responder} set ip neighbor memif1/1 10.11.0.2 02:fe:a4:26:ca:f2
Configure the responder for IKEv2:
sudo ./vppctl -s ${sockfile_responder} ikev2 profile add pr1
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 auth shared-key-mic string Vpp123
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 id remote ip4-addr 192.162.0.1
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 id local ip4-addr 192.161.0.1
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 traffic-selector remote ip-range 192.82.0.1 - 192.82.0.255 port-range 0 - 65535 protocol 0
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 traffic-selector local ip-range 192.81.0.1 - 192.81.0.255 port-range 0 - 65535 protocol 0
sudo ./vppctl -s ${sockfile_responder} create ipip tunnel src 192.161.0.1 dst 192.162.0.1
The last command creates the ipip0 tunnel. Continue once it has been created.
sudo ./vppctl -s ${sockfile_responder} ikev2 profile set pr1 tunnel ipip0
sudo ./vppctl -s ${sockfile_responder} ip route add 192.82.0.1/16 via 192.162.0.1 ipip0
sudo ./vppctl -s ${sockfile_responder} set interface unnumbered ipip0 use memif1/1
sudo ./vppctl -s ${sockfile_responder} ip route add 192.162.0.0/16 via 192.162.0.1 memif1/1
sudo ./vppctl -s ${sockfile_responder} set ip neighbor memif1/1 192.162.0.1 02:fe:a4:26:ca:f2
Initiator
Run the initiator VPP instance and create the DPDK memif interface:
export ike_ini_runtime_dir="/run/vpp/ike_ini"
export sockfile_initiator="${ike_ini_runtime_dir}/cli_initiator.sock"
cd $NW_DS_WORKSPACE/dataplane-stack/components/vpp/build-root/install-vpp-native/vpp/bin
sudo ./vpp unix { runtime-dir ${ike_ini_runtime_dir} cli-listen ${sockfile_initiator}} cpu { main-core 3 corelist-workers 4 } dpdk { no-pci dev default {num-tx-queues 1 num-rx-queues 1 } vdev net_memif0,role=client,id=1,socket-abstract=no,socket=${memif_socket_ike},mac=02:fe:a4:26:ca:f2 }
Configure the DPDK memif interface:
sudo ./vppctl -s ${sockfile_initiator} set interface state Ethernet0 up
sudo ./vppctl -s ${sockfile_initiator} set interface ip address Ethernet0 192.162.0.1/16
sudo ./vppctl -s ${sockfile_initiator} set ip neighbor Ethernet0 10.11.0.1 02:fe:a4:26:ca:ac
Configure the initiator for IKEv2:
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile add pr1
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 auth shared-key-mic string Vpp123
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 id local ip4-addr 192.162.0.1
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 id remote ip4-addr 192.161.0.1
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 traffic-selector local ip-range 192.82.0.1 - 192.82.0.255 port-range 0 - 65535 protocol 0
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 traffic-selector remote ip-range 192.81.0.1 - 192.81.0.255 port-range 0 - 65535 protocol 0
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 responder Ethernet0 192.161.0.1/16
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 ike-crypto-alg aes-gcm-16 256 ike-dh modp-2048
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 esp-crypto-alg aes-gcm-16 256
sudo ./vppctl -s ${sockfile_initiator} create ipip tunnel src 192.162.0.1 dst 192.161.0.1
The last command creates the ipip0 tunnel. Continue once it has been created.
sudo ./vppctl -s ${sockfile_initiator} ikev2 profile set pr1 tunnel ipip0
sudo ./vppctl -s ${sockfile_initiator} ip route add 192.82.0.1/16 via 192.161.0.1 ipip0
sudo ./vppctl -s ${sockfile_initiator} set interface unnumbered ipip0 use Ethernet0
sudo ./vppctl -s ${sockfile_initiator} ip route add 192.161.0.0/16 via 192.161.0.1 Ethernet0
sudo ./vppctl -s ${sockfile_initiator} set ip neighbor Ethernet0 192.161.0.1 02:fe:a4:26:ca:f2
Initiate the IKEv2 connection:
sudo ./vppctl -s ${sockfile_initiator} ikev2 initiate sa-init pr1
Verify the IPSec Connection
On the responder side:
sudo ./vppctl -s ${sockfile_responder} show ipsec all
[0] sa 2164260864 (0x81000000) spi 2071676411 (0x7b7b45fb) protocol:esp flags:[esn anti-replay aead ctr ]
[1] sa 3238002688 (0xc1000000) spi 2821943926 (0xa8337276) protocol:esp flags:[esn anti-replay inbound aead ctr ]
SPD Bindings:
ipip0 flags:[none]
output-sa:
[0] sa 2164260864 (0x81000000) spi 2071676411 (0x7b7b45fb) protocol:esp flags:[esn anti-replay aead ctr ]
input-sa:
[1] sa 3238002688 (0xc1000000) spi 2821943926 (0xa8337276) protocol:esp flags:[esn anti-replay inbound aead ctr ]
IPSec async mode: off
sudo ./vppctl -s ${sockfile_responder} show ikev2 profile
profile pr1
auth-method shared-key-mic auth data Vpp123
local id-type ip4-addr data 192.161.0.1
remote id-type ip4-addr data 192.162.0.1
local traffic-selector addr 192.81.0.1 - 192.81.0.255 port 0 - 65535 protocol 0
remote traffic-selector addr 192.82.0.1 - 192.82.0.255 port 0 - 65535 protocol 0
protected tunnel ipip0
lifetime 0 jitter 0 handover 0 maxdata 0
On the initiator side:
sudo ./vppctl -s ${sockfile_initiator} show ipsec all
[0] sa 2164260864 (0x81000000) spi 2821943926 (0xa8337276) protocol:esp flags:[esn anti-replay aead ctr ]
[1] sa 3238002688 (0xc1000000) spi 2071676411 (0x7b7b45fb) protocol:esp flags:[esn anti-replay inbound aead ctr ]
SPD Bindings:
ipip0 flags:[none]
output-sa:
[0] sa 2164260864 (0x81000000) spi 2821943926 (0xa8337276) protocol:esp flags:[esn anti-replay aead ctr ]
input-sa:
[1] sa 3238002688 (0xc1000000) spi 2071676411 (0x7b7b45fb) protocol:esp flags:[esn anti-replay inbound aead ctr ]
IPSec async mode: off
sudo ./vppctl -s ${sockfile_initiator} show ikev2 profile
profile pr1
auth-method shared-key-mic auth data Vpp123
local id-type ip4-addr data 192.162.0.1
remote id-type ip4-addr data 192.161.0.1
local traffic-selector addr 192.82.0.1 - 192.82.0.255 port 0 - 65535 protocol 0
remote traffic-selector addr 192.81.0.1 - 192.81.0.255 port 0 - 65535 protocol 0
protected tunnel ipip0
responder Ethernet0 192.161.0.1
ike-crypto-alg aes-gcm-16 256 ike-integ-alg none ike-dh modp-2048
esp-crypto-alg aes-gcm-16 256 esp-integ-alg none
lifetime 0 jitter 0 handover 0 maxdata 0
For more detailed usage of VPP IKEv2 commands used above, refer to the following link: