Mellanox NIC Offload Support
Kube-OVN uses OVS in the final data plane for traffic forwarding. The related flow-table matching, tunnel encapsulation, and other operations are CPU intensive; under heavy traffic they consume significant CPU resources and cause latency to rise and throughput to drop. Mellanox's Accelerated Switching And Packet Processing (ASAP²) technology offloads these OVS operations to the eSwitch inside the hardware NIC. Without any changes to the OVS control plane, it shortens the data path and avoids host CPU usage, dramatically reducing latency and significantly increasing throughput.
Prerequisites
- A Mellanox NIC that supports ASAP², such as CX5/CX6/CX7/BlueField.
- CentOS 8 Stream, or an upstream Linux kernel 5.7 or later.
- Because current NICs cannot offload the `dp_hash` and `hash` actions, the OVN LB feature must be disabled.
- To configure offload mode, the NIC must not be enslaved to a bond (a quick verification sketch follows this list).
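As a quick sanity check of these prerequisites (a minimal sketch; the grep pattern depends on your NIC model):
# Kernel must be 5.7 or later (or CentOS 8 Stream)
uname -r
# The NIC should be an ASAP²-capable Mellanox device
lspci -nn | grep -i mellanox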
Configuring SR-IOV and the Device Plugin
Mellanox NICs support two ways of configuring offload: manually configuring SR-IOV and the Device Plugin, or automatic configuration via sriov-network-operator.
Manually Configuring SR-IOV and the Device Plugin
Configuring SR-IOV
Query the NIC's device IDs; in the example below they are `84:00.0` and `84:00.1`:
# lspci -nn | grep ConnectX-5
84:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017]
84:00.1 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017]
Find the corresponding network interfaces by device ID:
# ls -l /sys/class/net/ | grep 84:00.0
lrwxrwxrwx 1 root root 0 Feb 4 16:16 enp132s0f0np0 -> ../../devices/pci0000:80/0000:80:08.0/0000:84:00.0/net/enp132s0f0np0
# ls -l /sys/class/net/ | grep 84:00.1
lrwxrwxrwx 1 root root 0 Feb 4 16:16 enp132s0f1np1 -> ../../devices/pci0000:80/0000:80:08.0/0000:84:00.1/net/enp132s0f1np1
Check whether the NICs are enslaved to a bond. In this example, enp132s0f0np0 and enp132s0f1np1 are enslaved to bond1:
# ip link show enp132s0f0np0 | grep bond
160: enp132s0f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
# ip link show enp132s0f1np1 | grep bond
169: enp132s0f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
Remove the bond and any existing VFs:
ifenslave -d bond1 enp132s0f0np0
ifenslave -d bond1 enp132s0f1np1
echo 0 > /sys/class/net/enp132s0f0np0/device/sriov_numvfs
echo 0 > /sys/class/net/enp132s0f1np1/device/sriov_numvfs
ip link set enp132s0f0np0 down
ip link set enp132s0f1np1 down
Configure the rule insertion mode:
The OVS kernel datapath supports two modes for inserting rules into hardware: SMFS and DMFS.
- SMFS (software-managed flow steering): the default mode, in which rules are inserted into hardware directly by software (the driver). This mode is optimized for rule insertion.
- DMFS (device-managed flow steering): rules are inserted via firmware commands. This mode is optimized for throughput when the system holds a small number of rules.
On kernels that support it, the mode can be configured via sysfs or the devlink API:
# Configure via sysfs
echo <smfs|dmfs> > /sys/class/net/enp132s0f0np0/compat/devlink/steering_mode
echo <smfs|dmfs> > /sys/class/net/enp132s0f1np1/compat/devlink/steering_mode
# Configure via devlink
devlink dev param set pci/0000:84:00.0 name flow_steering_mode value smfs cmode runtime
devlink dev param set pci/0000:84:00.1 name flow_steering_mode value smfs cmode runtime
Note: if you are not sure which mode to choose, keep the default mode; no configuration is needed.
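To verify which steering mode is currently in effect, the value can be read back through either interface (a sketch; the compat sysfs path and the devlink parameter are only exposed on kernels and drivers that support them):
# Read back via sysfs
cat /sys/class/net/enp132s0f0np0/compat/devlink/steering_mode
# Or via devlink
devlink dev param show pci/0000:84:00.0 name flow_steering_mode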
Check the number of available VFs:
# cat /sys/class/net/enp132s0f0np0/device/sriov_totalvfs
127
# cat /sys/class/net/enp132s0f1np1/device/sriov_totalvfs
127
Create the VFs; the total must not exceed the number queried above:
# echo '4' > /sys/class/net/enp132s0f0np0/device/sriov_numvfs
# echo '4' > /sys/class/net/enp132s0f1np1/device/sriov_numvfs
# ip link show enp132s0f0np0
160: enp132s0f0np0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether 08:c0:eb:74:c3:4a brd ff:ff:ff:ff:ff:ff
vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
# ip link show enp132s0f1np1
169: enp132s0f1np1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether 08:c0:eb:74:c3:4b brd ff:ff:ff:ff:ff:ff
vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
# ip link set enp132s0f0np0 up
# ip link set enp132s0f1np1 up
Find the device IDs of the VFs created above:
# lspci -nn | grep ConnectX-5 | grep Virtual
84:00.2 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:00.3 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:00.4 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:00.5 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:00.6 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:00.7 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:01.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
84:01.1 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
Unbind the VFs from the driver:
echo 0000:84:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:00.6 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:00.7 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:01.0 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:84:01.1 > /sys/bus/pci/drivers/mlx5_core/unbind
Enable eSwitch mode and turn on hardware offload:
devlink dev eswitch set pci/0000:84:00.0 mode switchdev
devlink dev eswitch set pci/0000:84:00.1 mode switchdev
ethtool -K enp132s0f0np0 hw-tc-offload on
ethtool -K enp132s0f1np1 hw-tc-offload on
Configure SR-IOV VF link aggregation:
SR-IOV VF LAG allows the NIC's PFs to pick up rules that OVS attempts to offload to the bonded network device and offload them to the hardware e-switch. The supported bond modes are:
- Active-backup
- XOR
- LACP
SR-IOV VF LAG offloads the LAG functionality entirely to the hardware. Bonding creates a single bonded PF port. With hardware offload, packets from both physical ports can be forwarded to any of the VFs, and traffic from a VF can be forwarded to both ports according to the bond state. This means that in active-backup mode only one PF is active at a time and traffic from any VF goes through that PF, while in XOR or LACP mode, if both PFs are up, traffic from any VF is distributed between the two PFs.
This example uses LACP mode, configured as follows:
modprobe bonding mode=802.3ad
ip link set enp132s0f0np0 master bond1
ip link set enp132s0f1np1 master bond1
ip link set enp132s0f0np0 up
ip link set enp132s0f1np1 up
ip link set bond1 up
Note: if you do not need bonding, skip the steps above.
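To confirm the bond came up in the expected mode with both PFs active (a sketch, assuming the bond is named bond1 as above):
# Bonding Mode should read IEEE 802.3ad Dynamic link aggregation, with both PFs listed as slaves
cat /proc/net/bonding/bond1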
Rebind the driver to complete the VF setup:
echo 0000:84:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:00.5 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:00.6 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:00.7 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:01.0 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:84:01.1 > /sys/bus/pci/drivers/mlx5_core/bind
Some `NetworkManager` behaviors can cause driver problems. If offload issues occur, it is recommended to stop `NetworkManager` and try again:
systemctl stop NetworkManager
systemctl disable NetworkManager
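Alternatively, if stopping NetworkManager entirely is undesirable, a less invasive option (a sketch, assuming the interface names above) is to mark only the offload NICs as unmanaged:
# Tell NetworkManager to leave the PFs alone
nmcli device set enp132s0f0np0 managed no
nmcli device set enp132s0f1np1 managed no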
Configuring the Device Plugin
Because the number of VFs on each machine is limited and every accelerated Pod consumes a VF, we need the SR-IOV Device Plugin to manage these resources so that the scheduler knows how to place Pods based on them.
Create the SR-IOV ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
        "resourceList": [{
            "resourcePrefix": "mellanox.com",
            "resourceName": "cx5_sriov_switchdev",
            "selectors": {
                "vendors": ["15b3"],
                "devices": ["1018"],
                "drivers": ["mlx5_core"]
            }
        }]
    }
The SR-IOV Device Plugin creates device plugin endpoints based on the configuration in its associated ConfigMap, which is named `sriovdp-config`.
- `selectors`: VF selectors
- `vendors`: target device vendor hex codes, as strings
- `devices`: target device hex codes, as strings
- `drivers`: target device driver names, as strings
`selectors` also supports selecting VFs by parameters such as `pciAddresses` and `acpiIndexes`; for more details, see the SR-IOV ConfigMap configuration.
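For example, a hypothetical variant of the `config.json` above that selects VFs by PCI address instead of vendor/device/driver might look like this (the addresses are the first two VFs created earlier; adjust to your environment):
{
    "resourceList": [{
        "resourcePrefix": "mellanox.com",
        "resourceName": "cx5_sriov_switchdev",
        "selectors": {
            "pciAddresses": ["0000:84:00.2", "0000:84:00.3"]
        }
    }]
}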
Deploy it by following the SR-IOV documentation:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/sriov-network-device-plugin/v3.6.2/deployments/sriovdp-daemonset.yaml
Check that the SR-IOV resources have been registered on the Kubernetes Node:
kubectl describe node kube-ovn-01 | grep mellanox
mellanox.com/cx5_sriov_switchdev: 8
mellanox.com/cx5_sriov_switchdev: 8
mellanox.com/cx5_sriov_switchdev 0 0
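The same value can be queried directly with jsonpath (a sketch, assuming the node is named kube-ovn-01):
kubectl get node kube-ovn-01 -o jsonpath="{.status.allocatable['mellanox\.com/cx5_sriov_switchdev']}"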
Configuring SR-IOV and the Device Plugin with sriov-network-operator
Install node-feature-discovery to automatically detect hardware capabilities and system configuration:
kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.11.3
Alternatively, manually label nodes that have offload-capable NICs with the command below:
kubectl label nodes [offloadNicNode] feature.node.kubernetes.io/network-sriov.capable=true
Clone the code repository and install the Operator:
git clone --depth=1 https://github.com/kubeovn/sriov-network-operator.git
kubectl apply -k sriov-network-operator/deploy
Check that the Operator components are working properly:
# kubectl get -n kube-system all | grep sriov
NAME READY STATUS RESTARTS AGE
pod/sriov-network-config-daemon-bf9nt 1/1 Running 0 8s
pod/sriov-network-operator-54d7545f65-296gb 1/1 Running 0 10s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/sriov-network-config-daemon 1 1 1 1 1 beta.kubernetes.io/os=linux,feature.node.kubernetes.io/network-sriov.capable=true 8s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/sriov-network-operator 1/1 1 1 10s
NAME DESIRED CURRENT READY AGE
replicaset.apps/sriov-network-operator-54d7545f65 1 1 1 10s
Check the `SriovNetworkNodeState`. The example below is for node `node1`, which has two Mellanox NICs:
# kubectl get sriovnetworknodestates.sriovnetwork.openshift.io -n kube-system node1 -o yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodeState
spec: ...
status:
  interfaces:
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    pciAddress: "0000:5f:00.0"
    totalvfs: 8
    vendor: "15b3"
    linkSpeed: 25000Mb/s
    linkType: ETH
    mac: 08:c0:eb:f4:85:ab
    name: ens41f0np0
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    pciAddress: "0000:5f:00.1"
    totalvfs: 8
    vendor: "15b3"
    linkSpeed: 25000Mb/s
    linkType: ETH
    mac: 08:c0:eb:f4:85:bb
    name: ens41f1np1
Create a `SriovNetworkNodePolicy` resource and use `nicSelector` to select the NICs to be managed:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy
  namespace: kube-system
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  eSwitchMode: switchdev
  numVfs: 3
  nicSelector:
    pfNames:
    - ens41f0np0
    - ens41f1np1
  resourceName: cx_sriov_switchdev
Check the `status` field of the `SriovNetworkNodeState` again:
# kubectl get sriovnetworknodestates.sriovnetwork.openshift.io -n kube-system node1 -o yaml
...
spec:
  interfaces:
  - eSwitchMode: switchdev
    name: ens41f0np0
    numVfs: 3
    pciAddress: 0000:5f:00.0
    vfGroups:
    - policyName: policy
      vfRange: 0-2
      resourceName: cx_sriov_switchdev
  - eSwitchMode: switchdev
    name: ens41f1np1
    numVfs: 3
    pciAddress: 0000:5f:00.1
    vfGroups:
    - policyName: policy
      vfRange: 0-2
      resourceName: cx_sriov_switchdev
status:
  interfaces:
  - Vfs:
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.2
      vendor: "15b3"
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.3
      vendor: "15b3"
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.4
      vendor: "15b3"
    deviceID: "1017"
    driver: mlx5_core
    linkSpeed: 25000Mb/s
    linkType: ETH
    mac: 08:c0:eb:f4:85:ab
    mtu: 1500
    name: ens41f0np0
    numVfs: 3
    pciAddress: 0000:5f:00.0
    totalvfs: 3
    vendor: "15b3"
  - Vfs:
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.5
      vendor: "15b3"
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.6
      vendor: "15b3"
    - deviceID: 1018
      driver: mlx5_core
      pciAddress: 0000:5f:00.7
      vendor: "15b3"
    deviceID: "1017"
    driver: mlx5_core
    linkSpeed: 25000Mb/s
    linkType: ETH
    mac: 08:c0:eb:f4:85:bb
    mtu: 1500
    name: ens41f1np1
    numVfs: 3
    pciAddress: 0000:5f:00.1
    totalvfs: 3
    vendor: "15b3"
Check the VF status:
# lspci -nn | grep ConnectX
5f:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017]
5f:00.1 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017]
5f:00.2 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
5f:00.3 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
5f:00.4 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
5f:00.5 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
5f:00.6 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
5f:00.7 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] [15b3:1018]
Check the PF working mode:
# cat /sys/class/net/ens41f0np0/compat/devlink/mode
switchdev
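The eSwitch mode can also be verified with devlink (a sketch; the PCI address is the PF shown above):
# Should report mode switchdev
devlink dev eswitch show pci/0000:5f:00.0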
Installing Multus-CNI
The device IDs that the SR-IOV Device Plugin allocates during scheduling need to be passed to Kube-OVN via Multus-CNI, so Multus-CNI must be configured to complete this task.
Deploy it by following the Multus-CNI documentation:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/v4.0.2/deployments/multus-daemonset-thick.yml
Note: Multus provides Thin and Thick versions of the plugin; the Thick version is required for SR-IOV support.
Create a `NetworkAttachmentDefinition`:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov
namespace: default
annotations:
k8s.v1.cni.cncf.io/resourceName: mellanox.com/cx5_sriov_switchdev
spec:
config: '{
"cniVersion": "0.3.1",
"name": "kube-ovn",
"plugins":[
{
"type":"kube-ovn",
"server_socket":"/run/openvswitch/kube-ovn-daemon.sock",
"provider": "sriov.default.ovn"
},
{
"type":"portmap",
"capabilities":{
"portMappings":true
}
}
]
}'
- `provider`: formatted as {name}.{namespace}.ovn, where name and namespace are those of this `NetworkAttachmentDefinition`.
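To confirm the definition was accepted (a sketch):
kubectl get network-attachment-definitions.k8s.cni.cncf.io -n default sriov -o yaml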
Overlay Offload
Enabling Offload Mode in Kube-OVN
Download the installation script:
wget https://raw.githubusercontent.com/kubeovn/kube-ovn/release-1.12/dist/images/install.sh
Modify the relevant parameters. `IFACE` must be the name of a physical NIC, and that NIC must have a routable IP:
ENABLE_MIRROR=${ENABLE_MIRROR:-false}
HW_OFFLOAD=${HW_OFFLOAD:-true}
ENABLE_LB=${ENABLE_LB:-false}
IFACE="bond1"
# Using the NICs from "Manually Configuring SR-IOV and the Device Plugin" as an example: if they are bonded, set IFACE to bond1; if not, set IFACE to enp132s0f0np0 or enp132s0f1np1
Install Kube-OVN:
bash install.sh
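After installation, you can confirm that OVS came up with hardware offload enabled (a sketch; `ovs-ovn` is the OVS DaemonSet Kube-OVN deploys in kube-system, and `kubectl exec ds/...` picks an arbitrary pod, so you may want to target the pod on a specific node instead):
# Should print "true" on offload-enabled nodes
kubectl -n kube-system exec ds/ovs-ovn -- ovs-vsctl get Open_vSwitch . other_config:hw-offload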
Creating a Pod That Uses a VF NIC
A Pod that uses a VF for network offload acceleration can be created with YAML such as the following:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-overlay
  annotations:
    v1.multus-cni.io/default-network: default/sriov
    sriov.default.ovn.kubernetes.io/logical_switch: ovn-default
spec:
  containers:
  - name: nginx-overlay
    image: docker.io/library/nginx:alpine
    resources:
      requests:
        mellanox.com/cx5_sriov_switchdev: '1'
      limits:
        mellanox.com/cx5_sriov_switchdev: '1'
- `v1.multus-cni.io/default-network`: the {namespace}/{name} of the `NetworkAttachmentDefinition` created in the previous step.
- `sriov.default.ovn.kubernetes.io/logical_switch`: specifies the Subnet the Pod belongs to; if the Pod should use the default subnet, this annotation can be omitted.
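Once the Pod is running, one way to confirm that a VF was actually attached (a sketch; the network-status annotation is populated by Multus) is to inspect the Pod's network status:
# Should list the default/sriov network with a PCI device-info entry
kubectl get pod nginx-overlay -o jsonpath="{.metadata.annotations['k8s\.v1\.cni\.cncf\.io/network-status']}"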
Underlay Offload
Enabling Offload Mode in Kube-OVN
Download the installation script:
wget https://raw.githubusercontent.com/kubeovn/kube-ovn/release-1.12/dist/images/install.sh
Modify the relevant parameters. `IFACE` must be the name of a physical NIC, and that NIC must have a routable IP:
ENABLE_MIRROR=${ENABLE_MIRROR:-false}
HW_OFFLOAD=${HW_OFFLOAD:-true}
ENABLE_LB=${ENABLE_LB:-false}
IFACE=""
# For Underlay offload, IFACE must be set to a NIC other than the PFs. (When IFACE is empty, the NIC used for Kubernetes cluster communication is used by default; make sure that NIC is not a PF.)
Install Kube-OVN:
bash install.sh
Creating a Pod That Uses a VF NIC
The following YAML creates the underlay network resources and a Pod that uses a VF for network offload acceleration:
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: underlay-offload
spec:
  defaultInterface: bond1
---
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan0
spec:
  id: 0
  provider: underlay-offload
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: vlan0
spec:
  protocol: IPv4
  provider: ovn
  cidrBlock: 10.10.204.0/24
  gateway: 10.10.204.254
  vlan: vlan0
  excludeIps:
  - 10.10.204.1..10.10.204.100
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-underlay
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{
      "name": "sriov",
      "namespace": "default",
      "default-route": ["10.10.204.254"]
    }]'
    sriov.default.ovn.kubernetes.io/logical_switch: vlan0
spec:
  containers:
  - name: nginx-underlay
    image: docker.io/library/nginx:alpine
    resources:
      requests:
        mellanox.com/cx5_sriov_switchdev: '1'
      limits:
        mellanox.com/cx5_sriov_switchdev: '1'
- `k8s.v1.cni.cncf.io/networks`: the `name` and `namespace` fields refer to the `NetworkAttachmentDefinition` created in the previous step.
Note: the example above uses Multus to attach the VF as a secondary NIC and makes the VF the Pod's default route. The VF can also be used as the Pod's primary NIC; see the multi-NIC management documentation for more Multus configuration options.
Note that you can still create a Pod that does not use a VF for network offload acceleration, with YAML such as the following:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-underlay-novf
  annotations:
    ovn.kubernetes.io/logical_switch: vlan0
spec:
  containers:
  - name: nginx-underlay-novf
    image: docker.io/library/nginx:alpine
The example above creates a Pod that does not use a VF for network offload acceleration; its flows are still installed into the OVS kernel datapath rather than into the e-switch.
Verifying Offload
To check whether offload succeeded, run the following command in the `ovs-ovn` container on the node where the Pod is running:
# ovs-appctl dpctl/dump-flows -m type=offloaded
ufid:91cc45de-e7e9-4935-8f82-1890430b0f66, skb_priority(0/0),skb_mark(0/0),ct_state(0/0x23),ct_zone(0/0),ct_mark(0/0),ct_label(0/0x1),recirc_id(0),dp_hash(0/0),in_port(5b45c61b307e_h),packet_type(ns=0/0,id=0/0),eth(src=00:00:00:c5:6d:4e,dst=00:00:00:e7:16:ce),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:941539, bytes:62142230, used:0.260s, offloaded:yes, dp:tc, actions:54235e5753b8_h
ufid:e00768d7-e652-4d79-8182-3291d852b791, skb_priority(0/0),skb_mark(0/0),ct_state(0/0x23),ct_zone(0/0),ct_mark(0/0),ct_label(0/0x1),recirc_id(0),dp_hash(0/0),in_port(54235e5753b8_h),packet_type(ns=0/0,id=0/0),eth(src=00:00:00:e7:16:ce,dst=00:00:00:c5:6d:4e),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:82386659, bytes:115944854173, used:0.260s, offloaded:yes, dp:tc, actions:5b45c61b307e_h
Entries containing `offloaded:yes, dp:tc` indicate that offload succeeded.
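The offload state can also be cross-checked from the TC side, since offloaded datapath flows are installed as TC flower rules on the VF representor (a sketch; substitute a representor name taken from the dump above):
# Offloaded rules carry the in_hw flag in the filter dump
tc -s filter show dev 5b45c61b307e_h ingress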