# Manage QoS
Kube-OVN supports two types of Pod-level QoS:

- Maximum bandwidth limit QoS.
- `linux-netem` QoS, which simulates latency and packet loss and can be used for simulation testing.

Currently, only Pod-level QoS is supported; QoS restrictions at the Namespace or Subnet level are not supported.
## Maximum Bandwidth Limit QoS
This type of QoS can be configured dynamically via Pod annotations and adjusted without restarting the running Pod: `ovn.kubernetes.io/ingress_rate` limits traffic entering the Pod, and `ovn.kubernetes.io/egress_rate` limits traffic leaving it. The bandwidth limit unit is Mbit/s.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos
  namespace: ls1
  annotations:
    ovn.kubernetes.io/ingress_rate: "3"
    ovn.kubernetes.io/egress_rate: "1"
spec:
  containers:
  - name: qos
    image: docker.io/library/nginx:alpine
```
Use annotations to adjust QoS dynamically:

```bash
kubectl annotate --overwrite pod nginx-74d5899f46-d7qkn ovn.kubernetes.io/ingress_rate=3
```
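A limit can also be removed by deleting the annotation with kubectl's trailing-dash syntax, which should lift the corresponding restriction (a sketch reusing the example Pod above):

```bash
# remove the ingress rate annotation from the Pod
kubectl annotate pod nginx-74d5899f46-d7qkn ovn.kubernetes.io/ingress_rate-
```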
## Test QoS
Deploy the containers needed for performance testing:
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: perf
  namespace: ls1
  labels:
    app: perf
spec:
  selector:
    matchLabels:
      app: perf
  template:
    metadata:
      labels:
        app: perf
    spec:
      containers:
      - name: nginx
        image: docker.io/kubeovn/perf
```
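The Pod names and IP addresses in the steps below come from this DaemonSet; you can look up your own with:

```bash
# list the perf Pods with their IPs and nodes
kubectl get pod -n ls1 -o wide
```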
Exec into one Pod and run the iperf3 server:

```bash
# kubectl exec -it perf-4n4gt -n ls1 sh
# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
```
Exec into another Pod and run the iperf3 client to connect to the server address above:

```bash
# kubectl exec -it perf-d4mqc -n ls1 sh
# iperf3 -c 10.66.0.12
Connecting to host 10.66.0.12, port 5201
[  4] local 10.66.0.14 port 51544 connected to 10.66.0.12 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  86.4 MBytes   725 Mbits/sec    3    350 KBytes
[  4]   1.00-2.00   sec  89.9 MBytes   754 Mbits/sec  118    473 KBytes
[  4]   2.00-3.00   sec   101 MBytes   848 Mbits/sec  184    586 KBytes
[  4]   3.00-4.00   sec   104 MBytes   875 Mbits/sec  217    671 KBytes
[  4]   4.00-5.00   sec   111 MBytes   935 Mbits/sec  175    772 KBytes
[  4]   5.00-6.00   sec   100 MBytes   840 Mbits/sec  658    598 KBytes
[  4]   6.00-7.00   sec   106 MBytes   890 Mbits/sec  742    668 KBytes
[  4]   7.00-8.00   sec   102 MBytes   857 Mbits/sec  764    724 KBytes
[  4]   8.00-9.00   sec  97.4 MBytes   817 Mbits/sec 1175    764 KBytes
[  4]   9.00-10.00  sec   111 MBytes   934 Mbits/sec 1083    838 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1010 MBytes   848 Mbits/sec  5119  sender
[  4]   0.00-10.00  sec  1008 MBytes   846 Mbits/sec        receiver

iperf Done.
```
Modify the ingress bandwidth QoS for the first Pod:
```bash
kubectl annotate --overwrite pod perf-4n4gt -n ls1 ovn.kubernetes.io/ingress_rate=30
```
Test the Pod bandwidth again from the second Pod:
```bash
# iperf3 -c 10.66.0.12
Connecting to host 10.66.0.12, port 5201
[  4] local 10.66.0.14 port 52372 connected to 10.66.0.12 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  3.66 MBytes  30.7 Mbits/sec    2   76.1 KBytes
[  4]   1.00-2.00   sec  3.43 MBytes  28.8 Mbits/sec    0    104 KBytes
[  4]   2.00-3.00   sec  3.50 MBytes  29.4 Mbits/sec    0    126 KBytes
[  4]   3.00-4.00   sec  3.50 MBytes  29.3 Mbits/sec    0    144 KBytes
[  4]   4.00-5.00   sec  3.43 MBytes  28.8 Mbits/sec    0    160 KBytes
[  4]   5.00-6.00   sec  3.43 MBytes  28.8 Mbits/sec    0    175 KBytes
[  4]   6.00-7.00   sec  3.50 MBytes  29.3 Mbits/sec    0    212 KBytes
[  4]   7.00-8.00   sec  3.68 MBytes  30.9 Mbits/sec    0    294 KBytes
[  4]   8.00-9.00   sec  3.74 MBytes  31.4 Mbits/sec    0    398 KBytes
[  4]   9.00-10.00  sec  3.80 MBytes  31.9 Mbits/sec    0    526 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  35.7 MBytes  29.9 Mbits/sec    2   sender
[  4]   0.00-10.00  sec  34.5 MBytes  29.0 Mbits/sec        receiver

iperf Done.
```
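The runs above exercise the server Pod's ingress direction. To check a limit set via `ovn.kubernetes.io/egress_rate` on the server Pod, one option is iperf3's reverse mode, which makes the server send data to the client; that the server Pod's egress limit applies in this direction is an assumption, not part of the original test:

```bash
# reverse mode: the server at 10.66.0.12 sends, so its egress is measured
# iperf3 -c 10.66.0.12 -R
```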
## linux-netem QoS
Pods can use the following annotations to configure `linux-netem` QoS: `ovn.kubernetes.io/latency`, `ovn.kubernetes.io/limit` and `ovn.kubernetes.io/loss`. A combined example is shown after this list.

- `ovn.kubernetes.io/latency`: sets the Pod traffic delay; takes an integer value in ms.
- `ovn.kubernetes.io/limit`: sets the maximum number of packets the `qdisc` queue can hold; takes an integer value, such as 1000.
- `ovn.kubernetes.io/loss`: sets the packet loss probability; takes a float value, for example, a value of 20 sets a 20% packet loss probability.
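As an illustration, a minimal Pod sketch combining the three annotations (the values 50, 1000 and 20 are made up for this example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netem
  namespace: ls1
  annotations:
    ovn.kubernetes.io/latency: "50"   # 50 ms of added delay (illustrative value)
    ovn.kubernetes.io/limit: "1000"   # qdisc queue holds up to 1000 packets
    ovn.kubernetes.io/loss: "20"      # 20% packet loss probability
spec:
  containers:
  - name: netem
    image: docker.io/library/nginx:alpine
```

The latency and loss settings can then be observed with a simple `ping` from another Pod in the same subnet.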