Deploying the load-testing tool Tsung in Kubernetes, with modifications verified by self-testing
Running Tsung in Kubernetes
This project demonstrates one possible way to run Tsung in Kubernetes using a StatefulSet.
About Tsung
[Tsung] is an open-source multi-protocol distributed load testing tool written in [Erlang].
With a proper setup, Tsung can generate millions of virtual users accessing target endpoints.
Typically we run Tsung on bare-metal or virtual machines. To launch Tsung in Kubernetes,
we have to figure out a way to assign stable hostnames to the Tsung pods, because the Tsung
master has to connect to the slaves by hostname.
About StatefulSet
[StatefulSet] is a beta feature added in Kubernetes 1.5. It is a controller that provides
a unique identity to each of its Pods. Together with a headless Service, we can assign a
DNS name to each pod in the StatefulSet.
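As a concrete illustration, each pod in a StatefulSet gets a DNS name of the form `<statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local`. The sketch below just prints the names the slave pods defined later in this post would receive (the `tsung-slave` names and `tsung` namespace match the manifests further down):

```shell
# Print the stable DNS names StatefulSet pods receive, assuming a
# StatefulSet "tsung-slave" backed by a headless Service "tsung-slave"
# in namespace "tsung".
# Pattern: <statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local
statefulset="tsung-slave"
service="tsung-slave"
namespace="tsung"
for ordinal in 0 1; do
  printf '%s-%s.%s.%s.svc.cluster.local\n' \
    "$statefulset" "$ordinal" "$service" "$namespace"
done
# → tsung-slave-0.tsung-slave.tsung.svc.cluster.local
# → tsung-slave-1.tsung-slave.tsung.svc.cluster.local
```

This is exactly the hostname used in the `<client>` element of the Tsung config below.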
Demo
Here is a quick demo showing how to launch a load test using Tsung in Kubernetes.
You can modify tsung-config.yaml to test your own systems.
Create Namespace
```shell
kubectl create namespace tsung
```
Launch test target
We use nginx as a demo target.
target.yaml
```yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: target
  name: target
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: target
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: target
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: target
  template:
    metadata:
      labels:
        app: target
    spec:
      containers:
      - name: nginx
        # image: nginx
        image: 10.151.11.61:5000/com.inspur/nginx:1.17.7
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 4
      # schedulerName: kube-batch
      nodeSelector:
        # node-role.kubernetes.io/node: "true"
        node-role.kubernetes.io/master: "true"
        # perf-test: "true"
```
```shell
kubectl create -f target.yaml --namespace tsung
```
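Before moving on, it is worth confirming the target actually serves traffic. A quick check might look like this (a sketch, assuming kubectl is pointed at the test cluster; `curlimages/curl` is an arbitrary choice of image for the one-off probe pod):

```shell
# Wait for the nginx Deployment to become ready.
kubectl rollout status deployment/nginx -n tsung --timeout=120s

# Hit the "target" Service once from inside the cluster; the probe pod
# is deleted automatically afterwards. Expect an HTTP 200 from nginx.
kubectl run curl-check -n tsung --rm -i --restart=Never \
  --image=curlimages/curl -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://target
```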
Set Tsung config
We will inject the Tsung config into the master pod using a ConfigMap.
Modify the settings if you like; adjust the configuration below to match your actual test scenario.
tsung-config.yaml
```yaml
apiVersion: v1
data:
  config.xml: |
    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd" []>
    <tsung loglevel="warning">
      <clients>
        <client host="tsung-slave-0.tsung-slave.tsung.svc.cluster.local" />
      </clients>
      <servers>
        <server host="target" port="80" type="tcp"/>
      </servers>
      <load>
        <arrivalphase phase="1" duration="1" unit="minute">
          <users arrivalrate="100" unit="second"/>
        </arrivalphase>
      </load>
      <sessions>
        <session name="es_load" weight="1" type="ts_http">
          <for from="1" to="10" incr="1" var="counter">
            <request> <http url="/" method="GET" version="1.1"></http> </request>
          </for>
        </session>
      </sessions>
    </tsung>
kind: ConfigMap
metadata:
  name: tsung-config
```
```shell
kubectl create -f tsung-config.yaml --namespace tsung
```
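If you prefer to keep `config.xml` as a standalone file, the same ConfigMap can be generated instead of hand-written (a sketch; `--dry-run=client` requires kubectl 1.18 or later and only renders the manifest without contacting the cluster):

```shell
# Generate the ConfigMap manifest from a local config.xml instead of
# editing the YAML by hand, then apply it to the tsung namespace.
kubectl create configmap tsung-config \
  --from-file=config.xml \
  --dry-run=client -o yaml > tsung-config.yaml
kubectl apply -f tsung-config.yaml --namespace tsung
```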
Launch Tsung slave
tsung-slave.yaml
```yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: tsung-slave
  name: tsung-slave
spec:
  clusterIP: None
  selector:
    run: tsung-slave
  ports:
  - port: 22
  type: ClusterIP
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: tsung-slave
spec:
  serviceName: "tsung-slave"
  replicas: 2
  template:
    metadata:
      labels:
        run: tsung-slave
    spec:
      containers:
      - name: tsung
        image: ddragosd/tsung-docker:1.6.0
        imagePullPolicy: IfNotPresent
        env:
        - name: SLAVE
          value: "true"
      # schedulerName: kube-batch
      nodeSelector:
        # node-role.kubernetes.io/node: "true"
        # node-role.kubernetes.io/master: "true"
        perf-test: "true"
```
```shell
kubectl create -f tsung-slave.yaml --namespace tsung
```
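Once the slaves are up, their stable DNS names can be checked from inside the cluster (a sketch; it assumes `getent` is available in the tsung image, which is true for most glibc-based images):

```shell
# List the slave pods created by the StatefulSet.
kubectl get pods -n tsung -l run=tsung-slave -o wide

# Resolve one of the headless-Service DNS names from inside a slave pod;
# it should print the pod IP of tsung-slave-0.
kubectl exec -n tsung tsung-slave-0 -- \
  getent hosts tsung-slave-0.tsung-slave.tsung.svc.cluster.local
```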
Launch Tsung master
Tsung master will begin the test as soon as the Pod boots up. When the test ended,
the master process will keep running so that user could access the test report using
Tsung web interface.
```yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: tsung-master
  name: tsung-master
spec:
  # clusterIP: None # modify
  selector:
    run: tsung-master
  ports:
  - port: 8091
    nodePort: 38091 # modify
  sessionAffinity: None
  # type: ClusterIP
  type: NodePort # modify
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: tsung-master
spec:
  serviceName: "tsung-master"
  replicas: 1
  template:
    metadata:
      labels:
        run: tsung-master
    spec:
      containers:
      - name: tsung
        image: ddragosd/tsung-docker:1.6.0
        imagePullPolicy: IfNotPresent
        env:
        - name: ERL_SSH_PORT
          value: "22"
        args:
        - -k
        - -f
        - /tsung/config.xml
        - -F
        - start
        volumeMounts:
        - mountPath: /tsung
          name: config-volume
      volumes:
      - configMap:
          name: tsung-config
        name: config-volume
      # schedulerName: kube-batch
      nodeSelector:
        # node-role.kubernetes.io/node: "true"
        node-role.kubernet.io/master: "true"
        # perf-test: "true"
```
```shell
kubectl create -f tsung-master.yaml --namespace tsung
```
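The master starts the test immediately, so its progress can be followed through the pod logs (a sketch, assuming kubectl is pointed at the test cluster):

```shell
# Wait for the master pod to come up, then follow the Tsung output.
# The test begins as soon as the pod boots.
kubectl rollout status statefulset/tsung-master -n tsung --timeout=120s
kubectl logs -f tsung-master-0 -n tsung
```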
Access Tsung web interface
```shell
kubectl port-forward tsung-master-0 -n tsung 8091:8091
```
If the Service is a NodePort with nodePort: 38091 as above, we can access the web
interface directly at http://master-node-ip:38091 without any port-forwarding.
Cleanup
```shell
kubectl delete namespace tsung
```
For reference, the complete sequence of commands:

```shell
kubectl create namespace tsung
kubectl create -f target.yaml --namespace tsung
kubectl create -f tsung-config.yaml --namespace tsung
kubectl create -f tsung-slave.yaml --namespace tsung
kubectl create -f tsung-master.yaml --namespace tsung
# kubectl port-forward tsung-master-0 -n tsung 8091:8091
```
Note: the server port in tsung-config.yaml must match the port configured in the Service in target.yaml.
Then we can access the web interface at http://localhost:8091 (via port-forward)
or at http://master-node-ip:38091 (via the NodePort).