adnan80/dbench
Benchmark Kubernetes persistent disk volumes with fio: read/write IOPS, bandwidth (MB/s), and latency.
List all Kubernetes storage classes using the command: kubectl get storageclasses
Note the storage class you want to benchmark and use it as the storageClassName in the YAML below.
Create a PVC and a Job using the following YAML:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dbench-pv-claim
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: dbench
spec:
  template:
    spec:
      containers:
      - name: dbench
        image: logdna/dbench:latest
        imagePullPolicy: Always
        env:
          - name: DBENCH_MOUNTPOINT
            value: /data
          # - name: DBENCH_QUICK
          #   value: "yes"
          # - name: FIO_SIZE
          #   value: 1G
          # - name: FIO_OFFSET_INCREMENT
          #   value: 256M
          # - name: FIO_DIRECT
          #   value: "0"
        volumeMounts:
        - name: dbench-pv
          mountPath: /data
      restartPolicy: Never
      volumes:
      - name: dbench-pv
        persistentVolumeClaim:
          claimName: dbench-pv-claim
  backoffLimit: 4
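The commented-out environment variables in the Job above tune the fio run. For example, for a quicker smoke test you could enable DBENCH_QUICK and cap the test file size; the values below are illustrative, not recommended defaults:

env:
  - name: DBENCH_MOUNTPOINT
    value: /data
  - name: DBENCH_QUICK       # enable the shorter, quicker test mode
    value: "yes"
  - name: FIO_SIZE           # per-test file size used by fio
    value: 1G

Smaller FIO_SIZE values finish faster but are more likely to be served from cache, so treat quick-mode numbers as a sanity check rather than a benchmark result.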
Deploy Dbench using: kubectl apply -f dbench.yaml
Once deployed, the Dbench Job will provision a Persistent Volume via the PVC and run a series of fio benchmarks against it.
Follow benchmarking progress using: kubectl logs -f job/dbench
(Empty output means the Job has not been created yet, or the storageClassName is invalid; see Troubleshooting below.)
At the end of all tests, you'll see a summary that looks similar to this:
==================
= Dbench Summary =
==================
Random Read/Write IOPS: 75.7k/59.7k. BW: 523MiB/s / 500MiB/s
Average Latency (usec) Read/Write: 183.07/76.91
Sequential Read/Write: 536MiB/s / 512MiB/s
Mixed Random Read/Write IOPS: 43.1k/14.4k
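The summary is plain text, so if you want to compare runs across storage classes you can lift the headline numbers into a dict with a small parser. This is a hypothetical helper, not part of dbench; the function and field names are assumptions for illustration:

```python
import re

def parse_count(value: str) -> float:
    """Convert fio-style counts like '75.7k' to a plain number."""
    if value.endswith("k"):
        return float(value[:-1]) * 1000
    return float(value)

def parse_summary(log: str) -> dict:
    """Extract headline metrics from a Dbench summary block (hypothetical helper)."""
    metrics = {}
    m = re.search(r"Random Read/Write IOPS: ([\d.]+k?)/([\d.]+k?)", log)
    if m:
        metrics["rand_read_iops"] = parse_count(m.group(1))
        metrics["rand_write_iops"] = parse_count(m.group(2))
    m = re.search(r"Average Latency \(usec\) Read/Write: ([\d.]+)/([\d.]+)", log)
    if m:
        metrics["read_lat_usec"] = float(m.group(1))
        metrics["write_lat_usec"] = float(m.group(2))
    return metrics

# Sample lines copied from the summary above.
summary = """Random Read/Write IOPS: 75.7k/59.7k. BW: 523MiB/s / 500MiB/s
Average Latency (usec) Read/Write: 183.07/76.91"""
print(parse_summary(summary))
```

Feeding it the captured output of kubectl logs job/dbench would give you numbers ready to tabulate or graph.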
Once the tests are finished, clean up using: kubectl delete -f dbench.yaml
This deprovisions and deletes the persistent disk, minimizing storage billing.
The image can also be pulled directly: docker pull adnan80/dbench