High-available Lustre filesystem concept with DRBD for Kubernetes.

The project consists of a few simple Docker images with shell scripts; each one performs its own specific task.

Since Lustre, ZFS, and DRBD work at the kernel level, which does not quite fit Docker's ideology, almost all actions are executed directly on the host machine.
Docker and Kubernetes are used here only as an orchestration system and HA-management framework.

What does each image do?

Image                     Role
kube-lustre-configurator  Reads the config, generates templates, and assigns resources to specific Kubernetes nodes
lustre                    Makes a Lustre target, then imports the zpool and mounts the Lustre target
lustre-client             Mounts the Lustre filesystem
lustre-install            Installs Lustre and ZFS packages and DKMS modules
drbd                      Makes and runs a DRBD resource
drbd-install              Installs DRBD packages and DKMS modules


Requirements

  • Kubernetes: version >=1.9.1
  • Servers: CentOS 7 with the latest updates
  • Clients: CentOS 7 with the latest updates (or an installed Lustre kernel module)
  • SELinux: disabled
  • Hostnames: each node should be reachable from every other node by a single hostname
  • Fixed IPs: each node should have a fixed IP address
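The Kubernetes version requirement above can be checked with a small shell helper. This is just a sketch of my own, not part of the project's scripts, and it assumes GNU `sort` with `-V` (version sort) is available:

```shell
# Hypothetical helper: succeeds if $1 >= $2 in version ordering (requires GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare a cluster's version against the >=1.9.1 requirement:
version_ge "1.10.0" "1.9.1" && echo "ok"   # prints "ok"

# SELinux must also be disabled on every node; on CentOS 7:
#   getenforce   # should print "Disabled"
```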

Note that all packages will be installed directly on your nodes.


Limitations

  • Only the ZFS backend is supported.
  • The ldev.conf file is unmanaged.
  • This is just a concept; please don't use it in production!

Quick Start

  • Create a namespace and clusterrolebinding:

    kubectl create namespace lustre
    kubectl create clusterrolebinding --user system:serviceaccount:lustre:default lustre-cluster-admin --clusterrole cluster-admin
  • Download and edit the config:

    curl -O
    vim kube-lustre-config.yaml
  • In configuration.json you can specify settings that will be identical for each of your daemons.

    • The mountpoint option is required only for clients.
    • You can remove the drbd section; in this case the server will be created without an HA pair.
    • If you have more than one DRBD target per physical server, specify a different device and port.
    • Additionally, you can add the protocol and syncer_rate options there.
  • In daemons.json you can specify four types of daemons, for example:

    • mgs - Management server
    • mdt3 - Metadata target (index:3)
    • ost4 - Object storage target (index:4)
    • mdt0-mgs - Metadata target (index:0) with management server

    Only one management server can be specified.

  • Apply your config:

    kubectl apply -f kube-lustre-config.yaml
  • Create a job that labels nodes and runs daemons according to your configuration:

    kubectl create -f
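Assembled, the kube-lustre-config.yaml referenced above might look roughly like the sketch below. This is an illustrative assumption only: the ConfigMap name, node names, and all values are hypothetical, only the option names (mountpoint, drbd, device, port, protocol, syncer_rate) and daemon keys come from the description above, and the actual schema may differ.

```yaml
# Illustrative sketch only — check the project's own example config for the real schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lustre-config     # hypothetical name
  namespace: lustre
data:
  configuration.json: |
    {
      "mountpoint": "/mnt/lustre",
      "drbd": {
        "device": "/dev/drbd0",
        "port": 7788,
        "protocol": "C",
        "syncer_rate": "100M"
      }
    }
  daemons.json: |
    {
      "mdt0-mgs": "node1",
      "ost1": "node2",
      "ost2": "node3"
    }
```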


After installation you will have one common filesystem mounted at the same mountpoint on each node.

You can use hostPath volumes to pass through directories from the Lustre filesystem to your containers, or install a special hostPath provisioner for Kubernetes to automate the volume allocation process.
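A hostPath volume pointing into the Lustre mountpoint might look like this. The pod name, image, and paths are illustrative assumptions; the doc does not prescribe a specific mountpoint:

```yaml
# Sketch: expose a directory from the Lustre filesystem to a container via hostPath.
apiVersion: v1
kind: Pod
metadata:
  name: lustre-consumer        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: lustre-data
      mountPath: /data
  volumes:
  - name: lustre-data
    hostPath:
      path: /mnt/lustre/my-app   # assumes Lustre is mounted at /mnt/lustre on the node
```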

In the case of an HA installation, if you want to migrate Lustre resources from one node to another, you can use a simple command to achieve this:

kubectl drain <node> --ignore-daemonsets

Don't forget to re-enable the node once it is able to run resources again:

kubectl uncordon <node>

License information

  • Kube-lustre is under the Apache 2.0 license. (See the LICENSE file for details)
  • Lustre filesystem is under the GPL 2.0 license. (See this page for details)
  • DRBD is under the GPL 2.0 license. (See this file for details)