LabelFusion is a pipeline to rapidly generate high-quality RGBD data with pixelwise labels and object poses, developed by the Robot Locomotion Group at MIT CSAIL.
We used this pipeline to generate over 1,000,000 labeled object instances in multi-object scenes, with only a few days of data collection and without using any crowdsourcing platforms for human annotation.
Our goal is to enable researchers and practitioners to generate customized datasets, which, for example, can be used to train any of the available state-of-the-art image segmentation neural network architectures.
You can use this Docker container to get started generating your own dataset for your own objects. For more information please visit the LabelFusion website.
1) Install nvidia-docker

2) Clone the LabelFusion repository:

   git clone https://github.com/RobotLocomotion/LabelFusion.git

3) Run the docker_run.sh script. It calls nvidia-docker to start the LabelFusion Docker container with an interactive bash session. The first time it runs, the LabelFusion image will be downloaded from DockerHub automatically.
The script sets the required environment variables and mounts your local
LabelFusion source directory as a volume inside the Docker container. No additional code needs to be compiled; the LabelFusion image already contains all of the required binary dependencies.
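The script's core behavior can be sketched roughly as follows. This is not the actual contents of docker_run.sh; the image name and the in-container mount target are assumptions for illustration:

```shell
#!/bin/sh
# Rough sketch of the command docker_run.sh assembles (image name and mount
# target are assumptions; consult the actual script for the real version).
IMAGE="robotlocomotion/labelfusion"   # assumed DockerHub image name
SOURCE_DIR="$(pwd)"                   # assumed to be run from the cloned repo root

# Mount the local source tree into the container and start an interactive shell.
CMD="nvidia-docker run -it -v ${SOURCE_DIR}:/root/labelfusion ${IMAGE} /bin/bash"
echo "$CMD"
```

Because the source tree is mounted as a volume rather than copied, edits you make on the host are immediately visible inside the container.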
You can optionally pass a path to a data directory; if given, the data directory is also mounted as a volume inside the container. The paths inside the Docker container will be:
~/labelfusion <-- the mounted LabelFusion directory
~/labelfusion/data <-- the mounted data directory
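The optional data-directory mount could be handled along these lines; the argument handling and image name here are assumptions, not the actual docker_run.sh contents, but the mount targets are the container paths listed above:

```shell
#!/bin/sh
# Sketch of the optional data-directory mount (argument handling and image
# name are assumptions for illustration).
build_run_cmd() {
  data_dir="$1"                                 # optional host data directory
  volume_args="-v $(pwd):/root/labelfusion"     # always mount the source tree
  if [ -n "$data_dir" ]; then
    # Mount the data directory where the tools expect it inside the container.
    volume_args="$volume_args -v ${data_dir}:/root/labelfusion/data"
  fi
  echo "nvidia-docker run -it $volume_args robotlocomotion/labelfusion /bin/bash"
}

# Usage: without and with a data directory
build_run_cmd
build_run_cmd "$HOME/labelfusion-data"
```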
When the Docker container starts, it launches an interactive bash session and automatically sources the file
~/labelfusion/setup_environment.sh inside the image to set up the environment variables required by the LabelFusion tools.
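Concretely, this auto-sourcing is equivalent to having a line like the following in the container's shell startup file (the exact mechanism, e.g. ~/.bashrc, is an assumption; the sourced path is the one stated above):

```shell
# Assumed fragment of the container's ~/.bashrc:
source ~/labelfusion/setup_environment.sh
```

If you ever open a shell in the container some other way (e.g. via docker exec), sourcing this file manually should give you the same environment.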