Docker image for running the SPAdes assembly pipeline on fastq data in a mounted volume.
docker pull olcbio/vanillaspades
The container requires a volume mounted from the host machine that contains (or will contain) the data sets to be run.
The volume can have any name on the host machine, but inside the container it must be mounted at
/mnt/zvolume1/WGS_Spades until further notice.
You can add data folders to this volume at any time, both before and while the container has it mounted. Each data folder inside the volume should contain fastq files plus the three supporting files described in the SPAdes pipeline README, available at https://github.com/adamkoziol/SPAdesPipeline/blob/master/README.md
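As an illustration, a host-side volume might be laid out as sketched below. The folder name, file names, and /tmp path here are hypothetical examples, not names the pipeline requires; consult the pipeline README linked above for the exact supporting files each data folder needs.

```shell
# Hypothetical example layout for the host-side data volume.
# "datavolume", "run1", and the fastq names are placeholders only.
mkdir -p /tmp/datavolume/run1
touch /tmp/datavolume/run1/sample_R1.fastq.gz
touch /tmp/datavolume/run1/sample_R2.fastq.gz

# Inspect the layout that will be mounted into the container.
find /tmp/datavolume -type f
```

A volume prepared like this would then be mounted with -v /tmp/datavolume:/mnt/zvolume1/WGS_Spades as shown in the docker run command below.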
To create the container with the volume mounted, use the -v flag:
docker run -it -v /path/to/datavolume:/mnt/zvolume1/WGS_Spades olcbio/vanillaspades
This puts you in a bash shell inside the container. From that shell, run a data folder through the pipeline with:
SPAdesPipeline.py -p /mnt/zvolume1/WGS_Spades/datafolder