Public | Automated Build

Last pushed: 8 days ago
Short Description
Ubuntu Core 14.04 + CUDA + Torch7 (including iTorch).
Full Description


cuda-torch

Ubuntu Core 14.04 + CUDA 7.0 + cuDNN v4 + Torch7 (including iTorch).

Requirements

NVIDIA Docker on the host, which in turn requires recent NVIDIA drivers (see usage below).

Usage

Use NVIDIA Docker: nvidia-docker run -it kaixhin/cuda-torch.
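For example, a minimal sanity check (a sketch; it assumes NVIDIA Docker v1 and that Torch's th launcher is on the container's PATH):

```
# Launch an interactive container with GPU access (NVIDIA Docker v1 syntax).
nvidia-docker run -it kaixhin/cuda-torch

# Inside the container: check that cutorch can see at least one GPU.
th -e "require 'cutorch'; print(cutorch.getDeviceCount())"
```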

For more information on CUDA on Docker, see the repo readme.

To use Jupyter/iTorch, publish the appropriate port. For example, use nvidia-docker run -it -p 8888:8888 kaixhin/cuda-torch. Then run jupyter notebook --ip="0.0.0.0" --no-browser to serve a notebook, reachable from the host at localhost:8888.
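
Concretely (a sketch; host port 8888 is just the conventional choice):

```
# Publish the notebook port, then start Jupyter inside the container.
nvidia-docker run -it -p 8888:8888 kaixhin/cuda-torch
jupyter notebook --ip="0.0.0.0" --no-browser

# On the host, browse to http://localhost:8888 and open a new iTorch notebook.
```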

Citation

If you find this useful in research, please consider citing this work.

Docker Pull Command

docker pull kaixhin/cuda-torch

Owner
kaixhin
Source Repository
https://github.com/Kaixhin/dockerfiles

Comments (5)
kaixhin
9 months ago

@nightseas These images do use NVIDIA's images, but there are a lot of dependencies that cause automated builds to time out, so I have had to split the images.

@bobliu20: Unfortunately this is an issue with cutorch build times. If you are able to look into it yourself then more details can be found at: https://github.com/Kaixhin/dockerfiles/issues/22
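
As a rough illustration of what splitting looks like (a hypothetical sketch: the base tag and the build step are assumptions, not the actual Dockerfiles):

```
# Hypothetical sketch: each stage is a separate automated build whose
# Dockerfile starts FROM the previous stage, so no single Docker Hub
# build exceeds the time limit. Names, tags and steps are assumed.
cat > Dockerfile <<'EOF'
FROM kaixhin/cuda:7.0
RUN git clone https://github.com/torch/distro.git /root/torch --recursive && \
    cd /root/torch && ./install.sh
EOF
docker build -t kaixhin/cuda-torch .
```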

bobliu20
10 months ago

Could you add a tag that supports CUDA 8.0 on Ubuntu 14.04, the same as for cuda-mxnet? Thank you.

nightseas
10 months ago

Hi,
Do you have any plans to create images that support CUDA 8.0 and Ubuntu 16.04?
Would it be better to base these on an NVIDIA CUDA Docker image instead of adding things yourself on top of the Ubuntu base images?
Thanks // BR

kaixhin
2 years ago

Docker uses a union filesystem (currently UnionFS) and stacks the layers of files created by each build step, meaning that there is still a massive overhead from having to include CUDA in the first place, which can't be reduced. Secondly, not deleting the CUDA Toolkit allows it either to be used to build more packages within containers running from this image, e.g. fbcunn, or to let this image serve as the basis for another image with extra packages (I'm working on documenting this use case right now; lots of potential).
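
For instance, a minimal sketch of that second use case (the rnn package is only a hypothetical extra dependency; it assumes luarocks is on the image's PATH):

```
# Hypothetical sketch: derive a new image that adds one extra Torch package
# on top of this one, reusing all of its CUDA/Torch layers.
cat > Dockerfile <<'EOF'
FROM kaixhin/cuda-torch
RUN luarocks install rnn
EOF
docker build -t my-cuda-torch .
```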

hughperkins
2 years ago

Interesting. Good info that we can pass the NVIDIA GPU through to a container, and how to do this. Question: this container seems huugggeee... pulling down 1500MB for me, compared to just 50-100MB or so for the plain ubuntu image. Is this because it includes the whole CUDA toolkit? Since you've presumably already built cutorch and cunn, do we actually need the CUDA toolkit inside this container?
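
(One way to check where the size goes, using only the standard Docker CLI:)

```
# Show per-layer sizes for the image; if the CUDA toolkit is the culprit,
# its layers should account for most of the download.
docker history kaixhin/cuda-torch
```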