tkopen/pycharm
PyCharm Docker: CPU Image or GPU-Ready Data Science with TensorFlow and Jupyter Notebook
Docker container to run PyCharm Community Edition
These containers are a quick way to run or try PyCharm, TensorFlow and Jupyter. The source is available in the GitLab project.
In this README we explain how to install Docker Engine and the Docker Compose tool for beginners. If you are an advanced user, you can skip those sections and go directly to Get docker-compose.yml.
Two images are available:
Images built after Mar 2023 are based on Ubuntu 22.04 LTS.
cpu- tags come with only PyCharm pre-installed. Versioned tags contain their version.
gpu- tags come with PyCharm, TensorFlow and the Jupyter Notebook server pre-installed. Versioned tags contain their version.
The latest tag is the latest CPU release (excluding pre-releases such as release candidates, alphas, and betas).
The -devel and -custom-op tags are no longer supported.
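To make the tag scheme concrete, the snippet below picks a tag depending on whether the NVIDIA driver tools are visible on the host. The image name is real, but the selection logic itself is just an illustrative sketch, not part of the project.

```shell
# Illustrative sketch: choose a cpu- or gpu- tag depending on whether
# the NVIDIA driver tools (nvidia-smi) are available on this host.
if command -v nvidia-smi >/dev/null 2>&1; then
  TAG="gpu-tf-jupyter"   # GPU image: PyCharm + TensorFlow + Jupyter
else
  TAG="cpu"              # CPU image: PyCharm only
fi
echo "Selected image: tkopen/pycharm:${TAG}"
# Pull it with: docker pull "tkopen/pycharm:${TAG}"
```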
Go back to contents...
gpu- tags are based on the official TensorFlow container, which is in turn based on NVIDIA CUDA. You need nvidia-docker to run them. NOTE: GPU versions of TensorFlow 1.13 and above (this includes the latest tags) require an NVIDIA driver that supports CUDA 10. See NVIDIA's support matrix. These tags include the Jupyter Notebook server and some TensorFlow tutorial notebooks, and they start a Jupyter Notebook server on boot. NOTE: Mount a volume to /tf/notebooks to work on your notebooks.
You can also start a gpu- tag container that launches PyCharm on boot, and then run the Jupyter Notebook server from a PyCharm terminal. Check the instructions below.
All newer images are Python 3 only (3.8 for CPU Ubuntu 22-based images; 3.10 for GPU Ubuntu 22-based images).
Go back to contents...
We highly recommend you go to the official Install Docker Engine webpage to get the most up-to-date instructions.
The difference between docker.io and docker-ce lies mainly in the source and maintenance of the packages.
docker.io
Source: the docker.io package is provided by the Ubuntu repositories.
Version: docker.io may not be the latest stable release. It tends to lag behind the official Docker releases because it is maintained by the Ubuntu package maintainers, who may take time to test and approve new versions.
Installation:
sudo apt-get install docker.io
docker-ce
Source: the docker-ce (Community Edition) package is provided directly by Docker, Inc.
Version: the docker-ce package is typically the latest stable release, giving you access to the most recent features, enhancements, and security updates.
Installation:
sudo apt-get update
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
docker-ce is updated more frequently, ensuring you get the latest updates and security patches promptly.
Use docker.io if you prefer stability over the latest features and updates. It might be a better choice for production environments where stability is more critical than having the latest features.
Use docker-ce if you want the latest features, improvements, and security updates. This is usually preferred for development environments or for users who need the latest Docker features.
In summary, docker-ce is the preferred choice for most users who need the latest Docker features and updates, while docker.io can be chosen for environments where stability is paramount.
After installing Docker Engine you need to add yourself to the newly created docker group with sudo usermod -aG docker $USER, and then log out and log in again! To log out of your current session, you can use the logout command or press Ctrl+D in your terminal window.
After logging in again, or opening a new terminal, you can check the groups your user belongs to by running the id command.
If the output of the id command doesn't show the docker group, log out and log back in again for the group membership to take effect. After that, you can test the installation by running docker stats.
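The group check described above can be scripted; this is a small sketch (the docker stats test is left as a comment so the snippet also runs on machines where Docker is not installed):

```shell
# Check whether the docker group is active in the current session.
if id -nG | grep -qw docker; then
  echo "docker group active: you can run docker without sudo"
else
  echo "docker group not active: log out and log in again"
fi
# Then test the daemon itself, e.g.: docker stats --no-stream
```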
For more details on how to use Docker Engine, you can refer to the official Docker Engine documentation.
Go back to contents...
To test and run quickly you can use any of the following instructions.
For volume persistence, it's better to use the docker-compose.yml file included in the source GitLab project (you can jump directly to the Docker Compose Tool section).
Run CPU launching PyCharm (no persistence)
docker run -it --rm \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
tkopen/pycharm:cpu pycharm
Run a CPU-only container and launch the PyCharm window on boot.
This image does not have a Jupyter Notebook server installed.
Run GPU launching PyCharm (no persistence)
docker run -it --rm --runtime=nvidia \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-p 8888:8888 \
tkopen/pycharm:gpu-tf-jupyter pycharm
Run a GPU container and launch the PyCharm window on boot.
You will need to start the Jupyter Notebook server yourself, see below.
Starting Jupyter Notebook server (no persistence)
You can start the Jupyter Notebook server from a GPU PyCharm terminal executing:
jupyter notebook \
--notebook-dir=/home/coder \
--ip 0.0.0.0 \
--no-browser \
--allow-root
After that, you can navigate to http://localhost:8888 in your browser.
To preserve your notebooks you have to mount /tf/notebooks, like this:
docker run -it --rm --runtime=nvidia \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/my_notebooks:/tf/notebooks \
-p 8888:8888 \
tkopen/pycharm:gpu-tf-jupyter
This will run a GPU container and start a Jupyter Notebook server instead of launching PyCharm. It mounts your notebook directory (assumed here to be your local ~/my_notebooks). Navigate to http://localhost:8888 in your browser.
Note: Check and follow the instructions given in the terminal because they include the requested token necessary to log in.
To keep your code and PyCharm settings between executions, some directories must be preserved.
For example, to keep your work/code you need to preserve /home/coder/workspace. But there are other directories you need to preserve between executions to keep PyCharm settings and caches. As the volume mounts below show, those directories are:
/home/coder/.cache
/home/coder/.java
/home/coder/.config/JetBrains
/home/coder/.local/share/JetBrains
/home/coder/workspace/.idea
/home/coder/workspace
Preserving these directories allows Docker to start the container exactly as it was when you closed it previously. To make these directories persistent you need to mount them as volumes.
docker run -it --rm \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.pycharm_cache:/home/coder/.cache \
-v $HOME/.pycharm_java:/home/coder/.java \
-v $HOME/.pycharm_config:/home/coder/.config/JetBrains \
-v $HOME/.pycharm_local:/home/coder/.local/share/JetBrains \
-v $HOME/.pycharm_idea:/home/coder/workspace/.idea \
-v $HOME/Documents/MyCode:/home/coder/workspace \
tkopen/pycharm:cpu pycharm
or with docker volumes...
docker volume create pycharm_cache
docker volume create pycharm_java
docker volume create pycharm_config
docker volume create pycharm_local
docker volume create pycharm_idea
docker run -it --rm \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v pycharm_cache:/home/coder/.cache \
-v pycharm_java:/home/coder/.java \
-v pycharm_config:/home/coder/.config/JetBrains \
-v pycharm_local:/home/coder/.local/share/JetBrains \
-v $HOME/Documents/MyCode:/home/coder/workspace \
-v pycharm_idea:/home/coder/workspace/.idea \
tkopen/pycharm:cpu pycharm
To simplify this, we provide a docker-compose.yml file with base configurations in our source GitLab project; please check below.
Go back to contents...
Docker Compose is a tool for running multi-container applications on Docker, defined using the Compose file format. It uses a YAML file to configure your application's services, networks, and volumes, and then you can manage them all with a single command.
Once you have a Compose file, you can create and start your application with a single command: docker compose up pycharm
.
We highly recommend you go to the official Install the Compose Plugin webpage to get the most up-to-date instructions.
You can install the Docker Compose plugin by doing:
sudo apt-get update
sudo apt-get install docker-compose-plugin
Verify the installation with:
docker compose version
For more details on how to use Docker Compose, you can refer to the official Docker Compose documentation.
Now download the Docker Compose configuration file: go to our source GitLab project and download the docker-compose.yml file.
Assuming you have placed the file in your Downloads directory (~/Downloads/docker-compose.yml), you can now launch the PyCharm container simply by doing:
docker compose -f ~/Downloads/docker-compose.yml up pycharm
To create and run the GPU container:
docker compose -f ~/Downloads/docker-compose.yml up pycharm-gpu
Docker Compose will then download the necessary images (depending on the speed of your internet connection it can take a couple of minutes, just get yourself a cup of coffee), create and launch a GPU container, create and manage everything for you (networks, volumes, environments, ports, ...) according to the docker-compose.yml
configurations. You only need to start the Jupyter Notebook server from a PyCharm terminal as explained above.
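For reference, a compose service for the CPU image could look roughly like the sketch below. This is only an illustration of the shape such a file takes, based on the docker run examples above; the volume names here are hypothetical, and the actual docker-compose.yml in the GitLab project is the authoritative version.

```yaml
# Hypothetical sketch of a service definition; the docker-compose.yml
# in the GitLab project is the authoritative version.
services:
  pycharm:
    image: tkopen/pycharm:cpu
    command: pycharm
    environment:
      - DISPLAY=unix${DISPLAY}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - pycharm_cache:/home/coder/.cache
      - pycharm_java:/home/coder/.java
      - pycharm_config:/home/coder/.config/JetBrains
      - pycharm_local:/home/coder/.local/share/JetBrains
      - ${HOME}/Documents/MyCode:/home/coder/workspace

volumes:
  pycharm_cache:
  pycharm_java:
  pycharm_config:
  pycharm_local:
```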
On the first start, PyCharm will ask you to either "Create a new project" or "Open existing project". If you are unsure, it's recommended that you select "Open" and then specify the /home/coder/workspace directory.
See the images below.
Go back to contents...
You can create and run the containers in daemon mode, freeing the current terminal for other commands, by adding -d
like the following example:
docker compose -f ~/Downloads/docker-compose.yml up -d pycharm
In case you receive an unauthorized error message, it usually means you have to log in to Docker Hub.
If you have not created a Docker account before, you can sign up for a personal (free) plan at https://hub.docker.com/ or use the Docker account your organisation has given you (if you have one).
Use the docker login command from the terminal.
docker login
After that, provide your Docker Hub username and password.
If the PyCharm window doesn't pop up, or you receive a [...] Failed to initialize graphics environment [...] error, it means that Docker needs permission to access the X11 socket. The following command should do the trick:
xhost +local:docker
Go back to contents...
For support create a new issue in our source GitLab project.
Go back to contents...
We are open to contributions to improve the container user experience and use in other operating systems.
Go back to contents...
docker pull tkopen/pycharm