Alpine Ghost

TL;DR

A generic Docker image for Ghost based on Alpine Linux.

$ docker run -it --rm -p 2368:2368 ashald/alpine-docker

Generic?..

This image is designed with two concepts in mind: simple and powerful. It should suit any use case that involves Ghost,
whether or not it is possible with other images. At the same time, its interface should be intuitive
enough that one can start using it without reading any documentation.

What's inside?

Philosophy

Encourage one way, allow many

The fundamental concepts are:

  • keep it secure (e.g., don't run anything as root)
  • keep things in the usual places (where everybody else keeps them)
  • keep config volatile (generate it on start from environment variables)
  • persist only the data you care about (don't just mindlessly mount a volume over the Ghost data dir)
  • try to provide sane defaults (so that next to no configuration is required to kickstart the thing)

Configs often contain sensitive data. The Twelve-Factor App offers a nice idea of injecting
such data via environment variables. Given that the Ghost config is not that big, why not inject it entirely via
environment variables (minus some boilerplate)?
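
In practice (the mechanics are described in detail under Internals below), that ends up looking like:

$ docker run -it --rm -p 2368:2368 \
    -e GHOST_OPTIONS_url='"http://localhost:2368"' \
    ashald/alpine-docker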

A Ghost blog works with persistent data that can be split into a couple of categories: the data needed to make it run
(such as themes) and the data generated by it or its users (the database, uploads and so on). It's a common
practice to persist all of that. Instead, this image proposes keeping only the pieces that are really
important, in other words, only the generated content. Everything that "was there" before the blog started is considered
unimportant, as it's assumed that the blog is set up and provisioned in a reproducible way and can be re-provisioned at any
point in time. It's expected that themes are stored on data volumes or inside data containers (maybe even available on
Docker Hub) and are attached to the Ghost image instance.
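
As a sketch, a theme kept in a named volume could be attached like this (the path under the content dir is an
assumption based on Ghost's standard layout, where themes live in a themes subdirectory; the volume name is arbitrary):

$ docker volume create my-theme
$ docker run -d -p 2368:2368 \
    -v my-theme:/var/lib/ghost/themes/my-theme \
    ashald/alpine-docker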

Internals

The Ghost instance within container is run with NODE_ENV=production by default.

When a container is started "as is", the entrypoint is usually executed under the root account, and this image is no
exception. Since running things as root is not the best thing in terms of security, the Ghost
instance inside the container is never run like that. Instead, the entrypoint takes care of ensuring proper access to
the filesystem and then drops privileges by switching to a regular account using su-exec.
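
Conceptually, the privilege drop boils down to something like this simplified sketch (not the actual entrypoint script;
GHOST_USER, GHOST_GROUP and GHOST_CONTENT are the variables described in this document):

# running as root at this point: fix ownership of the content dir,
# then drop privileges and hand control over to the actual command
chown -R "$GHOST_USER:$GHOST_GROUP" "$GHOST_CONTENT"
exec su-exec "$GHOST_USER:$GHOST_GROUP" "$@"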

Out of the box, the Ghost instance runs under an account called ghost that is a member of a group called ghost. This can
be overridden with the environment variables GHOST_USER and GHOST_GROUP respectively. If the user or group doesn't exist,
it will be created.
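
For instance, to run Ghost under a different account (the names here are arbitrary; the account will be created if it
does not exist):

$ docker run -d -p 2368:2368 \
    -e GHOST_USER=blog \
    -e GHOST_GROUP=blog \
    ashald/alpine-docker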

Given that some dirs are expected to be mounted as volumes, the problem of managing access across the container boundary
arises. In this case it might be a good idea to create a user and a group on the host beforehand and to mount /etc/passwd
together with /etc/group from the host into the container in read-only mode. This gives the added benefit of seeing
proper entries in the process tree with ps and so on. On the command line, this can be done with
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro.

It's possible to run the image with the

-u, --user string                           Username or UID (format: <name|uid>[:<group|gid>])

Docker option, but in this case numeric values must be used (if the user and group do not already exist within the container),
because the mounts (if /etc/passwd and/or /etc/group are mounted from the outside) happen after the user switch.
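
For example, assuming the mounted volume on the host is owned by UID/GID 1000 (the IDs and volume name are purely
illustrative):

$ docker run -d -p 2368:2368 \
    -u 1000:1000 \
    -v ghost-db:/var/lib/ghost/data \
    ashald/alpine-docker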

Unless overridden, everything inside the container runs under tini - "a tiny but valid init for containers" (c).

As mentioned above, it's expected that the Ghost config will be injected via environment variables. One can use
the GHOST_OPTIONS environment variable, which must contain valid JSON serialized as a string. It will be used to populate
the config key equal to NODE_ENV (which means there is always exactly one entry in the config). Alternatively, the config
can be generated from multiple environment variables using a simple naming convention: GHOST_OPTIONS_foo
corresponds to the key foo in the config.

This way the combination of NODE_ENV=production, GHOST_OPTIONS_url='"http://localhost:2368"' and
GHOST_OPTIONS_server='{ "host": "0.0.0.0", "port": "2368" }' produces a config like:

module.exports = {
    production: {
        url: "http://localhost:2368",
        server: { "host": "0.0.0.0", "port": "2368" }
    }
};

The same effect can be achieved by setting:
GHOST_OPTIONS={"url": "http://localhost:2368", "server" { "host": "0.0.0.0", "port": "2368" }}

When GHOST_OPTIONS is set GHOST_OPTIONS_* are ignored.
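
On the command line, the single-variable form from above would be passed like:

$ docker run -it --rm -p 2368:2368 \
    -e GHOST_OPTIONS='{"url": "http://localhost:2368", "server": { "host": "0.0.0.0", "port": "2368" }}' \
    ashald/alpine-docker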

If a static config file is preferred, that behavior can be achieved by setting the GHOST_CONFIG environment variable. If
it's set and the file exists (for instance, mounted from outside), it will be used. Setting GHOST_CONFIG=/var/lib/ghost/config.js
activates the example config provided as part of the Ghost distribution.
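
For example, a config file maintained on the host could be mounted and pointed at like this (both paths are arbitrary):

$ docker run -d -p 2368:2368 \
    -v /srv/ghost/config.js:/etc/ghost/config.js:ro \
    -e GHOST_CONFIG=/etc/ghost/config.js \
    ashald/alpine-docker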

The last part of the bootstrap procedure is the initialization of GHOST_CONTENT, which by default is set to /var/lib/ghost.
If it's empty, it will be populated from the content dir inside $GHOST_SOURCE, which by default points to
/usr/src/ghost.
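
As an illustration of the seeding behavior (keeping in mind that the philosophy above discourages persisting the whole
content dir), an empty volume mounted over GHOST_CONTENT will be populated from $GHOST_SOURCE/content on the first start:

$ docker volume create ghost-content
$ docker run -d -p 2368:2368 \
    -v ghost-content:/var/lib/ghost \
    ashald/alpine-docker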

Usage

Given all of the above, the recommended way to start Ghost with this image is:

$ sudo useradd -r ghost
$ docker volume create ghost-db
$ docker volume create ghost-images
$ docker run -d -p 127.0.0.1:2368:2368 \
    --restart unless-stopped \
    -e GHOST_OPTIONS_url='"http://blog.ashald.net"' \
    -v ghost-db:/var/lib/ghost/data \
    -v ghost-images:/var/lib/ghost/images \
    -v /etc/passwd:/etc/passwd:ro \
    -v /etc/group:/etc/group:ro \
    ashald/alpine-docker

Please note that this command makes Ghost available on port 2368 only on the loopback interface (localhost). It's not the
best idea to expose the Ghost instance directly to the internet. Instead, it's better to put nginx in front and
configure TLS. This can be done relatively easily with the nginx-proxy and docker-letsencrypt-nginx-proxy-companion
images.
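
As a rough sketch (the VIRTUAL_HOST convention comes from the nginx-proxy image, and the domain is just an example;
refer to those images' documentation for a complete TLS setup with the Let's Encrypt companion):

$ docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy
$ docker run -d \
    -e VIRTUAL_HOST=blog.example.com \
    -e GHOST_OPTIONS_url='"http://blog.example.com"' \
    ashald/alpine-docker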

Contributing

Contributions are welcome. Please feel free to create pull requests into this repo or create issues.

License

The contents of this repo are released into the public domain without warranty of any kind. For more details, please
see LICENSE inside this repo.
