Supported tags and respective Dockerfile links
Where to get help:
the Solr Community
Where to file issues:
the Solr Community
Supported Docker versions:
the latest release (down to 1.6 on a best-effort basis)
What is Solr?
Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.
How to use this Docker image
Run Solr and index example data
To run a single Solr server:
$ docker run --name my_solr -d -p 8983:8983 -t s390x/solr
Then with a web browser go to
http://localhost:8983/ to see the Admin Console (adjust the hostname for your docker host).
To use Solr, you need to create a "core", an index for your data. For example:
$ docker exec -it --user=solr my_solr bin/solr create_core -c gettingstarted
In the web UI if you click on "Core Admin" you should now see the "gettingstarted" core.
If you want to load some of the example data that is included in the container:
$ docker exec -it --user=solr my_solr bin/post -c gettingstarted example/exampledocs/manufacturers.xml
In the UI, find the "Core selector" popup menu and select the "gettingstarted" core, then select the "Query" menu item. This gives you a default search for
*:* which returns all docs. Hit the "Execute Query" button, and you should see a few docs with data. Congratulations!
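The query the UI executes corresponds to a plain HTTP request against the core's select handler. A minimal sketch of building such a request URL (the build_query helper is illustrative, not part of the image; q and rows are standard Solr query parameters):

```shell
#!/bin/bash
# build_query is an illustrative helper, not something shipped in the image.
# It assembles a Solr select URL from a core URL, query string, and row limit.
build_query() {
  local core_url=$1 q=$2 rows=$3
  printf '%s/select?q=%s&rows=%s' "$core_url" "$q" "$rows"
}

# The *:* "match all documents" query from the UI, limited to 10 rows:
build_query 'http://localhost:8983/solr/gettingstarted' '*:*' 10
# To actually run it against the container started above:
# curl "$(build_query 'http://localhost:8983/solr/gettingstarted' '*:*' 10)&wt=json"
```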
For convenience, there is a single command that starts Solr, creates a collection called "demo", and loads sample data into it:
$ docker run --name solr_demo -d -P s390x/solr solr-demo
Loading your own data
If you want to load your own data, you'll have to make it available to the container, for example by copying it into the container:
$ docker cp $HOME/mydata/mydata.xml my_solr:/opt/solr/mydata.xml
$ docker exec -it --user=solr my_solr bin/post -c gettingstarted mydata.xml
or by using Docker host volumes:
$ docker run --name my_solr -d -p 8983:8983 -t -v $HOME/mydata:/opt/solr/mydata s390x/solr
$ docker exec -it --user=solr my_solr bin/solr create_core -c gettingstarted
$ docker exec -it --user=solr my_solr bin/post -c gettingstarted mydata/mydata.xml
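bin/post accepts, among other formats, Solr's XML update format. A minimal document in that format might look like this (the field names here are illustrative; they must match, or be guessable from, your core's schema):

```xml
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="name">My first document</field>
  </doc>
</add>
```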
To learn more about Solr, see the Apache Solr Reference Guide.
In addition to the
docker exec method explained above, you can create a core automatically at start time, in several ways.
If you run:
$ docker run -d -P s390x/solr solr-create -c mycore
the container will:
- run Solr in the background, on the loopback interface
- wait for it to start
- run the "solr create" command with the arguments you passed
- stop the background Solr
- start Solr in the foreground
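The sequence above can be sketched in shell. This is an illustrative outline only, not the actual docker-solr entrypoint script; the function names are made up for the sketch:

```shell
#!/bin/bash
# Illustrative outline of the solr-create flow (not the real entrypoint).
set -e

start_background() { echo "starting Solr in the background on loopback"; }
wait_for_solr()    { echo "waiting for Solr to come up"; }
create_core()      { echo "running: solr create $*"; }
stop_background()  { echo "stopping background Solr"; }
start_foreground() { echo "starting Solr in the foreground"; }

start_background
wait_for_solr
create_core -c mycore
stop_background
start_foreground
```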
You can combine this with mounted volumes to pass in core configuration from your host:
$ docker run -d -P -v $PWD/myconfig:/myconfig s390x/solr solr-create -c mycore -d /myconfig
When using the
solr-create command, Solr logs to the standard docker log (inspect with
docker logs), while the collection creation happens in the background and logs to a separate file inside the container.
This first way closely mirrors the manual core creation steps and uses Solr's own tools to create the core, so should be reliable.
The second way of creating a core at start time is using the
solr-precreate command. This will create the core in the filesystem before running Solr. You should pass it the core name, and optionally the directory to copy the config from (this defaults to Solr's built-in "basic_configs"). For example:
$ docker run -d -P s390x/solr solr-precreate mycore
$ docker run -d -P -v $PWD/myconfig:/myconfig s390x/solr solr-precreate mycore /myconfig
This method stores the core in an intermediate subdirectory called "mycores". This allows you to use mounted volumes:
$ mkdir mycores
$ sudo chown 8983:8983 mycores
$ docker run -d -P -v $PWD/mycores:/opt/solr/server/solr/mycores s390x/solr solr-precreate mycore
This second way is quicker, easier to monitor because it logs to the docker log, and can fail immediately if something is wrong. But, because it makes assumptions about Solr's "basic_configs", future upstream changes could break that.
The third way of creating a core at startup is to use the image extension mechanism explained in the next section.
Using Docker Compose
With Docker Compose you can create a Solr container with the index stored in a named data volume. Create a
docker-compose.yml file like this:
version: '2'
services:
  solr:
    image: s390x/solr
    ports:
      - "8983:8983"
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
volumes:
  data:
and just run docker-compose up.
Extending the image
The docker-solr image has an extension mechanism. At run time, before starting Solr, the container will execute scripts in the
/docker-entrypoint-initdb.d/ directory. You can add your own scripts there either by using mounted volumes or by using a custom Dockerfile. These scripts can for example copy a core directory with pre-loaded data for continuous integration testing, or modify the Solr configuration.
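A custom Dockerfile that bakes an init script into the image could be a sketch like this (set-heap.sh here is just an example script name; any executable script placed in that directory is picked up):

```dockerfile
FROM s390x/solr
# Scripts in /docker-entrypoint-initdb.d/ are executed before Solr starts.
COPY set-heap.sh /docker-entrypoint-initdb.d/set-heap.sh
```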
Here is a simple example. With a
set-heap.sh script like:
#!/bin/bash
set -e
cp /opt/solr/bin/solr.in.sh /opt/solr/bin/solr.in.sh.orig
sed -e 's/SOLR_HEAP=".*"/SOLR_HEAP="1024m"/' </opt/solr/bin/solr.in.sh.orig >/opt/solr/bin/solr.in.sh
grep '^SOLR_HEAP=' /opt/solr/bin/solr.in.sh
you can run:
$ docker run --name solr_heap1 -d -P -v $PWD/docs/set-heap.sh:/docker-entrypoint-initdb.d/set-heap.sh s390x/solr
$ sleep 5
$ docker logs solr_heap1 | head
/opt/docker-solr/scripts/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/set-heap.sh
SOLR_HEAP="1024m"
Starting Solr on port 8983 from /opt/solr/server
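You can try the sed substitution from set-heap.sh without a container by running it against a sample file (the temporary file here stands in for /opt/solr/bin/solr.in.sh):

```shell
#!/bin/bash
# Demonstrate the set-heap.sh substitution on a sample solr.in.sh line.
tmp=$(mktemp -d)
printf 'SOLR_HEAP="512m"\n' > "$tmp/solr.in.sh"
sed -e 's/SOLR_HEAP=".*"/SOLR_HEAP="1024m"/' "$tmp/solr.in.sh"
# prints: SOLR_HEAP="1024m"
```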
With this extension mechanism it can be useful to see the shell commands that are being executed by the
docker-entrypoint.sh script in the docker log. To do that, set an environment variable using Docker's -e option, e.g. -e VERBOSE=yes.
You can also run a distributed Solr configuration.
The recommended and most flexible way to do that is to use Docker networking. See the Can I run ZooKeeper and Solr clusters under Docker FAQ, and this example.
You can also use legacy links, see the Can I run ZooKeeper and Solr with Docker Links FAQ.
About this repository
This repository is based on (and replaces)
makuk66/docker-solr, and has been sponsored by Lucidworks.
s390x/solr images come in many flavors, each designed for a specific use case.
This is the de facto image. If you are unsure about what your needs are, you probably want to use this one. It is designed to be used both as a throwaway container (mount your source code and start the container to start your app), as well as the base to build other images off of.
This image is based on the popular Alpine Linux project, available in the
alpine official image. Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use musl libc instead of glibc and friends, so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See this Hacker News comment thread for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
To minimize image size, it's uncommon for additional related tools (such as
bash) to be included in Alpine-based images. Using this image as a base, add the things you need in your own Dockerfile (see the
alpine image description for examples of how to install packages if you are unfamiliar).
This image does not contain the common packages contained in the default tag and only contains the minimal packages needed to run
s390x/solr. Unless you are working in an environment where only the
s390x/solr image will be deployed and you have space constraints, we highly recommend using the default image of this repository.
Solr is licensed under the Apache License, Version 2.0.
This repository is also licensed under the Apache License, Version 2.0.
Copyright 2015 Martijn Koster
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
Some additional license information which was able to be auto-detected might be found in the repo-info repository.
As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.