# Apache Hadoop 2.7.0 Docker image
Note: this is the master branch - for a particular Hadoop version always check the related branch
Following the success of our previous Hadoop Docker images, the feedback and feature requests we received aligned with the Hadoop release cycle, so we have released an Apache Hadoop 2.7.0 Docker image. As with the previous versions, it is available as a trusted, automated build on the official Docker registry.
FYI: All the former Hadoop releases (2.3, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.5.2, 2.6.0) are available in the GitHub branches or our Docker Registry - check the tags.
## Build the image
If you'd like to try directly from the Dockerfile you can build the image as:
docker build -t sequenceiq/hadoop-docker:2.7.0 .
## Pull the image
The image is also released as an official Docker image from Docker's automated build repository - you can always pull or reference the image when launching containers.
docker pull sequenceiq/hadoop-docker:2.7.0
## Start a container
In order to use the Docker image you have just built or pulled, use:
Make sure that SELinux is disabled on the host. If you are using boot2docker you don't need to do anything.
docker run -it sequenceiq/hadoop-docker:2.7.0 /etc/bootstrap.sh -bash
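Once bootstrap.sh drops you into a shell inside the container, a quick sanity check (a sketch, assuming the JDK's jps tool is on the PATH, as it is in this image) is to list the running Java processes:

```shell
# list running Java processes; the HDFS daemons (NameNode, DataNode,
# SecondaryNameNode) and YARN daemons (ResourceManager, NodeManager)
# should appear if bootstrap completed successfully
jps
```

If a daemon is missing, its log under $HADOOP_PREFIX/logs is the place to look.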
You can run one of the stock examples:
cd $HADOOP_PREFIX
# run the mapreduce example
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar grep input output 'dfs[a-z.]+'
# check the output
bin/hdfs dfs -cat output/*
## Hadoop native libraries, build, Bintray, etc.
The Hadoop build process is no easy task - it requires many libraries at exactly the right versions (protobuf, etc.) and takes some time. We have simplified all of this, done the build, and released a 64-bit version of the Hadoop native libraries on this Bintray repo. Enjoy.
As we have mentioned previously, a Dockerfile was created and released in the official Docker repository.
When I try to pull the image, the download completes but fails at the end with an unauthorized exception. Which authorization do I need to set?
The latest image for 2.7.1:
I try to create a directory in HDFS, but the NameNode is always in safe mode.
bash-4.1# bin/hdfs dfs -mkdir /input
mkdir: Cannot create directory /input. Name node is in safe mode.
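On the safe-mode question above: the NameNode enters safe mode at startup and normally leaves it on its own once enough blocks have been reported. If it stays stuck (common right after the container boots), you can check and, if needed, leave safe mode manually - a sketch, assuming you are in $HADOOP_PREFIX inside the container:

```shell
# show whether the NameNode is currently in safe mode
bin/hdfs dfsadmin -safemode get

# if it never leaves on its own, force it out
# (safe on a fresh single-node sandbox like this image)
bin/hdfs dfsadmin -safemode leave

# the mkdir should now succeed
bin/hdfs dfs -mkdir /input
```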
The latest Dockerfile and image for 2.7.1 on Docker Hub are missing changes committed on GitHub. In my case, I was looking for port 8020 to be exposed, as shown in the link below:
Love the image, but thought you might want to clear up the discrepancy.
thank you very much
I was just curious whether there is any tutorial on building a multi-node Hadoop cluster, as I'm very keen to learn. I'm trying out a setup with 1 master and 2 slave nodes using CentOS. Hope to hear from you soon! Thanks! :)
The host machine is Ubuntu x86_64: docker run --net=host -it sequenceiq/hadoop-docker:2.7.0 /etc/bootstrap.sh -bash
After submitting the example, the job remains in the PREP state. Is this expected?
./hadoop job -list
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
16/01/07 04:44:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
JobId State StartTime UserName Queue Priority UsedContainers RsvdContainers UsedMem RsvdMem NeededMem AM info
job_1452158980050_0001 PREP 1452159025481 root default NORMAL 0 0 0M 0M 0M http://4b8dae616769:8088/proxy/application_1452158980050_0001/
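A job that sits in PREP usually means YARN cannot allocate a container for the ApplicationMaster - for example, because no NodeManager has registered or the available memory is 0. A sketch of the first things to check from inside the container, using the non-deprecated commands:

```shell
# list the NodeManagers the ResourceManager knows about;
# an empty list (or 0 available memory) would explain a stuck PREP job
bin/yarn node -list

# the replacement for the deprecated "hadoop job -list"
bin/mapred job -list
```

The ResourceManager web UI on port 8088 shows the same information per application.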
I am using this image to create a local Hadoop cluster, but when I try to ssh in, it always prompts me for the root password, which is unknown. I was expecting ssh to be passwordless as mentioned in the Dockerfile, but it doesn't work.
Any comment/feedback would be really appreciated.
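Regarding the ssh password prompt above: if the keypair baked into the image doesn't work for you, you can bypass ssh entirely and open a second shell in the running container with docker exec (available since Docker 1.3). The container ID below is a placeholder:

```shell
# find the running container's ID
docker ps

# open an interactive shell in it without ssh
docker exec -it <container_id> bash
```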
I think port 50030 should be added to the EXPOSE list.
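On exposing 50030: note that 50030 was the JobTracker web UI port in Hadoop 1.x; on YARN (as in this image) the ResourceManager UI lives on 8088 instead. Either way, a port does not need to be in EXPOSE to be reachable from the host - you can publish any port explicitly at run time, e.g.:

```shell
# publish the ResourceManager UI (8088) and, if you really want it,
# 50030, regardless of what the Dockerfile EXPOSEs
docker run -it -p 8088:8088 -p 50030:50030 \
  sequenceiq/hadoop-docker:2.7.0 /etc/bootstrap.sh -bash
```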