
Short Description
For Hadoop starters: one NameNode and several DataNodes. Configure and run in 5 minutes.
Full Description

We are a cloud computing research team, and we want to do something useful for people getting started with cloud technologies.

This release is for Hadoop cluster quick-starters.

Hadoop 2.5.2 only.

You can use our Dockerfile to build your own image, or use docker pull directly.
We recommend the former.


Either way, before you start, you need to do the following configuration:

  1. Assuming your NameNode hostname is "namenode", edit core-site.xml with your IP; the default port is 9000;
  2. Do the same for hdfs-site.xml, yarn-site.xml, and mapred-site.xml, and adjust the replication property if you need to;
  3. Edit the hosts file and add ALL the host names and IPs. The default configuration has 1 NameNode and 2 DataNodes;
  4. Add all the DataNode hostnames or IPs to the slaves file.
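As a concrete sketch of steps 3 and 4 (the IP addresses below are hypothetical examples; substitute your own, and write to the real files in your Hadoop configuration directory rather than the scratch directory used here for illustration):

```shell
# Work in a scratch directory so nothing real is overwritten (illustration only).
tmp=$(mktemp -d)

# Step 3: one line per node in the hosts file (example IPs).
cat >> "$tmp/hosts" <<'EOF'
172.17.0.10 namenode
172.17.0.11 datanode1
172.17.0.12 datanode2
EOF

# Step 4: list every DataNode hostname in the slaves file.
printf 'datanode1\ndatanode2\n' > "$tmp/slaves"

cat "$tmp/hosts" "$tmp/slaves"
```

With the default configuration (1 NameNode, 2 DataNodes), that is three hosts entries and two slaves entries.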


OK, almost done now.
After the steps above, you can build the image.
Then start the containers like this:

>docker run -itd --name datanode1 -h datanode1 --net=none YOUR.IMAGE.NAME /etc/ -d

>pipework br0 datanode1

>docker run -itd --name datanode2 -h datanode2 --net=none YOUR.IMAGE.NAME /etc/ -d

>pipework br0 datanode2

>sleep 10

>docker run -itd --name namenode -h namenode --net=none YOUR.IMAGE.NAME /etc/ -dmaster

>pipework br0 namenode
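The six commands above can be collected into a small dry-run helper that prints the startup sequence per node instead of executing it, so you can review the order (DataNodes first, NameNode last) before running it for real; the image name placeholder and flags are taken verbatim from the commands above:

```shell
# Dry-run sketch: emit the startup commands rather than executing them.
IMAGE=YOUR.IMAGE.NAME   # placeholder, as in the commands above

node_cmds() {
  name=$1; flag=$2
  echo "docker run -itd --name $name -h $name --net=none $IMAGE /etc/ $flag"
  echo "pipework br0 $name"
}

node_cmds datanode1 -d
node_cmds datanode2 -d
echo "sleep 10"            # give the DataNodes time to come up
node_cmds namenode -dmaster
```

Piping the output into sh (after replacing the placeholder image name) would execute the same sequence as the commands above.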

By running docker logs -f namenode you can see:

the container's IP
an HDFS report

Now you have a cluster with one NameNode and two DataNodes.

Contact us: Zhongliang.
