Short Description
For Hadoop beginners: includes one NameNode and several DataNodes. Configure and run in 5 minutes.
Full Description

We are a cloud computing research team, and we build tools for people getting started with cloud computing.

This release is for quickly starting a Hadoop cluster.

Hadoop 2.5.2 only.

You can use our Dockerfile to build your own image, or run `docker pull` directly.
We recommend the former.

## Configuration

Whichever way you choose, you must configure the following before you start:

  1. Assuming your NameNode is at `namenode 10.0.0.111`, edit core-site.xml with your own IP; the default port is 9000.
  2. Do the same for hdfs-site.xml, yarn-site.xml, and mapred-site.xml, and change the replication property if you need to.
  3. Edit the hosts file and add ALL host names and IPs. The default configuration has 1 NameNode and 2 DataNodes.
  4. Add all hostnames or IPs to the slave file.
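As a concrete illustration of step 1, the sketch below writes a minimal core-site.xml pointing `fs.defaultFS` at the example NameNode address above. The `hadoop-conf` directory is a placeholder (this README doesn't state where the config files live inside the image); substitute your real Hadoop configuration directory.

```shell
# Minimal core-site.xml for the example NameNode at 10.0.0.111:9000.
# HADOOP_CONF is a hypothetical path; point it at your actual config dir.
HADOOP_CONF=${HADOOP_CONF:-./hadoop-conf}
mkdir -p "$HADOOP_CONF"
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.0.111:9000</value>
  </property>
</configuration>
EOF
```

Edit hdfs-site.xml, yarn-site.xml, and mapred-site.xml in the same directory the same way.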

## Run

Almost done now.
After the steps above, you can build the image.
Then start the containers like this:

```
docker run -itd --name datanode1 -h datanode1 --net=none YOUR.IMAGE.NAME /etc/bootstrap.sh -d
pipework br0 datanode1 10.0.1.11/16

docker run -itd --name datanode2 -h datanode2 --net=none YOUR.IMAGE.NAME /etc/bootstrap.sh -d
pipework br0 datanode2 10.0.1.12/16

sleep 10

docker run -itd --name namenode -h namenode --net=none YOUR.IMAGE.NAME /etc/bootstrap.sh -dmaster
pipework br0 namenode 10.0.0.111/16
```
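The pattern above (start DataNodes first, wait, then start the NameNode) can be generated for any number of DataNodes. This is only a sketch: `IMAGE` and the `10.0.1.1X` addressing scheme are assumptions taken from the example values in this README, and the commands are printed rather than executed.

```shell
# Build the start-up command list: DataNodes first, then a pause, then the NameNode.
# IMAGE and the IP scheme are placeholders based on the examples above.
IMAGE=YOUR.IMAGE.NAME
cmds=""
for i in 1 2; do
  cmds="$cmds
docker run -itd --name datanode$i -h datanode$i --net=none $IMAGE /etc/bootstrap.sh -d
pipework br0 datanode$i 10.0.1.1$i/16"
done
cmds="$cmds
sleep 10
docker run -itd --name namenode -h namenode --net=none $IMAGE /etc/bootstrap.sh -dmaster
pipework br0 namenode 10.0.0.111/16"
printf '%s\n' "$cmds"
```

Pipe the output into `sh` (or copy it into a script) once you have checked the names and addresses match your hosts file.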

Run `docker logs -f namenode` to follow the startup logs, which print:

  - the container's IP
  - the hosts file
  - the HDFS report

Congratulations!

You now have a cluster with one NameNode and two DataNodes.

Contact us: Zhongliang <xiaozhongliang@h2comm.com.cn>.

Owner: h2comm
