Wednesday, November 10, 2010

sample configuration XML files for Hadoop 0.20.x

In Hadoop 0.19.x and earlier, there was only one XML file to modify: hadoop-site.xml.
From Hadoop 0.20.x onward, there are three XML files you have to configure.
They are (1) core-site.xml, (2) mapred-site.xml, and (3) hdfs-site.xml.
Here are sample XML files that contain only the minimal required settings.


NOTE: these files live in your HADOOP_HOME/conf directory.


1. core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop-0.20.2/hdfs-tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://203.235.211.195:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>

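To see which FileSystem implementation the fs.default.name URI resolves to, you can run a small Java client like the sketch below. It is only an illustration (the class name DefaultFsCheck is made up), and it assumes HADOOP_HOME/conf is on the classpath so the core-site.xml above is actually loaded.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DefaultFsCheck {
    public static void main(String[] args) throws IOException {
        // Loads core-default.xml and core-site.xml from the classpath,
        // so the fs.default.name set above should be visible here.
        Configuration conf = new Configuration();
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));

        // The URI's scheme and authority decide which FileSystem
        // implementation is returned.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("FileSystem class  = " + fs.getClass().getName());
        System.out.println("Working directory = " + fs.getWorkingDirectory());
    }
}

With fs.default.name set to hdfs://203.235.211.195:54310 as above, the printed class should be the HDFS implementation (DistributedFileSystem); with the built-in default of file:/// you would get the local filesystem instead.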


2. mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/hadoop-0.20.2/mapred-tmp</value>
  <description>Comma-separated list of paths on the local
  filesystem where temporary Map/Reduce data is written.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>203.235.211.195:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.</description>
</property>

</configuration>

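If you want to check that jobs really go to the JobTracker at mapred.job.tracker (instead of running in-process with the "local" runner), a tiny submission sketch like the one below can help. It uses the old org.apache.hadoop.mapred API that ships with 0.20; the class name, job name, and the input/output paths taken from args are just placeholders, and the identity map/reduce defaults are relied on so there is no real job logic.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitExample {
    public static void main(String[] args) throws IOException {
        // JobConf reads mapred-site.xml from the classpath, so
        // mapred.job.tracker tells JobClient where to submit the job.
        JobConf conf = new JobConf(SubmitExample.class);
        conf.setJobName("submit-example");
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));

        // Default mapper/reducer are the identity classes, so only the
        // output types and paths need to be declared.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Submits to the JobTracker at mapred.job.tracker, or runs
        // in-process if the value is "local".
        JobClient.runJob(conf);
    }
}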







3. hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop-0.20.2/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/hadoop-0.20.2/dfs_blk/${user.name}</value>
  <description>Comma separated list of paths on the local filesystem of a
  DataNode where it should store its blocks.</description>
</property>

<!-- fs.default.name is already set in core-site.xml, so repeating it here
     is optional; it is included only for reference. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://203.235.211.195:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.</description>
</property>

</configuration>

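The dfs.replication description says the factor can be overridden per file at create time; the sketch below shows both the cluster default and an explicit per-file factor. It is only an illustration (the paths and the factor 2 are arbitrary), and it assumes the client loads the core-site.xml and hdfs-site.xml above from the classpath.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // This file uses the cluster default (dfs.replication = 3 above).
        FSDataOutputStream out1 = fs.create(new Path("/tmp/default-replication.txt"));
        out1.writeBytes("uses dfs.replication from hdfs-site.xml\n");
        out1.close();

        // Per-file override: request 2 replicas at create time.
        short replication = 2;
        FSDataOutputStream out2 = fs.create(new Path("/tmp/two-replicas.txt"), replication);
        out2.writeBytes("replication factor requested explicitly\n");
        out2.close();

        // The factor can also be changed after the file already exists.
        fs.setReplication(new Path("/tmp/default-replication.txt"), (short) 2);

        fs.close();
    }
}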



