Apache Hadoop is a Java software framework that supports data-intensive distributed applications. Hadoop consists of a distributed file system (HDFS) and a system for provisioning virtual Hadoop clusters over a large physical cluster, called Hadoop On Demand (HOD). Sabalcore's Hadoop On Demand assigns nodes and generates the appropriate configuration files for the Hadoop daemons and client. HOD uses the Torque resource manager to allocate nodes. HOD can also distribute Hadoop to the assigned nodes, automatically creating and tearing down the instances. HOD makes it easy for users to rapidly set up and use Hadoop.
Properties of Hadoop at Sabalcore:-
HOD documentation is organized into three guides.
1) HOD admin guide:-
This guide gives an overview of the architecture of HOD, its prerequisites, the installation of the various components and dependent software, and how to configure HOD to get it up and running.
HOD is a system for provisioning and managing independent Hadoop MapReduce and HDFS instances on a shared cluster of nodes. It makes it easy for administrators and users to rapidly set up and use Hadoop, and it is also useful for Hadoop developers and testers who need to share a physical cluster for testing their own Hadoop versions.
The HOD architecture comprises a resource manager (possibly together with a scheduler), the HOD components, and the Hadoop MapReduce and HDFS daemons.
Cluster nodes comprise two sets: submit nodes, from which users run the HOD client to allocate clusters and run jobs, and compute nodes, on which the resource manager launches the HOD components and Hadoop daemons.
The following steps are used for allocating a cluster and running jobs.
The following steps are used for installing HOD.
After installing HOD, it must be configured before it can be used. The steps used for configuring HOD are given below.
a) On the node from which you want to run hod, edit the file hodrc, which can be found in the <install dir>/conf directory. This file contains the minimal set of values required to run hod.
b) Specify values suitable to your environment for the following variables defined in the configuration file. Note that some of these variables are defined in more than one place in the file.
c) The following environment variables *may* need to be set depending on your environment. These variables must be defined where you run the HOD client, and also be specified in the HOD configuration file as the value of the key resource_manager.env-vars. Multiple variables can be specified as a comma separated list of key=value pairs.
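As a sketch, a minimal hodrc might look like the following. The section names match the configuration sections described later in this document, and all paths, the queue name, and the values shown are placeholders for your environment:

```ini
[hod]
java-home = /usr/lib/jvm/java    ; location of Java used by Hadoop (placeholder)
temp-dir  = /tmp/hod             ; must be writable by the users who run hod

[resource_manager]
queue      = batch               ; Torque queue to which jobs are submitted
batch-home = /usr/local/torque   ; 'bin' is appended to locate executables
```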
2) HOD user guide:-
The command-line user interface utility is called hod. Options given on the hod command line override values from the configuration file. The configuration file can be specified in two ways.
1) Specify it on the command line, using the -c option, such as: hod <operation> <required-args> -c path-to-the-configuration-file [other-options]
2) Set up an environment variable HOD_CONF_DIR where hod will be run. It should point to a directory on the local file system containing a file called hodrc. Note that this is analogous to HADOOP_CONF_DIR and the hadoop-site.xml file for Hadoop. If no configuration file is specified on the command line, hod looks for the HOD_CONF_DIR environment variable and a hodrc file under that directory.
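Both mechanisms can be sketched as follows. The allocate operation, node count, and all paths are illustrative, and the final echo merely prints the location hod would fall back to:

```shell
# 1) Explicit configuration file via -c (illustrative invocation):
#      hod allocate -d ~/hod-clusters/test -n 4 -c /opt/hod/conf/hodrc

# 2) HOD_CONF_DIR pointing at a directory that contains hodrc:
export HOD_CONF_DIR=/opt/hod/conf
echo "hod will fall back to: ${HOD_CONF_DIR}/hodrc"
```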
3) HOD configuration guide:- This guide describes the configuration of HOD: the various configuration sections, their parameters, and their purpose.
Troubleshooting covers three common failure modes.
1) HOD hangs during allocation
This can happen for three different reasons.
a) HOD or Hadoop components failed to come up. In this case, the hod command will return after some time with an error code of either 7 or 8, as defined in the error codes section.
b) A large allocation is fired with a tarball. Sometimes due to load in the network, or on the allocated nodes, the tarball distribution might be significantly slow and take a couple of minutes to come back. Wait for completion. Also check that the tarball does not have the Hadoop sources or documentation.
c) A Torque-related error: – In this case, the hod command will not return for more than 5 minutes.
2) HOD hangs during deallocation
A Torque-related problem, such as load on the Torque server or a very large allocation. Generally, waiting for the command to complete is the only option.
3) HOD fails with an error code and error message. If the exit code of the hod command is not 0, refer to the following table of error exit codes to determine why the error may have occurred and how to debug the situation.
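A wrapper script can branch on hod's exit status. Since hod itself is not invoked here, a stub function stands in for a failing allocation and returns code 7, one of the component-failure codes mentioned above:

```shell
# Stub standing in for: hod allocate -d <cluster-dir> -n <nodes>
simulated_hod_allocate() { return 7; }

simulated_hod_allocate
status=$?
if [ "$status" -ne 0 ]; then
    # Non-zero: consult the error exit code table to debug.
    echo "hod exited with code $status"
fi
```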
The HOD configuration file contains the following sections:
1) Common configuration options
a) temp-dir: – Temporary directory used by the HOD processes. Make sure that the users who will run hod have rights to create directories under the directory specified here.
b) debug: – A numeric value from 1 to 4; 4 produces the most log information, and 1 the least.
c) log-dir: – Directory where log files are stored. By default, this is <install-location>/logs/. The restrictions and notes for the temp-dir variable apply here too.
d) xrs-port-range: – A range of ports, among which an available port shall be picked for use to run an XML-RPC server.
e) http-port-range: – A range of ports, among which an available port shall be picked for use to run an HTTP server.
f) java-home: – Location of Java to be used by Hadoop.
g) syslog-address: – Address to which a syslog daemon is bound. The format of the value is host:port. If configured, HOD log messages will be logged to syslog using this value.
2) hod options
a) cluster: – A descriptive name given to the cluster. For Torque, this is specified as a ‘Node property’ for every node in the cluster. HOD uses this value to compute the number of available nodes.
b) client-params: – A comma-separated list of hadoop config parameters specified as key-value pairs. These will be used to generate a hadoop-site.xml on the submit node that should be used for running MapReduce jobs.
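As an illustrative sketch, these two options might appear in the hod section of the configuration file like this; the cluster name and the Hadoop parameter values are placeholders:

```ini
[hod]
cluster       = hadoop-nodes                        ; Torque node property
client-params = mapred.reduce.tasks=2,io.sort.mb=100
```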
3) resource_manager options
a) queue: – Name of the queue configured in the resource manager to which jobs are to be submitted.
b) batch-home: – Install directory to which 'bin' is appended and under which the executables of the resource manager can be found.
c) env-vars: – A comma-separated list of key-value pairs, expressed as key=value, which will be passed to the jobs launched on the compute nodes. For example, if the Python installation is in a non-standard location, one can set the environment variable 'HOD_PYTHON_HOME' to the path to the python executable. The HOD processes launched on the compute nodes can then use this variable.
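A sketch of a corresponding resource_manager section, with placeholder paths and the HOD_PYTHON_HOME example from above:

```ini
[resource_manager]
queue      = batch                                  ; Torque queue name
batch-home = /usr/local/torque                      ; 'bin' is appended to find executables
env-vars   = HOD_PYTHON_HOME=/usr/local/bin/python  ; passed to compute-node jobs
```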
4) ringmaster options
a) work-dirs: – A comma-separated list of paths that will serve as the root for directories that HOD generates and passes to Hadoop to store DFS and MapReduce data. For example, this is where DFS data blocks will be stored. Typically, as many paths are specified as there are disks available, to ensure all disks are utilized. The restrictions and notes for the temp-dir variable apply here too.
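For example, on compute nodes with two data disks, work-dirs might be set as follows (the paths are illustrative):

```ini
[ringmaster]
work-dirs = /disk1/hod,/disk2/hod   ; one root per available disk
```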
5) gridservice-hdfs options
a) external: – If false, this indicates that an HDFS cluster must be brought up by the HOD system, on the nodes which it allocates via the allocate command. Note that in that case, when the cluster is deallocated, the HDFS cluster is brought down and all the data is lost. If true, HOD will try to connect to an externally configured HDFS system. Typically, because input for jobs is placed into HDFS before jobs are run, and the output from jobs in HDFS is required to be persistent, an internal HDFS cluster is of little value in a production system. However, it allows for quick testing.
b) host: – Hostname of the externally configured NameNode, if any.
c) fs_port: – Port to which the NameNode RPC server is bound.
d) info_port: – Port to which the NameNode web UI server is bound.
e) pkgs: – Installation directory under which the bin/hadoop executable is located. This can be used to run a pre-installed version of Hadoop on the cluster.
f) server-params: – A comma-separated list of hadoop config parameters specified as key-value pairs. These will be used to generate a hadoop-site.xml that will be used by the NameNode and DataNodes.
g) final-server-params: – Same as above, except they will be marked final.
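A sketch of a gridservice-hdfs section pointing at an externally configured HDFS; the hostname and ports are placeholders for your environment:

```ini
[gridservice-hdfs]
external  = true
host      = namenode.example.com   ; externally configured NameNode
fs_port   = 9000                   ; NameNode RPC port (illustrative)
info_port = 50070                  ; NameNode web UI port (illustrative)
```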
6) gridservice-mapred options
a) external: – If false, this indicates that a MapReduce cluster must be brought up by the HOD system on the nodes which it allocates via the allocate command. If true, HOD will try to connect to an externally configured MapReduce system.
b) host: – Hostname of the externally configured JobTracker, if any.
c) tracker_port: – Port to which the JobTracker RPC server is bound.
d) info_port: – Port to which the JobTracker web UI server is bound.
e) pkgs: – Installation directory under which the bin/hadoop executable is located.
f) server-params: – A comma-separated list of hadoop config parameters specified as key-value pairs. These will be used to generate a hadoop-site.xml that will be used by the JobTracker and TaskTrackers.
g) final-server-params: – Same as above, except they will be marked final.
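Similarly, a gridservice-mapred section for an external MapReduce system might look like this; the hostname and ports are placeholders:

```ini
[gridservice-mapred]
external     = true
host         = jobtracker.example.com   ; externally configured JobTracker
tracker_port = 9001                     ; JobTracker RPC port (illustrative)
info_port    = 50030                    ; JobTracker web UI port (illustrative)
```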
7) hodring options
a) mapred-system-dir-root: – Directory in the DFS under which HOD will generate sub-directory names and pass the full path as the value of the 'mapred.system.dir' configuration parameter to the Hadoop daemons. The format of the full path is value-of-this-option/userid/mapredsystem/cluster-id. Note that the directory specified here should be one under which all users can create directories, if permissions are enabled in HDFS. Setting the value of this option to /user will make HOD use the user's home directory to generate the mapred.system.dir value.
b) log-destination-uri: – URL describing a path in an external, static DFS or on the cluster node's local file system where HOD will upload Hadoop logs when a cluster is deallocated. To specify a DFS path, use the format 'hdfs://path'. To specify a cluster node's local file path, use the format 'file://path'. When clusters are deallocated by HOD, the Hadoop logs are deleted as part of HOD's cleanup process; to persist these logs, you can use this configuration option. The format of the path is value-of-this-option/userid/hod-logs/cluster-id. Note that the directory you specify here must be one under which all users can create sub-directories. Setting this value to hdfs://user will place the logs in the user's home directory in DFS.
c) pkgs: – Installation directory under which the bin/hadoop executable is located. This will be used by HOD to upload logs if an HDFS URL is specified in the log-destination-uri option. Note that this is useful if the users are using a tarball whose version may differ from the external, static HDFS version.
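The path composition described under mapred-system-dir-root above can be sketched as follows; the root, user id, and cluster id are illustrative:

```shell
root=/user          # configured mapred-system-dir-root
userid=alice        # the allocating user (illustrative)
cluster_id=1234     # cluster id generated by HOD (illustrative)

# value-of-this-option/userid/mapredsystem/cluster-id
mapred_system_dir="${root}/${userid}/mapredsystem/${cluster_id}"
echo "$mapred_system_dir"    # prints /user/alice/mapredsystem/1234
```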