
66.5. Hadoop - HDFS

http://hadoop.apache.org/

66.5.1. Standalone Installation

This installation method is only suitable for experiments and for quickly standing up a Hadoop environment; it is not suitable for production.

Ubuntu environment

$ sudo apt-get install openjdk-7-jre
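
Hadoop's single-node setup guide also expects ssh and rsync to be present; installing them alongside the JRE avoids surprises later:

$ sudo apt-get install ssh rsync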
		

Procedure 66.1. Master configuration

  1. Download and install the software

    				
    $ cd /usr/local/src/
    $ wget http://apache.etoak.com/hadoop/core/hadoop-0.20.0/hadoop-0.20.0.tar.gz
    $ tar zxvf hadoop-0.20.0.tar.gz
    $ sudo cp -r hadoop-0.20.0 /usr/local/
    $ cd /usr/local
    $ sudo ln -s hadoop-0.20.0 hadoop
    $ cd hadoop
    				
    				
  2. Configuration

    hadoop-env.sh

    				
    $ vim conf/hadoop-env.sh
    export JAVA_HOME=/usr
    				
    				

    conf/core-site.xml

    				
    $ vim conf/core-site.xml
    
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
    				
    				

    conf/hdfs-site.xml

    				
    $ vim conf/hdfs-site.xml
    
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
    				
    				

    conf/mapred-site.xml

    				
    $ vim conf/mapred-site.xml
    
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>
    				
    				
  3. Set up passphraseless ssh

    				
    Now check that you can ssh to the localhost without a passphrase:
    $ ssh localhost
    
    If you cannot ssh to localhost without a passphrase, execute the following commands:
    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
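    
    If key-based login still prompts for a password, file permissions are the usual culprit; tightening them is harmless:
    $ chmod 700 ~/.ssh
    $ chmod 600 ~/.ssh/authorized_keys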
    				
    				
  4. Execution

     Format a new distributed-filesystem:
    $ bin/hadoop namenode -format
    
    Start the hadoop daemons:
    $ bin/start-all.sh
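    
    To confirm the daemons came up, jps (shipped with the JDK, not the JRE) should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker:
    $ jps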
    
    When you're done, stop the daemons with:
    $ bin/stop-all.sh
    				
  5. Monitor

    Browse the web interface for the NameNode and the JobTracker; by default they are available at:

    • NameNode - http://localhost:50070/

    • JobTracker - http://localhost:50030/

  6. Test

    				
    $ bin/hadoop dfs -mkdir test
    $ echo helloworld > testfile
    $ bin/hadoop dfs -copyFromLocal testfile test/
    $ bin/hadoop dfs -ls
    Found 1 items
    drwxr-xr-x   - neo supergroup          0 2009-07-10 14:18 /user/neo/test
    
    $ bin/hadoop dfs -ls test
    
    $ bin/hadoop dfs -cat test/testfile
    				
    				

Procedure 66.2. Slave configuration

  1. SSH

    				
    $ scp neo@master:~/.ssh/id_dsa.pub .ssh/master.pub
    $ cat .ssh/master.pub >> .ssh/authorized_keys
    				
    				
  2. Hadoop

    				
    $ scp -r neo@master:/usr/local/hadoop /usr/local/
    				
    				

66.5.2. Distributed Installation


HDFS:
      NameNode  : management node
      DataNode  : data node
      SecondaryNamenode : checkpoint node that backs up and consolidates namespace metadata

MapReduce:
       JobTracker  : job management node
       TaskTracker : task execution node

66.5.2.1. Preparation

Prepare three servers, each with a minimal installation of CentOS 6.4:


NameNode    192.168.2.10  hostname: namenode
DataNode    192.168.2.11  hostname: datanode1
DataNode    192.168.2.12  hostname: datanode2

JobTracker  192.168.2.10  (may run on a machine of its own or share one with the NameNode; since only HDFS is used here, it is optional)
TaskTracker (shares machines with the DataNodes)

Set up the network so that the machines can reach each other, then disable the firewall and SELinux:

# yum update -y
# lokkit --disabled --selinux=disabled
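
lokkit only changes the configuration; SELinux is not fully disabled until the next reboot. To switch to permissive mode immediately and confirm the state:

# setenforce 0
# sestatus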
			

Important Hadoop ports


1. JobTracker web interface: 50030
2. HDFS web interface:       50070
3. HDFS RPC port:            9000
4. MapReduce RPC port:       9001
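
Once the daemons are up, a quick way to confirm they are listening on these ports (assuming net-tools is installed; use ss -tlnp on newer systems):

# netstat -tlnp | egrep ':(9000|9001|50030|50070)'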

Procedure 66.3. Hadoop - Preparation

  1. Install a Java runtime on all servers

    Using CentOS 6.4 as an example:

    # yum install java-1.7.0-openjdk
    					
  2. Install Hadoop on all servers

    There are two installation methods, rpm and yum; choose one of them.

    # rpm -ivh http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    Retrieving http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:hadoop                 ########################################### [100%]
    					
    # yum localinstall http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    					

    If your network connection is slow, download the package first with wget or axel, then install it locally:

    # wget http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    # yum localinstall hadoop-1.1.2-1.x86_64.rpm
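
    Verify the installation:

    # hadoop version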
    					

    The RPM creates the following Hadoop users:

    # grep Hadoop /etc/passwd
    mapred:x:202:123:Hadoop MapReduce:/tmp:/bin/bash
    hdfs:x:201:123:Hadoop HDFS:/tmp:/bin/bash
    					
  3. Configure the /etc/hosts file

    					
    cat >> /etc/hosts <<EOD
    
    ###############################
    # Hadoop Host
    ###############################
    #NameNode
    192.168.2.10 	namenode.example.com
    
    #DataNode
    192.168.2.11 	datanode1.example.com
    192.168.2.12 	datanode2.example.com
    
    EOD
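
    Confirm that every host resolves and responds:

    # for h in namenode datanode1 datanode2; do ping -c 1 $h.example.com; done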
    					
    					
  4. Generate an SSH key pair

    					
    # ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    cc:6f:30:76:82:28:96:13:c8:e6:bc:d7:5b:2d:11:d7 root@images-upload
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |..        .      |
    |.o.    . . E     |
    |+  o . +o        |
    | o= . ..S .      |
    | ..o.  .o*       |
    | . . . o .o      |
    |  .   o ..       |
    |     .           |
    +-----------------+
    					
    					
  5. Install the public key

    Push the public key to every DataNode server:

    					
    ssh-copy-id root@datanode1.example.com
    ssh-copy-id root@datanode2.example.com
    					
    					

    Type yes when prompted, then enter the password; that completes the key installation. The process looks like this:

    # ssh-copy-id root@datanode1.example.com
    The authenticity of host 'datanode1.example.com (192.168.2.11)' can't be established.
    RSA key fingerprint is f1:0b:b1:63:1a:f6:ac:a3:da:4f:14:b5:f0:cc:df:67.
    Are you sure you want to continue connecting (yes/no)? yes    (type yes)
    Warning: Permanently added 'datanode1.example.com' (RSA) to the list of known hosts.
    root@datanode1.example.com's password:    (enter the password)
    Now try logging into the machine, with "ssh 'root@datanode1.example.com'", and check in:
    
      .ssh/authorized_keys
    
    to make sure we haven't added extra keys that you weren't expecting.
    
    # ssh-copy-id root@datanode2.example.com
    The authenticity of host 'datanode2.example.com (192.168.2.12)' can't be established.
    RSA key fingerprint is f1:0b:b1:63:1a:f6:ac:a3:da:4f:14:b5:f0:cc:df:67.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'datanode2.example.com,192.168.2.12' (RSA) to the list of known hosts.
    root@datanode2.example.com's password:
    Now try logging into the machine, with "ssh 'root@datanode2.example.com'", and check in:
    
      .ssh/authorized_keys
    
    to make sure we haven't added extra keys that you weren't expecting.
    
    					

    When finished, test the login; if you get a shell without being prompted for a password, everything is set up correctly:

    # ssh root@datanode1.example.com
    # exit
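
    With several DataNodes the check can be scripted; each command should print the remote hostname without a password prompt:

    # for h in datanode1 datanode2; do ssh root@$h.example.com hostname; done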
    					

66.5.2.2. Configuring the NameNode

Configuration files:

core-site.xml    common properties
hdfs-site.xml    HDFS properties
mapred-site.xml  MapReduce properties
hadoop-env.sh    Hadoop environment variables
			

Procedure 66.4. Hadoop - NameNode

  1. Configuration file hadoop-env.sh

    Change /usr/java/default to /usr:

    # cp hadoop-env.sh hadoop-env.sh.original
    # sed -i "s:/usr/java/default:/usr:" hadoop-env.sh
    					
  2. Configuration file core-site.xml

    					
    # cp core-site.xml core-site.xml.original
    
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
             <name>fs.default.name</name>
             <value>hdfs://namenode.example.com:9000</value>
        </property>
        <property>
             <name>hadoop.tmp.dir</name>
             <value>/var/tmp/hadoop</value>
        </property>
    </configuration>
    					
    					

    fs.default.name: the URI of the NameNode, in the form hdfs://hostname:port/

    hadoop.tmp.dir: Hadoop's default temporary directory.

  3. Configuration file mapred-site.xml

    					
    # cp mapred-site.xml mapred-site.xml.original
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>namenode.example.com:9001</value>
        </property>
        <property>
            <name>mapred.local.dir</name>
            <value>/var/tmp/hadoop</value>
        </property>
    </configuration>
    					
    					

    mapred.job.tracker: the host and port of the JobTracker.

  4. Configuration file hdfs-site.xml

    					
    # cp hdfs-site.xml hdfs-site.xml.original
    
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/var/hadoop/name1,/var/hadoop/name2</value>
        </property>
        <property>
            <name>dfs.data.dir</name>
            <value>/var/hadoop/hdfs/data1</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>
    					
    					
    dfs.name.dir: the local filesystem path where the NameNode persists the namespace and the transaction log. When the value is a comma-separated list of directories, the name table is replicated to every directory for redundancy (as in the example above).

    dfs.data.dir: the local filesystem path(s) where a DataNode stores its blocks, also a comma-separated list. With multiple directories, data is spread across all of them, usually on different physical devices.

    dfs.replication: the number of replicas kept of each block. The default is 3; a value larger than the number of machines in the cluster causes errors.
  5. Configure the masters and slaves files

    Back up the masters and slaves configuration files first:

     # cp masters masters.original
     # cp slaves slaves.original
    					
    					
    cat > /etc/hadoop/masters <<EOD
    namenode.example.com
    EOD
    					
    					
    					
    cat > /etc/hadoop/slaves <<EOD
    datanode1.example.com
    datanode2.example.com
    EOD
    					
    					
  6. Copy the configuration files

    # cd /etc/hadoop/
    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode1.example.com:/etc/hadoop/
    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode2.example.com:/etc/hadoop/
    					

    Console output similar to the following indicates the copy succeeded:

    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode1.example.com:/etc/hadoop/
    hadoop-env.sh                                                                          100% 2116     2.1KB/s   00:00
    core-site.xml                                                                          100%  412     0.4KB/s   00:00
    mapred-site.xml                                                                        100%  406     0.4KB/s   00:00
    hdfs-site.xml                                                                          100%  595     0.6KB/s   00:00
    masters                                                                                100%   21     0.0KB/s   00:00
    slaves
    					

    This pushes the NameNode's configuration files to every DataNode.
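
    With more DataNodes this is easier as a loop (hostnames as configured above):

    # for h in datanode1 datanode2; do scp /etc/hadoop/{hadoop-env.sh,core-site.xml,mapred-site.xml,hdfs-site.xml,masters,slaves} root@$h.example.com:/etc/hadoop/; done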

  7. Start Hadoop

    Create the working directories:

    # mkdir /var/hadoop/
    # mkdir /var/hadoop/name{1,2}
    # su - hdfs -c "mkdir -p /var/hadoop/hdfs/data1"
    					
    # hadoop namenode -format
    13/04/23 14:35:33 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = namenode.example.com/192.168.2.10
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.1.2
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:06:43 UTC 2013
    ************************************************************/
    Re-format filesystem in /var/hadoop/name1 ? (Y or N) Y
    13/04/23 14:35:37 INFO util.GSet: VM type       = 64-bit
    13/04/23 14:35:37 INFO util.GSet: 2% max memory = 2.475 MB
    13/04/23 14:35:37 INFO util.GSet: capacity      = 2^18 = 262144 entries
    13/04/23 14:35:37 INFO util.GSet: recommended=262144, actual=262144
    13/04/23 14:35:37 INFO namenode.FSNamesystem: fsOwner=root
    13/04/23 14:35:37 INFO namenode.FSNamesystem: supergroup=supergroup
    13/04/23 14:35:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
    13/04/23 14:35:37 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    13/04/23 14:35:37 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    13/04/23 14:35:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
    13/04/23 14:35:38 INFO common.Storage: Image file of size 110 saved in 0 seconds.
    13/04/23 14:35:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/name1/current/edits
    13/04/23 14:35:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/name1/current/edits
    13/04/23 14:35:38 INFO common.Storage: Storage directory /var/hadoop/name1 has been successfully formatted.
    13/04/23 14:35:38 INFO common.Storage: Image file of size 110 saved in 0 seconds.
    13/04/23 14:35:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog= /var/hadoop/name2/current/edits
    13/04/23 14:35:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog= /var/hadoop/name2/current/edits
    13/04/23 14:35:38 INFO common.Storage: Storage directory  /var/hadoop/name2 has been successfully formatted.
    13/04/23 14:35:38 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at namenode.example.com/192.168.2.10
    ************************************************************/
    					
    # chown hdfs:hadoop -R /var/hadoop
    					
    # /etc/init.d/hadoop-namenode start
    # /etc/init.d/hadoop-datanode start
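
    Verify that the daemons are running: jps should list NameNode (and DataNode, since this host runs both), and dfsadmin reports cluster capacity once the DataNodes have registered:

    # jps
    # hadoop dfsadmin -report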
    					

The NameNode web interface should now be reachable at http://192.168.2.10:50070/

66.5.2.3. Configuring the DataNodes

Procedure 66.5. Hadoop - DataNode

  1. Create the Hadoop data storage directory

    					
    # mkdir /var/hadoop/
    # chown hdfs:hadoop -R /var/hadoop
    # su - hdfs -c "mkdir -p /var/hadoop/hdfs/data1"
    					
    					
  2. Start Hadoop

    					
    # /etc/init.d/hadoop-datanode start
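
    jps on each DataNode should now list a DataNode process:

    # jps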
    					
    					

66.5.2.4. Hadoop UI (Web Interface)

Commonly used pages:


1. HDFS web interface
        http://hostname:50070
2. MapReduce web interface
        http://hostname:50030
        

66.5.2.5. Testing Hadoop

Copy the install.log file into the distributed filesystem:

# hadoop fs -mkdir test
# hadoop fs -put install.log test
			

Display the file contents:

# hadoop dfs -cat test/install.log
			

Inspect the directory structure:

# hadoop dfs -ls
Found 1 items
drwxr-xr-x   - root supergroup          0 2013-04-23 15:20 /user/root/test
[root@namenode ~]# hadoop dfs -ls test
Found 1 items
-rw-r--r--   2 root supergroup      10278 2013-04-23 15:20 /user/root/test/install.log
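
Remove the test data when finished (Hadoop 1.x shell syntax):

# hadoop fs -rmr test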
			

66.5.3. Binary Package Installation

66.5.3.1. Install the JRE

If you are using a JRE downloaded from Oracle's website, you need to configure the Java classpath:

			
$ sudo vim /etc/profile.d/java.sh
################################################
### Java environment by neo
################################################
export JAVA_HOME=/usr
export JRE_HOME=/usr
export PATH=$PATH:/usr/local/apache-tomcat/bin/:/usr/local/jetty-6.1.18/bin:/usr/local/apache-nutch/bin
export CLASSPATH="./:/usr/share/java/:/usr/local/apache-solr/example/multicore/lib"
export JAVA_OPTS="-Xms128m -Xmx1024m"
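
Reload the profile and confirm that the JVM is visible:

$ source /etc/profile.d/java.sh
$ java -version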
			
			

66.5.3.2. Install Hadoop

wget http://apache.communilink.net/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-bin.tar.gz
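
A minimal sketch of unpacking the tarball (the /usr/local prefix is a common convention, not a requirement):

# tar zxvf hadoop-1.1.2-bin.tar.gz
# mv hadoop-1.1.2 /usr/local/
# ln -s /usr/local/hadoop-1.1.2 /usr/local/hadoop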
			

Starting and stopping Hadoop


Start/stop all services:  start-all.sh / stop-all.sh
Start/stop HDFS:          start-dfs.sh / stop-dfs.sh
Start/stop MapReduce:     start-mapred.sh / stop-mapred.sh

66.5.3.3. Create a User

If you installed from the tarball, create a dedicated user; if you installed from the RPM, skip this step, as the RPM creates the accounts for you.

# adduser hadoop
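
To mirror the accounts the RPM creates instead (the hadoop group name is an assumption here):

# groupadd hadoop
# useradd -g hadoop hdfs
# useradd -g hadoop mapred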
			

			

66.5.4. FAQ

66.5.4.1. What does hadoop-1.1.2-1.x86_64.rpm contain?

List the installed files:

# rpm -ql hadoop-1.1.2-1
/etc/hadoop/capacity-scheduler.xml
/etc/hadoop/configuration.xsl
/etc/hadoop/core-site.xml
/etc/hadoop/fair-scheduler.xml
/etc/hadoop/hadoop-env.sh
/etc/hadoop/hadoop-metrics2.properties
/etc/hadoop/hadoop-policy.xml
/etc/hadoop/hdfs-site.xml
/etc/hadoop/log4j.properties
/etc/hadoop/mapred-queue-acls.xml
/etc/hadoop/mapred-site.xml
/etc/hadoop/masters
/etc/hadoop/slaves
/etc/hadoop/ssl-client.xml.example
/etc/hadoop/ssl-server.xml.example
/etc/hadoop/taskcontroller.cfg
/etc/rc.d/init.d
/etc/rc.d/init.d/hadoop-datanode
/etc/rc.d/init.d/hadoop-historyserver
/etc/rc.d/init.d/hadoop-jobtracker
/etc/rc.d/init.d/hadoop-namenode
/etc/rc.d/init.d/hadoop-secondarynamenode
/etc/rc.d/init.d/hadoop-tasktracker
/usr
/usr/bin
/usr/bin/hadoop
/usr/bin/task-controller
/usr/include
/usr/include/hadoop
/usr/include/hadoop/Pipes.hh
/usr/include/hadoop/SerialUtils.hh
/usr/include/hadoop/StringUtils.hh
/usr/include/hadoop/TemplateFactory.hh
/usr/lib
/usr/lib64
/usr/lib64/libhadoop.a
/usr/lib64/libhadoop.la
/usr/lib64/libhadoop.so
/usr/lib64/libhadoop.so.1
/usr/lib64/libhadoop.so.1.0.0
/usr/lib64/libhadooppipes.a
/usr/lib64/libhadooputils.a
/usr/lib64/libhdfs.a
/usr/lib64/libhdfs.la
/usr/lib64/libhdfs.so
/usr/lib64/libhdfs.so.0
/usr/lib64/libhdfs.so.0.0.0
/usr/libexec
/usr/libexec/hadoop-config.sh
/usr/libexec/jsvc.amd64
/usr/man
/usr/native
/usr/sbin
/usr/sbin/hadoop-create-user.sh
/usr/sbin/hadoop-daemon.sh
/usr/sbin/hadoop-daemons.sh
/usr/sbin/hadoop-setup-applications.sh
/usr/sbin/hadoop-setup-conf.sh
/usr/sbin/hadoop-setup-hdfs.sh
/usr/sbin/hadoop-setup-single-node.sh
/usr/sbin/hadoop-validate-setup.sh
/usr/sbin/rcc
/usr/sbin/slaves.sh
/usr/sbin/start-all.sh
/usr/sbin/start-balancer.sh
/usr/sbin/start-dfs.sh
/usr/sbin/start-jobhistoryserver.sh
/usr/sbin/start-mapred.sh
/usr/sbin/stop-all.sh
/usr/sbin/stop-balancer.sh
/usr/sbin/stop-dfs.sh
/usr/sbin/stop-jobhistoryserver.sh
/usr/sbin/stop-mapred.sh
/usr/sbin/update-hadoop-env.sh
/usr/share
/usr/share/doc
/usr/share/doc/hadoop
/usr/share/doc/hadoop/CHANGES.txt
/usr/share/doc/hadoop/LICENSE.txt
/usr/share/doc/hadoop/NOTICE.txt
/usr/share/doc/hadoop/README.txt
/usr/share/hadoop
/usr/share/hadoop/contrib
/usr/share/hadoop/contrib/datajoin
/usr/share/hadoop/contrib/datajoin/hadoop-datajoin-1.1.2.jar
/usr/share/hadoop/contrib/failmon
/usr/share/hadoop/contrib/failmon/hadoop-failmon-1.1.2.jar
/usr/share/hadoop/contrib/gridmix
/usr/share/hadoop/contrib/gridmix/hadoop-gridmix-1.1.2.jar
/usr/share/hadoop/contrib/hdfsproxy
/usr/share/hadoop/contrib/hdfsproxy/README
/usr/share/hadoop/contrib/hdfsproxy/bin
/usr/share/hadoop/contrib/hdfsproxy/bin/hdfsproxy
/usr/share/hadoop/contrib/hdfsproxy/bin/hdfsproxy-config.sh
/usr/share/hadoop/contrib/hdfsproxy/bin/hdfsproxy-daemon.sh
/usr/share/hadoop/contrib/hdfsproxy/bin/hdfsproxy-daemons.sh
/usr/share/hadoop/contrib/hdfsproxy/bin/hdfsproxy-slaves.sh
/usr/share/hadoop/contrib/hdfsproxy/bin/start-hdfsproxy.sh
/usr/share/hadoop/contrib/hdfsproxy/bin/stop-hdfsproxy.sh
/usr/share/hadoop/contrib/hdfsproxy/build.xml
/usr/share/hadoop/contrib/hdfsproxy/conf
/usr/share/hadoop/contrib/hdfsproxy/conf/configuration.xsl
/usr/share/hadoop/contrib/hdfsproxy/conf/hdfsproxy-default.xml
/usr/share/hadoop/contrib/hdfsproxy/conf/hdfsproxy-env.sh
/usr/share/hadoop/contrib/hdfsproxy/conf/hdfsproxy-env.sh.template
/usr/share/hadoop/contrib/hdfsproxy/conf/hdfsproxy-hosts
/usr/share/hadoop/contrib/hdfsproxy/conf/log4j.properties
/usr/share/hadoop/contrib/hdfsproxy/conf/ssl-server.xml
/usr/share/hadoop/contrib/hdfsproxy/conf/tomcat-forward-web.xml
/usr/share/hadoop/contrib/hdfsproxy/conf/tomcat-web.xml
/usr/share/hadoop/contrib/hdfsproxy/conf/user-certs.xml
/usr/share/hadoop/contrib/hdfsproxy/conf/user-permissions.xml
/usr/share/hadoop/contrib/hdfsproxy/hdfsproxy-2.0.jar
/usr/share/hadoop/contrib/hdfsproxy/logs
/usr/share/hadoop/contrib/hod
/usr/share/hadoop/contrib/hod/CHANGES.txt
/usr/share/hadoop/contrib/hod/README
/usr/share/hadoop/contrib/hod/bin
/usr/share/hadoop/contrib/hod/bin/VERSION
/usr/share/hadoop/contrib/hod/bin/checknodes
/usr/share/hadoop/contrib/hod/bin/hod
/usr/share/hadoop/contrib/hod/bin/hodcleanup
/usr/share/hadoop/contrib/hod/bin/hodring
/usr/share/hadoop/contrib/hod/bin/ringmaster
/usr/share/hadoop/contrib/hod/bin/verify-account
/usr/share/hadoop/contrib/hod/build.xml
/usr/share/hadoop/contrib/hod/conf
/usr/share/hadoop/contrib/hod/conf/hodrc
/usr/share/hadoop/contrib/hod/config.txt
/usr/share/hadoop/contrib/hod/getting_started.txt
/usr/share/hadoop/contrib/hod/hodlib
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/goldAllocationManager.py
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/goldAllocationManager.pyc
/usr/share/hadoop/contrib/hod/hodlib/AllocationManagers/goldAllocationManager.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common
/usr/share/hadoop/contrib/hod/hodlib/Common/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/Common/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/allocationManagerUtil.py
/usr/share/hadoop/contrib/hod/hodlib/Common/allocationManagerUtil.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/allocationManagerUtil.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/desc.py
/usr/share/hadoop/contrib/hod/hodlib/Common/desc.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/desc.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/descGenerator.py
/usr/share/hadoop/contrib/hod/hodlib/Common/descGenerator.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/descGenerator.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/hodsvc.py
/usr/share/hadoop/contrib/hod/hodlib/Common/hodsvc.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/hodsvc.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/logger.py
/usr/share/hadoop/contrib/hod/hodlib/Common/logger.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/logger.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/miniHTMLParser.py
/usr/share/hadoop/contrib/hod/hodlib/Common/miniHTMLParser.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/miniHTMLParser.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/nodepoolutil.py
/usr/share/hadoop/contrib/hod/hodlib/Common/nodepoolutil.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/nodepoolutil.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/setup.py
/usr/share/hadoop/contrib/hod/hodlib/Common/setup.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/setup.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/socketServers.py
/usr/share/hadoop/contrib/hod/hodlib/Common/socketServers.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/socketServers.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/tcp.py
/usr/share/hadoop/contrib/hod/hodlib/Common/tcp.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/tcp.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/threads.py
/usr/share/hadoop/contrib/hod/hodlib/Common/threads.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/threads.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/types.py
/usr/share/hadoop/contrib/hod/hodlib/Common/types.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/types.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/util.py
/usr/share/hadoop/contrib/hod/hodlib/Common/util.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/util.pyo
/usr/share/hadoop/contrib/hod/hodlib/Common/xmlrpc.py
/usr/share/hadoop/contrib/hod/hodlib/Common/xmlrpc.pyc
/usr/share/hadoop/contrib/hod/hodlib/Common/xmlrpc.pyo
/usr/share/hadoop/contrib/hod/hodlib/GridServices
/usr/share/hadoop/contrib/hod/hodlib/GridServices/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/GridServices/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/GridServices/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/GridServices/hdfs.py
/usr/share/hadoop/contrib/hod/hodlib/GridServices/hdfs.pyc
/usr/share/hadoop/contrib/hod/hodlib/GridServices/hdfs.pyo
/usr/share/hadoop/contrib/hod/hodlib/GridServices/mapred.py
/usr/share/hadoop/contrib/hod/hodlib/GridServices/mapred.pyc
/usr/share/hadoop/contrib/hod/hodlib/GridServices/mapred.pyo
/usr/share/hadoop/contrib/hod/hodlib/GridServices/service.py
/usr/share/hadoop/contrib/hod/hodlib/GridServices/service.pyc
/usr/share/hadoop/contrib/hod/hodlib/GridServices/service.pyo
/usr/share/hadoop/contrib/hod/hodlib/Hod
/usr/share/hadoop/contrib/hod/hodlib/Hod/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/Hod/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/Hod/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/Hod/hadoop.py
/usr/share/hadoop/contrib/hod/hodlib/Hod/hadoop.pyc
/usr/share/hadoop/contrib/hod/hodlib/Hod/hadoop.pyo
/usr/share/hadoop/contrib/hod/hodlib/Hod/hod.py
/usr/share/hadoop/contrib/hod/hodlib/Hod/hod.pyc
/usr/share/hadoop/contrib/hod/hodlib/Hod/hod.pyo
/usr/share/hadoop/contrib/hod/hodlib/Hod/nodePool.py
/usr/share/hadoop/contrib/hod/hodlib/Hod/nodePool.pyc
/usr/share/hadoop/contrib/hod/hodlib/Hod/nodePool.pyo
/usr/share/hadoop/contrib/hod/hodlib/HodRing
/usr/share/hadoop/contrib/hod/hodlib/HodRing/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/HodRing/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/HodRing/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/HodRing/hodRing.py
/usr/share/hadoop/contrib/hod/hodlib/HodRing/hodRing.pyc
/usr/share/hadoop/contrib/hod/hodlib/HodRing/hodRing.pyo
/usr/share/hadoop/contrib/hod/hodlib/NodePools
/usr/share/hadoop/contrib/hod/hodlib/NodePools/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/NodePools/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/NodePools/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/NodePools/torque.py
/usr/share/hadoop/contrib/hod/hodlib/NodePools/torque.pyc
/usr/share/hadoop/contrib/hod/hodlib/NodePools/torque.pyo
/usr/share/hadoop/contrib/hod/hodlib/RingMaster
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/idleJobTracker.py
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/idleJobTracker.pyc
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/idleJobTracker.pyo
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/ringMaster.py
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/ringMaster.pyc
/usr/share/hadoop/contrib/hod/hodlib/RingMaster/ringMaster.pyo
/usr/share/hadoop/contrib/hod/hodlib/Schedulers
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/torque.py
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/torque.pyc
/usr/share/hadoop/contrib/hod/hodlib/Schedulers/torque.pyo
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/serviceProxy.py
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/serviceProxy.pyc
/usr/share/hadoop/contrib/hod/hodlib/ServiceProxy/serviceProxy.pyo
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/__init__.pyo
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/serviceRegistry.py
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/serviceRegistry.pyc
/usr/share/hadoop/contrib/hod/hodlib/ServiceRegistry/serviceRegistry.pyo
/usr/share/hadoop/contrib/hod/hodlib/__init__.py
/usr/share/hadoop/contrib/hod/hodlib/__init__.pyc
/usr/share/hadoop/contrib/hod/hodlib/__init__.pyo
/usr/share/hadoop/contrib/hod/ivy
/usr/share/hadoop/contrib/hod/ivy.xml
/usr/share/hadoop/contrib/hod/ivy/libraries.properties
/usr/share/hadoop/contrib/hod/support
/usr/share/hadoop/contrib/hod/support/checklimits.sh
/usr/share/hadoop/contrib/hod/support/logcondense.py
/usr/share/hadoop/contrib/hod/support/logcondense.pyc
/usr/share/hadoop/contrib/hod/support/logcondense.pyo
/usr/share/hadoop/contrib/hod/testing
/usr/share/hadoop/contrib/hod/testing/__init__.py
/usr/share/hadoop/contrib/hod/testing/__init__.pyc
/usr/share/hadoop/contrib/hod/testing/__init__.pyo
/usr/share/hadoop/contrib/hod/testing/helper.py
/usr/share/hadoop/contrib/hod/testing/helper.pyc
/usr/share/hadoop/contrib/hod/testing/helper.pyo
/usr/share/hadoop/contrib/hod/testing/lib.py
/usr/share/hadoop/contrib/hod/testing/main.py
/usr/share/hadoop/contrib/hod/testing/main.pyc
/usr/share/hadoop/contrib/hod/testing/main.pyo
/usr/share/hadoop/contrib/hod/testing/testHadoop.py
/usr/share/hadoop/contrib/hod/testing/testHadoop.pyc
/usr/share/hadoop/contrib/hod/testing/testHadoop.pyo
/usr/share/hadoop/contrib/hod/testing/testHod.py
/usr/share/hadoop/contrib/hod/testing/testHod.pyc
/usr/share/hadoop/contrib/hod/testing/testHod.pyo
/usr/share/hadoop/contrib/hod/testing/testHodCleanup.py
/usr/share/hadoop/contrib/hod/testing/testHodCleanup.pyc
/usr/share/hadoop/contrib/hod/testing/testHodCleanup.pyo
/usr/share/hadoop/contrib/hod/testing/testHodRing.py
/usr/share/hadoop/contrib/hod/testing/testHodRing.pyc
/usr/share/hadoop/contrib/hod/testing/testHodRing.pyo
/usr/share/hadoop/contrib/hod/testing/testModule.py
/usr/share/hadoop/contrib/hod/testing/testModule.pyc
/usr/share/hadoop/contrib/hod/testing/testModule.pyo
/usr/share/hadoop/contrib/hod/testing/testRingmasterRPCs.py
/usr/share/hadoop/contrib/hod/testing/testRingmasterRPCs.pyc
/usr/share/hadoop/contrib/hod/testing/testRingmasterRPCs.pyo
/usr/share/hadoop/contrib/hod/testing/testThreads.py
/usr/share/hadoop/contrib/hod/testing/testThreads.pyc
/usr/share/hadoop/contrib/hod/testing/testThreads.pyo
/usr/share/hadoop/contrib/hod/testing/testTypes.py
/usr/share/hadoop/contrib/hod/testing/testTypes.pyc
/usr/share/hadoop/contrib/hod/testing/testTypes.pyo
/usr/share/hadoop/contrib/hod/testing/testUtil.py
/usr/share/hadoop/contrib/hod/testing/testUtil.pyc
/usr/share/hadoop/contrib/hod/testing/testUtil.pyo
/usr/share/hadoop/contrib/hod/testing/testXmlrpc.py
/usr/share/hadoop/contrib/hod/testing/testXmlrpc.pyc
/usr/share/hadoop/contrib/hod/testing/testXmlrpc.pyo
/usr/share/hadoop/contrib/index
/usr/share/hadoop/contrib/index/hadoop-index-1.1.2.jar
/usr/share/hadoop/contrib/streaming
/usr/share/hadoop/contrib/streaming/hadoop-streaming-1.1.2.jar
/usr/share/hadoop/contrib/vaidya
/usr/share/hadoop/contrib/vaidya/bin
/usr/share/hadoop/contrib/vaidya/bin/vaidya.sh
/usr/share/hadoop/contrib/vaidya/conf
/usr/share/hadoop/contrib/vaidya/conf/postex_diagnosis_tests.xml
/usr/share/hadoop/contrib/vaidya/hadoop-vaidya-1.1.2.jar
/usr/share/hadoop/hadoop-ant-1.1.2.jar
/usr/share/hadoop/hadoop-client-1.1.2.jar
/usr/share/hadoop/hadoop-core-1.1.2.jar
/usr/share/hadoop/hadoop-examples-1.1.2.jar
/usr/share/hadoop/hadoop-minicluster-1.1.2.jar
/usr/share/hadoop/hadoop-test-1.1.2.jar
/usr/share/hadoop/hadoop-tools-1.1.2.jar
/usr/share/hadoop/lib
/usr/share/hadoop/lib/asm-3.2.jar
/usr/share/hadoop/lib/aspectjrt-1.6.11.jar
/usr/share/hadoop/lib/aspectjtools-1.6.11.jar
/usr/share/hadoop/lib/commons-beanutils-1.7.0.jar
/usr/share/hadoop/lib/commons-beanutils-core-1.8.0.jar
/usr/share/hadoop/lib/commons-cli-1.2.jar
/usr/share/hadoop/lib/commons-codec-1.4.jar
/usr/share/hadoop/lib/commons-collections-3.2.1.jar
/usr/share/hadoop/lib/commons-configuration-1.6.jar
/usr/share/hadoop/lib/commons-daemon-1.0.1.jar
/usr/share/hadoop/lib/commons-digester-1.8.jar
/usr/share/hadoop/lib/commons-el-1.0.jar
/usr/share/hadoop/lib/commons-httpclient-3.0.1.jar
/usr/share/hadoop/lib/commons-io-2.1.jar
/usr/share/hadoop/lib/commons-lang-2.4.jar
/usr/share/hadoop/lib/commons-logging-1.1.1.jar
/usr/share/hadoop/lib/commons-logging-api-1.0.4.jar
/usr/share/hadoop/lib/commons-math-2.1.jar
/usr/share/hadoop/lib/commons-net-3.1.jar
/usr/share/hadoop/lib/core-3.1.1.jar
/usr/share/hadoop/lib/hadoop-capacity-scheduler-1.1.2.jar
/usr/share/hadoop/lib/hadoop-fairscheduler-1.1.2.jar
/usr/share/hadoop/lib/hadoop-thriftfs-1.1.2.jar
/usr/share/hadoop/lib/hsqldb-1.8.0.10.LICENSE.txt
/usr/share/hadoop/lib/hsqldb-1.8.0.10.jar
/usr/share/hadoop/lib/jackson-core-asl-1.8.8.jar
/usr/share/hadoop/lib/jackson-mapper-asl-1.8.8.jar
/usr/share/hadoop/lib/jasper-compiler-5.5.12.jar
/usr/share/hadoop/lib/jasper-runtime-5.5.12.jar
/usr/share/hadoop/lib/jdeb-0.8.jar
/usr/share/hadoop/lib/jdiff
/usr/share/hadoop/lib/jdiff/hadoop_0.17.0.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.18.1.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.18.2.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.18.3.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.19.0.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.19.1.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.19.2.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.20.1.xml
/usr/share/hadoop/lib/jdiff/hadoop_0.20.205.0.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.0.0.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.0.1.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.0.2.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.0.3.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.0.4.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.1.0.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.1.1.xml
/usr/share/hadoop/lib/jdiff/hadoop_1.1.2.xml
/usr/share/hadoop/lib/jersey-core-1.8.jar
/usr/share/hadoop/lib/jersey-json-1.8.jar
/usr/share/hadoop/lib/jersey-server-1.8.jar
/usr/share/hadoop/lib/jets3t-0.6.1.jar
/usr/share/hadoop/lib/jetty-6.1.26.jar
/usr/share/hadoop/lib/jetty-util-6.1.26.jar
/usr/share/hadoop/lib/jsch-0.1.42.jar
/usr/share/hadoop/lib/jsp-2.1
/usr/share/hadoop/lib/jsp-2.1/jsp-2.1.jar
/usr/share/hadoop/lib/jsp-2.1/jsp-api-2.1.jar
/usr/share/hadoop/lib/junit-4.5.jar
/usr/share/hadoop/lib/kfs-0.2.2.jar
/usr/share/hadoop/lib/kfs-0.2.LICENSE.txt
/usr/share/hadoop/lib/log4j-1.2.15.jar
/usr/share/hadoop/lib/mockito-all-1.8.5.jar
/usr/share/hadoop/lib/oro-2.0.8.jar
/usr/share/hadoop/lib/servlet-api-2.5-20081211.jar
/usr/share/hadoop/lib/slf4j-api-1.4.3.jar
/usr/share/hadoop/lib/slf4j-log4j12-1.4.3.jar
/usr/share/hadoop/lib/xmlenc-0.52.jar
/usr/share/hadoop/templates
/usr/share/hadoop/templates/conf
/usr/share/hadoop/templates/conf/capacity-scheduler.xml
/usr/share/hadoop/templates/conf/commons-logging.properties
/usr/share/hadoop/templates/conf/core-site.xml
/usr/share/hadoop/templates/conf/hadoop-env.sh
/usr/share/hadoop/templates/conf/hadoop-metrics2.properties
/usr/share/hadoop/templates/conf/hadoop-policy.xml
/usr/share/hadoop/templates/conf/hdfs-site.xml
/usr/share/hadoop/templates/conf/log4j.properties
/usr/share/hadoop/templates/conf/mapred-queue-acls.xml
/usr/share/hadoop/templates/conf/mapred-site.xml
/usr/share/hadoop/templates/conf/taskcontroller.cfg
/usr/share/hadoop/webapps
/usr/share/hadoop/webapps/datanode
/usr/share/hadoop/webapps/datanode/WEB-INF
/usr/share/hadoop/webapps/datanode/WEB-INF/web.xml
/usr/share/hadoop/webapps/hdfs
/usr/share/hadoop/webapps/hdfs/WEB-INF
/usr/share/hadoop/webapps/hdfs/WEB-INF/web.xml
/usr/share/hadoop/webapps/hdfs/index.html
/usr/share/hadoop/webapps/history
/usr/share/hadoop/webapps/history/WEB-INF
/usr/share/hadoop/webapps/history/WEB-INF/web.xml
/usr/share/hadoop/webapps/job
/usr/share/hadoop/webapps/job/WEB-INF
/usr/share/hadoop/webapps/job/WEB-INF/web.xml
/usr/share/hadoop/webapps/job/analysejobhistory.jsp
/usr/share/hadoop/webapps/job/gethistory.jsp
/usr/share/hadoop/webapps/job/index.html
/usr/share/hadoop/webapps/job/job_authorization_error.jsp
/usr/share/hadoop/webapps/job/jobblacklistedtrackers.jsp
/usr/share/hadoop/webapps/job/jobconf.jsp
/usr/share/hadoop/webapps/job/jobconf_history.jsp
/usr/share/hadoop/webapps/job/jobdetails.jsp
/usr/share/hadoop/webapps/job/jobdetailshistory.jsp
/usr/share/hadoop/webapps/job/jobfailures.jsp
/usr/share/hadoop/webapps/job/jobhistory.jsp
/usr/share/hadoop/webapps/job/jobhistoryhome.jsp
/usr/share/hadoop/webapps/job/jobqueue_details.jsp
/usr/share/hadoop/webapps/job/jobtasks.jsp
/usr/share/hadoop/webapps/job/jobtaskshistory.jsp
/usr/share/hadoop/webapps/job/jobtracker.jsp
/usr/share/hadoop/webapps/job/legacyjobhistory.jsp
/usr/share/hadoop/webapps/job/loadhistory.jsp
/usr/share/hadoop/webapps/job/machines.jsp
/usr/share/hadoop/webapps/job/taskdetails.jsp
/usr/share/hadoop/webapps/job/taskdetailshistory.jsp
/usr/share/hadoop/webapps/job/taskstats.jsp
/usr/share/hadoop/webapps/job/taskstatshistory.jsp
/usr/share/hadoop/webapps/secondary
/usr/share/hadoop/webapps/secondary/WEB-INF
/usr/share/hadoop/webapps/static
/usr/share/hadoop/webapps/static/hadoop-logo.jpg
/usr/share/hadoop/webapps/static/hadoop.css
/usr/share/hadoop/webapps/static/jobconf.xsl
/usr/share/hadoop/webapps/static/jobtracker.js
/usr/share/hadoop/webapps/static/sorttable.js
/usr/share/hadoop/webapps/task
/usr/share/hadoop/webapps/task/WEB-INF
/usr/share/hadoop/webapps/task/WEB-INF/web.xml
/usr/share/hadoop/webapps/task/index.html
/var/log/hadoop
/var/run/hadoop
			