
How to Configure HDFS on Debian


Prerequisites
Before configuring HDFS on Debian, ensure the following prerequisites are met:

  • A Debian system (preferably Debian 10/11) with root or sudo access.
  • Java Development Kit (JDK) 8 or higher installed (OpenJDK is recommended):
    sudo apt update && sudo apt install -y openjdk-11-jdk
    
  • SSH service enabled and configured for passwordless login (Hadoop's start scripts use SSH to launch daemons on each node; a quick check follows this list):
    sudo apt install -y openssh-server
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
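
To confirm both prerequisites before moving on, a quick sanity check (the very first SSH connection may still ask you to confirm the host key):

    java -version                                      # should report the installed JDK, e.g. OpenJDK 11
    ssh localhost exit && echo "passwordless SSH OK"   # must complete without a password prompt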
    

Step 1: Download and Install Hadoop

  1. Download the latest stable Hadoop release from the Apache website. For example:
    wget https://downloads.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
    
  2. Extract the tarball to a dedicated directory (e.g., /usr/local/hadoop):
    sudo tar -xzvf hadoop-3.3.6.tar.gz -C /usr/local/
    sudo mv /usr/local/hadoop-3.3.6 /usr/local/hadoop
    
  3. Set directory ownership to the current user (replace your_username with your actual username), then verify the install as shown below:
    sudo chown -R your_username:your_username /usr/local/hadoop
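
To confirm the unpacked distribution works, invoke the hadoop binary directly. JAVA_HOME is not configured until Step 2, so pass it inline (adjust the JDK path to match your system):

    JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 /usr/local/hadoop/bin/hadoop version   # should print "Hadoop 3.3.6"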
    

Step 2: Configure Environment Variables

  1. Edit the ~/.bashrc file to add Hadoop-specific environment variables:
    nano ~/.bashrc
    
  2. Append the following lines to the end of the file:
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64  # Adjust to your JDK installation path (see the tip after this list)
    
  3. Apply the changes to the current session:
    source ~/.bashrc
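
If you are unsure which path to use for JAVA_HOME, it can be derived from the java binary that apt installed. A quick check after sourcing ~/.bashrc:

    readlink -f "$(which java)"   # e.g. /usr/lib/jvm/java-11-openjdk-amd64/bin/java
    echo "$JAVA_HOME"             # should match that path minus the trailing /bin/java
    hadoop version                # confirms both PATH and JAVA_HOME are picked up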
    

Step 3: Configure HDFS Core Files
All Hadoop configuration files are located in $HADOOP_HOME/etc/hadoop. Modify the following files to define HDFS behavior:

  • core-site.xml: Specifies the default file system and temporary directory.

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://namenode:9000</value>  <!-- Replace 'namenode' with the actual hostname/IP of the NameNode -->
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/var/lib/hadoop/tmp</value>  <!-- Temporary directory for Hadoop operations -->
        </property>
    </configuration>
    
  • hdfs-site.xml: Configures HDFS replication, NameNode/DataNode directories, and optional secondary NameNode settings. The directories named here are not created automatically; see the sketch after this list.

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>3</value>  <!-- Replication factor (adjust based on cluster size; 3 is standard for production) -->
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/data/hadoop/hdfs/namenode</value>  <!-- Persistent storage for NameNode metadata -->
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/data/hadoop/hdfs/datanode</value>  <!-- Storage directory for DataNode blocks -->
        </property>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>secondarynamenode:50090</value>  <!-- Optional: SecondaryNameNode address (Hadoop 3.x defaults to port 9868 if this is unset) -->
        </property>
    </configuration>
    
  • mapred-site.xml: Configures MapReduce framework (use YARN as the resource manager).

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>
    
  • yarn-site.xml: Configures YARN (Yet Another Resource Negotiator) for resource management.

    <configuration>
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>resourcemanager</value>  <!-- Replace with the actual hostname/IP of the ResourceManager -->
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>  <!-- Enables shuffle service for MapReduce -->
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
    </configuration>
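
Two preparation steps are easy to miss here. The temporary, NameNode, and DataNode directories referenced above are not created automatically, and because start-dfs.sh launches daemons over SSH (where ~/.bashrc may not be sourced), JAVA_HOME should also be set in hadoop-env.sh. A minimal sketch, assuming the paths and username placeholder used earlier:

    sudo mkdir -p /var/lib/hadoop/tmp /data/hadoop/hdfs/namenode /data/hadoop/hdfs/datanode
    sudo chown -R your_username:your_username /var/lib/hadoop /data/hadoop
    echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh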
    

Step 4: Format the NameNode
The NameNode must be formatted once before starting HDFS (this initializes the metadata storage). Run the following command on the NameNode machine:

hdfs namenode -format

Note: Formatting erases all existing HDFS data. Only run this command on a new cluster or if you need to reset the NameNode.
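
A successful format writes an initial fsimage and a VERSION file into the metadata directory configured in hdfs-site.xml. A quick check, assuming the path used above:

    cat /data/hadoop/hdfs/namenode/current/VERSION   # should list a namespaceID and clusterID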

Step 5: Start HDFS Services

  1. Start the HDFS daemons (NameNode and DataNodes) using the start-dfs.sh script (run from the NameNode):
    $HADOOP_HOME/sbin/start-dfs.sh
    
  2. Verify that the services are running by checking the process list (a cluster-wide check follows this list):
    jps
    
    You should see the following processes:
    • NameNode: Manages HDFS metadata.
    • DataNode: Stores actual data blocks.
    • SecondaryNameNode (optional): Assists with NameNode metadata management.
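
For a cluster-wide view beyond jps, hdfs dfsadmin reports capacity and the live DataNodes:

    hdfs dfsadmin -report   # lists configured capacity, DFS used, and each registered DataNode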

Step 6: Validate the Configuration

  1. Check the HDFS status using the web interface (default port: 9870 for Hadoop 3.x):
    Open a browser and navigate to http://<namenode-ip>:9870.
  2. Run basic HDFS commands to verify functionality (cleanup is shown after this list):
    hdfs dfs -mkdir -p /test  # Create a test directory
    hdfs dfs -put /path/to/local/file.txt /test  # Upload a local file to HDFS
    hdfs dfs -ls /test  # List contents of the test directory
    hdfs dfs -cat /test/file.txt  # View the uploaded file
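
If the -cat output matches the local file, both the write and read paths work; the test directory can then be removed:

    hdfs dfs -rm -r /test   # add -skipTrash to delete immediately rather than moving to trash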
    

Troubleshooting Tips

  • Permission Denied: Ensure the Hadoop directories (e.g., /data/hadoop/hdfs/namenode) have the correct ownership (hadoop:hadoop or your username).
  • Port Conflicts: Verify that ports such as 9000 (NameNode RPC, per fs.defaultFS), 9870 (NameNode web UI in Hadoop 3.x), and 50090 (SecondaryNameNode, as configured above) are not blocked by the firewall.
  • Java Issues: Confirm JAVA_HOME is set correctly in hadoop-env.sh (located in $HADOOP_HOME/etc/hadoop).
  • NameNode Not Starting: Check the NameNode logs (located in $HADOOP_HOME/logs) for errors; common issues include formatting errors or an incorrect fs.defaultFS value. A log-tailing sketch follows this list.
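
Daemon log files follow the pattern hadoop-<user>-<daemon>-<hostname>.log; a sketch for inspecting the NameNode log (the globs stand in for your user and hostname):

    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log
    grep -iE "error|exception" $HADOOP_HOME/logs/hadoop-*-namenode-*.log | tail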
