Hadoop pseudo-distributed installation steps

Published: 2021-08-27 · Author: chen · Category: Development

This article walks through the steps for installing Hadoop in pseudo-distributed mode. The procedure is simple and practical; follow the steps below.

1. Extract /opt/software/hadoop-2.8.1.tar.gz
    [root@hadoop002 software]$ cd   /opt/software/
    [root@hadoop002 software]$ pwd
    /opt/software
    [root@hadoop002 software]$ tar -xzvf  hadoop-2.8.1.tar.gz
    [root@hadoop002 software]$ ll
    total 208784
    drwxr-xr-x.  6 root   root        4096 Nov 10  2015 apache-maven-3.3.9
    -rw-r--r--.  1 root   root     8617253 Apr 23 15:14 apache-maven-3.3.9-bin.zip
    drwxr-xr-x.  7 root   root        4096 Aug 21  2009 findbugs-1.3.9
    -rw-r--r--.  1 root   root     7546219 Apr 23 15:14 findbugs-1.3.9.zip
    drwxr-xr-x. 10 root   root   4096 Apr 23 16:36 hadoop-2.8.1
    -rw-r--r--.  1 root   root   194976866 Apr 23 15:40 hadoop-2.8.1.tar.gz
    drwxr-xr-x. 10 109965   5000      4096 Apr 23 15:26 protobuf-2.5.0
    -rw-r--r--.  1 root   root     2401901 Apr 23 15:15 protobuf-2.5.0.tar.gz
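
    Optionally, the download can be verified before (or after) extraction; a minimal sketch, comparing against the checksum published on the Apache download page:
    [root@hadoop002 software]$ sha256sum hadoop-2.8.1.tar.gz   # compare the output with the official checksum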

2. Create a symbolic link
    [root@hadoop002 software]$ ln -s /opt/software/hadoop-2.8.1 /opt/software/hadoop
    [root@hadoop002 software]$ ll
    total 208784
    drwxr-xr-x.  6 root   root        4096 Nov 10  2015 apache-maven-3.3.9
    -rw-r--r--.  1 root   root     8617253 Apr 23 15:14 apache-maven-3.3.9-bin.zip
    drwxr-xr-x.  7 root   root        4096 Aug 21  2009 findbugs-1.3.9
    -rw-r--r--.  1 root   root     7546219 Apr 23 15:14 findbugs-1.3.9.zip
    lrwxrwxrwx.  1 root    root   26 Apr 23 15:41 hadoop -> /opt/software/hadoop-2.8.1
    drwxr-xr-x. 10 root    root   4096 Apr 23 16:36 hadoop-2.8.1
    -rw-r--r--.  1 root   root   194976866 Apr 23 15:40 hadoop-2.8.1.tar.gz
    drwxr-xr-x. 10 109965   5000      4096 Apr 23 15:26 protobuf-2.5.0
    -rw-r--r--.  1 root   root     2401901 Apr 23 15:15 protobuf-2.5.0.tar.gz
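
    The symlink keeps HADOOP_HOME stable across upgrades: a newer release can be unpacked next to hadoop-2.8.1 and the link repointed. A sketch, where hadoop-2.8.2 is only a hypothetical newer release:
    [root@hadoop002 software]$ ln -sfn /opt/software/hadoop-2.8.2 /opt/software/hadoop   # -n replaces the link itself instead of descending into it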


3. Set environment variables
    [root@hadoop002 software]$  vi /etc/profile
    export HADOOP_HOME=/opt/software/hadoop
    export PATH=$HADOOP_HOME/bin:$FINDBUGS_HOME/bin:$PROTOC_HOME/bin:$MAVEN_HOME/bin:$MYSQL_HOME/bin:$JAVA_HOME/bin:$PATH

    # Make the environment variables take effect
    [root@hadoop002 software]$ source /etc/profile
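
    A quick check that the variables took effect (a sketch; the exact output depends on your PATH):
    [root@hadoop002 software]$ echo $HADOOP_HOME   # should print /opt/software/hadoop
    [root@hadoop002 software]$ which hadoop        # should resolve under /opt/software/hadoop/bin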


4. Change the owner and group of the files under the hadoop symlink
    [root@hadoop002 software]$ chown -R hadoop:hadoop hadoop/*

5. Change the owner and group of the hadoop-2.8.1 directory (and its contents) to hadoop
    [root@hadoop002 software]$ chown -R  hadoop:hadoop hadoop-2.8.1
    [root@hadoop002 software]$ chown -R  hadoop:hadoop hadoop-2.8.1/*
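
    To confirm the ownership change (a sketch; the listings should now show hadoop:hadoop):
    [root@hadoop002 software]$ ls -ld hadoop-2.8.1
    [root@hadoop002 software]$ ls -l hadoop-2.8.1 | head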

6. Switch to the hadoop user
    [root@hadoop002 hadoop]# su - hadoop
    [hadoop@hadoop002 ~]$ cd /opt/software/hadoop
    [hadoop@hadoop002 hadoop]$ ll
    total 148
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 bin
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 etc
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 include
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 lib
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 libexec
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 sbin
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 share

    bin:  executables
    etc:  configuration files
    sbin: shell scripts for starting and stopping the HDFS and YARN services
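
    The scripts used in the rest of this guide live in sbin; a quick look (a sketch, not the full listing):
    [hadoop@hadoop002 hadoop]$ ls sbin | grep dfs   # start-dfs.sh / stop-dfs.sh start and stop the HDFS daemons together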

7. Delete the .txt files
    [hadoop@hadoop002 hadoop]$ rm -f *.txt

8. Set up passwordless SSH (trust relationship) for the hadoop user
    [hadoop@hadoop002 hadoop]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/hadoop/.ssh'.
    Your identification has been saved in /home/hadoop/.ssh/id_rsa.
    Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
    The key fingerprint is:
    29:06:4c:1a:a4:74:b4:f4:c2:43:8a:07:68:00:fa:e4 hadoop@hadoop002 
    The key's randomart image is:
    +--[ RSA 2048]----+
    |Bo+=.         |
    |=+*=o         |
    |=.+=o.        |
    | =  o.   .      |
    |  E   o S      |
    |     . .         |
    |                 |
    |                 |
    |                 |
    +-----------------+


    [hadoop@hadoop002 hadoop]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [hadoop@hadoop002 hadoop]$ chmod 0600 ~/.ssh/authorized_keys

    [hadoop@hadoop002 hadoop]$ ssh hadoop002 date  # the first connection asks for confirmation; type yes
    The authenticity of host 'hadoop002 (192.168.90.164)' can't be established.
    RSA key fingerprint is 3a:51:6d:9b:94:d3:91:bf:fd:ab:da:0a:5b:8c:f2:6c.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop002 ,192.168.90.164' (RSA) to the list of known hosts.
    Tue May 22 10:46:32 CST 2018

    [hadoop@hadoop002 hadoop]$ ssh hadoop002 date  # no yes prompt this time, so passwordless SSH is working
    Tue May 22 10:46:37 CST 2018
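
    If ssh-copy-id is available on the system, it is an equivalent and slightly less error-prone way to install the key (a sketch; it prompts for the hadoop password once):
    [hadoop@hadoop002 hadoop]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop002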

9. Edit the configuration files
    [hadoop@hadoop002 hadoop]$ pwd
    /opt/software/hadoop
    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>


    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
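
    To confirm that Hadoop picks these values up, hdfs getconf can read keys back from the loaded configuration (a sketch; it assumes JAVA_HOME is already set in the shell):
    [hadoop@hadoop002 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS      # expect hdfs://localhost:9000
    [hadoop@hadoop002 hadoop]$ bin/hdfs getconf -confKey dfs.replication   # expect 1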


10. Format and start HDFS
    # Set JAVA_HOME for Hadoop
    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.8.0_11

    # Check the path of the hdfs command
    [hadoop@hadoop002 hadoop]$ which hdfs
    /opt/software/hadoop/bin/hdfs

    # Format the NameNode
    [hadoop@hadoop002 hadoop]$  bin/hdfs namenode -format
    ........
    18/04/19 12:53:42 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.

    # Start HDFS (the daemons here had already been started earlier, hence the "Stop it first" messages below)
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [192.168.90.164]
    192.168.90.164: namenode running as process 15421. Stop it first.
    localhost: datanode running as process 15523. Stop it first.
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: secondarynamenode running as process 15717. Stop it first.

    # Check the processes with jps
    [hadoop@hadoop002 hadoop]$ jps
    15523 DataNode
    15717 SecondaryNameNode
    15421 NameNode
    16877 Jps
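
    With all three daemons showing up in jps, a quick smoke test of HDFS (the paths here are only examples):
    [hadoop@hadoop002 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
    [hadoop@hadoop002 hadoop]$ bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/
    [hadoop@hadoop002 hadoop]$ bin/hdfs dfs -ls /user/hadoop
    The NameNode web UI should also be reachable at http://192.168.90.164:50070 (the default HTTP port in Hadoop 2.x).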


11. Change the DFS startup so that all daemons start on hadoop002
    # Startup before the change
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh 
    Starting namenodes on [192.168.90.164]
    192.168.90.164: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-localhost.localdomain.out
    localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-localhost.localdomain.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out


    Configure all daemons to start on hadoop002.
    The three processes and where their addresses come from:
    namenode:          192.168.90.164   bin/hdfs getconf -namenodes   etc/hadoop/core-site.xml
    datanode:          localhost        default slaves file           etc/hadoop/slaves   # change its contents to hadoop002
    secondarynamenode: 0.0.0.0
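
    The first two columns above can be read straight from the current configuration; a sketch of the corresponding commands:
    [hadoop@hadoop002 hadoop]$ bin/hdfs getconf -namenodes            # host the NameNode starts on
    [hadoop@hadoop002 hadoop]$ bin/hdfs getconf -secondaryNameNodes   # address the SecondaryNameNode binds to (0.0.0.0 before the change below)
    [hadoop@hadoop002 hadoop]$ cat etc/hadoop/slaves                  # hosts the DataNodes start on (localhost before the change below)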


    # Edit etc/hadoop/core-site.xml:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop002:9000</value>
        </property>
    </configuration>

    # Change the contents of the slaves file to hadoop002
    [hadoop@hadoop002 hadoop]$ vi  etc/hadoop/slaves
    #localhost
    hadoop002


    # Edit etc/hadoop/hdfs-site.xml and add:
    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>hadoop002:50090</value>    <!-- default is 0.0.0.0:50090 -->
        </property>
        <property>
            <name>dfs.namenode.secondary.https-address</name>
            <value>hadoop002:50091</value>    <!-- default is 0.0.0.0:50091 -->
        </property>
    </configuration>

    0.0.0.0 is a special address: it tells the daemon to listen on all network interfaces of the current machine.
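
    To see which addresses the daemons actually bind to, one option (a sketch; netstat may require the net-tools package):
    [hadoop@hadoop002 hadoop]$ netstat -tlnp 2>/dev/null | grep -E ':9000|:50090'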

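    Since the daemons from step 10 are still running (hence the "Stop it first" messages earlier), stop them before restarting with the new addresses:
    [hadoop@hadoop002 hadoop]$ sbin/stop-dfs.sh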
    # With the configuration updated and the old daemons stopped, start HDFS again
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [hadoop002]
    hadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop002.out
    hadoop002: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop002.out
    Starting secondary namenodes [hadoop002]
    hadoop002: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop002.out

    The installation is now complete.

