
A high-availability cluster based on ZooKeeper

Published: 2020-07-23 22:43:53  Source: Web  Reads: 371  Author: 素颜猪  Category: Big Data

1. Prepare the ZooKeeper servers

#node1,node2,node3
#For installation steps, see http://suyanzhu.blog.51cto.com/8050189/1946580
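
For reference, a minimal zoo.cfg for this three-node quorum might look like the sketch below; the dataDir path is an assumption, not taken from the original setup. Each server also needs a myid file under dataDir containing its own id (1, 2, or 3).

#conf/zoo.cfg (illustrative sketch; dataDir is an assumed path)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888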


2. Prepare the NameNode nodes

#node1,node4


3. Prepare the JournalNode nodes

#node2,node3,node4


4. Prepare the DataNode nodes

#node2,node3,node4
#Command to start a DataNode: hadoop-daemon.sh start datanode
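
Note that once the slaves file from step 7 is in place, start-dfs.sh will start the DataNodes for you; the manual command above is mainly for bringing up a single node. A quick liveness check (a generic check, not from the original post):

jps
#The output on node2, node3 and node4 should include a DataNode process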


5. Edit Hadoop's hdfs-site.xml configuration file

<configuration>
        <property>
                <name>dfs.nameservices</name>
                <value>yunshuocluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.yunshuocluster</name>
                <value>nn1,nn2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn1</name>
                <value>node1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn2</name>
                <value>node4:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn1</name>
                <value>node1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn2</name>
                <value>node4:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://node2:8485;node3:8485;node4:8485/yunshuocluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.yunshuocluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_dsa</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/opt/journalnode/</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
</configuration>
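
The sshfence method only works if each NameNode can SSH to the other without a password using the private key named in dfs.ha.fencing.ssh.private-key-files. A minimal setup sketch, assuming root-to-root SSH between the two NameNodes is acceptable in your environment:

#Run on each NameNode (node1 and node4)
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node4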


6. Edit Hadoop's core-site.xml configuration file

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yunshuocluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.5</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
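
After editing, you can sanity-check that clients will resolve the logical nameservice rather than a single host (a generic verification, not part of the original steps):

hdfs getconf -confKey fs.defaultFS
#Expected output: hdfs://yunshuocluster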


7. Configure the slaves file

node2
node3
node4
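
These configuration files must be identical on every node. One way to push them out, assuming Hadoop is installed under /home/hadoop-2.5.1 as the log path in step 10 suggests:

#Push the edited configs from node1 to the other nodes (path is an assumption)
for n in node2 node3 node4; do
    scp /home/hadoop-2.5.1/etc/hadoop/hdfs-site.xml \
        /home/hadoop-2.5.1/etc/hadoop/core-site.xml \
        /home/hadoop-2.5.1/etc/hadoop/slaves \
        root@$n:/home/hadoop-2.5.1/etc/hadoop/
done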


8. Start ZooKeeper (node1, node2, node3)

zkServer.sh start
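
After starting all three, zkServer.sh status confirms that the quorum has formed:

zkServer.sh status
#One node should report "Mode: leader", the other two "Mode: follower"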


9. Start the JournalNodes (run the following command on node2, node3, and node4)

#Start command (the matching stop command is hadoop-daemon.sh stop journalnode)
hadoop-daemon.sh start journalnode


10. Check the JournalNodes by inspecting their logs

cd /home/hadoop-2.5.1/logs
ls
tail -200 hadoop-root-journalnode-node2.log
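
Besides the logs, you can confirm each JournalNode is listening on its RPC port (a generic check; 8485 is the port used in dfs.namenode.shared.edits.dir):

netstat -tlnp | grep 8485
#Each of node2, node3 and node4 should show a listening java process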


11. Format the NameNode (only one of the two; here, node4 is formatted)

hdfs namenode -format

cd /opt/hadoop-2.5
#Copy the formatted metadata from node4 to node1 so the two NameNodes are in sync
scp -r /opt/hadoop-2.5/* root@node1:/opt/hadoop-2.5/
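
The scp approach copies the entire metadata directory by hand. An alternative sketch using Hadoop's built-in bootstrap command, assuming the freshly formatted NameNode on node4 has been started first:

#On node4, start the formatted NameNode
hadoop-daemon.sh start namenode
#On node1, pull the metadata from the running NameNode
hdfs namenode -bootstrapStandby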


12. Initialize ZKFC

hdfs zkfc -formatZK
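
This creates the HA parent znode in ZooKeeper. You can verify it with zkCli.sh (a verification step added here, not in the original post):

zkCli.sh -server node1:2181
#Inside the zkCli shell:
ls /hadoop-ha
#Expected output: [yunshuocluster]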


13. Start the services

start-dfs.sh
#stop-dfs.sh stops the services
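
Because dfs.ha.automatic-failover.enabled is true, start-dfs.sh also launches the ZKFC daemons on both NameNodes. To confirm HA is working (nn1 and nn2 are the IDs defined in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
#One NameNode should report "active", the other "standby"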

