
NameNode HA with ZooKeeper


zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

clientPort=2181

dataDir=/home/tim/zkdata

server.1=tim-dn1:2888:3888

server.2=tim-dn2:2888:3888

server.3=tim-dn3:2888:3888
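
Each server.N line above needs a matching myid file under dataDir on that host before ZooKeeper will start as an ensemble member. A minimal sketch for tim-dn1 (assuming the dataDir above and that the ZooKeeper bin directory is on the PATH; use 2 on tim-dn2 and 3 on tim-dn3):

tim@tim-dn1:~$ mkdir -p /home/tim/zkdata
tim@tim-dn1:~$ echo 1 > /home/tim/zkdata/myid
tim@tim-dn1:~$ zkServer.sh start
tim@tim-dn1:~$ zkServer.sh status

Once all three nodes are up, zkServer.sh status should report one leader and two followers.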




core-site.xml

<configuration>

   <!--

   <property>

      <name>fs.defaultFS</name>

      <value>hdfs://tim-nn:9000</value> 

   </property>

   -->

  <property>

  <name>fs.defaultFS</name>

  <value>hdfs://mycluster</value>

  </property>

</configuration>
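
With fs.defaultFS pointing at the logical nameservice mycluster instead of a single host:port, clients resolve the active NameNode through the failover proxy provider configured in hdfs-site.xml below. A quick sanity check once the file is in place (a sketch, prompt assumed):

tim@tim-nn:~$ hdfs getconf -confKey fs.defaultFS

This should print hdfs://mycluster.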


hdfs-site.xml

tim@tim-dn2:~/hadoop/etc/hadoop$ cat hdfs-site.xml 

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at


    http://www.apache.org/licenses/LICENSE-2.0


  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->


<!-- Put site-specific property overrides in this file. -->


<configuration>

<property>

  <name>dfs.replication</name>

  <value>3</value>

</property>

<property>

  <name>dfs.nameservices</name>

  <value>mycluster</value>

</property>

<property>

  <name>dfs.ha.namenodes.mycluster</name>

  <value>nn1,nn2</value>

</property>

<property>

  <name>dfs.namenode.rpc-address.mycluster.nn1</name>

  <value>tim-nn:8020</value>

</property>

<property>

  <name>dfs.namenode.rpc-address.mycluster.nn2</name>

  <value>tim-2nn:8020</value>

</property>

<property>

  <name>dfs.namenode.http-address.mycluster.nn1</name>

  <value>tim-nn:50070</value>

</property>

<property>

  <name>dfs.namenode.http-address.mycluster.nn2</name>

  <value>tim-2nn:50070</value>

</property>

<property>

  <name>dfs.namenode.shared.edits.dir</name>

  <value>qjournal://tim-nn:8485;tim-2nn:8485/mycluster</value>

</property>

<property>

  <name>dfs.client.failover.proxy.provider.mycluster</name>

  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

</property>

<property>

  <name>dfs.journalnode.edits.dir</name>

  <value>/home/tim/hadoop/journal</value>

</property>

</configuration>
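
The qjournal URI above expects a JournalNode on tim-nn and tim-2nn, writing edits under /home/tim/hadoop/journal. A minimal sketch of bringing them up and checking NameNode state later (the command names assume Hadoop 2.x, which the 50070 HTTP ports suggest; on 3.x the JournalNode is started with hdfs --daemon start journalnode instead):

tim@tim-nn:~$ hadoop-daemon.sh start journalnode
tim@tim-2nn:~$ hadoop-daemon.sh start journalnode

tim@tim-nn:~$ hdfs haadmin -getServiceState nn1
tim@tim-nn:~$ hdfs haadmin -getServiceState nn2

hdfs haadmin -getServiceState reports active or standby for each NameNode once HDFS is running again.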

After stopping the HDFS cluster:

Enable automatic failover. In hdfs-site.xml, add:
 <property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
 </property>

This specifies that the cluster should be set up for automatic failover. In your core-site.xml file, add:

 <property>
   <name>ha.zookeeper.quorum</name>
   <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
 </property>



tim@tim-dn2:~/hadoop/etc/hadoop$ nano core-site.xml 




<!-- Put site-specific property overrides in this file. -->


<configuration>

   <!--

   <property>

      <name>fs.defaultFS</name>

      <value>hdfs://tim-nn:9000</value> 

   </property>

   -->

  <property>

  <name>fs.defaultFS</name>

  <value>hdfs://mycluster</value>

  </property>

<property>

   <name>ha.zookeeper.quorum</name>

   <value>tim-dn1:2181,tim-dn2:2181,tim-dn3:2181</value>

 </property>

</configuration>
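
Both modified files have to be identical on every node. One way to push them out from the host they were edited on (a sketch using the hostnames of this cluster, assuming passwordless ssh and the same paths everywhere):

tim@tim-dn2:~/hadoop/etc/hadoop$ for h in tim-nn tim-2nn tim-dn1 tim-dn3; do scp core-site.xml hdfs-site.xml $h:~/hadoop/etc/hadoop/; done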




After starting the ZooKeeper cluster:

tim@tim-nn:~$ hdfs zkfc -formatZK

This creates the /hadoop-ha/mycluster znode in the ZooKeeper ensemble.
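
With automatic failover enabled, start-dfs.sh also starts a ZKFC daemon on each NameNode host (note that the upstream HA docs additionally require dfs.ha.fencing.methods to be set, which is not shown in the listings above). A rough sketch of bringing the cluster back up and verifying the result (assuming the ZooKeeper client script is available on tim-dn1):

tim@tim-nn:~$ start-dfs.sh
tim@tim-nn:~$ hdfs haadmin -getServiceState nn1
tim@tim-nn:~$ hdfs haadmin -getServiceState nn2

tim@tim-dn1:~$ zkCli.sh -server tim-dn1:2181

One NameNode should report active and the other standby, and inside the zkCli shell, ls /hadoop-ha should list mycluster.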
