Configuring high availability (HA) for Hadoop on Ubuntu involves several steps, including setting up multiple NameNodes, configuring the ZooKeeper Failover Controller (ZKFC), and ensuring data consistency and reliability. The following is a basic guide:
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
</configuration>
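Once the configuration files are in place on a node, a quick sanity check is to ask the HDFS client which filesystem URI it resolves (hdfs getconf reads the local configuration):
hdfs getconf -confKey fs.defaultFS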
hdfs-site.xml:
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>namenode1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>namenode2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>namenode1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>namenode2:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/path/to/ssh/key</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/path/to/journalnode/data</value>
</property>
</configuration>
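For the ZKFC-driven automatic failover used in the steps below to actually take effect, automatic failover must also be enabled; a minimal addition inside the same <configuration> element of hdfs-site.xml:
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>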
yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarn-cluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>resourcemanager1</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>resourcemanager2</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>resourcemanager1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>resourcemanager2:8032</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>resourcemanager1:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>resourcemanager2:8088</value>
</property>
</configuration>
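ResourceManager HA is usually paired with state recovery so that running applications survive a failover. As an optional extra, the standard properties for ZooKeeper-backed recovery can be added to the same <configuration> element (verify the values against your Hadoop version):
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>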
On one of the NameNodes, initialize the HA state in ZooKeeper (this only needs to be run once, and the ZooKeeper ensemble must already be running):
hdfs zkfc -formatZK
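If the ZooKeeper CLI is available, the znode created by the command above can be inspected (non-interactive invocation assumes ZooKeeper 3.5 or newer; /hadoop-ha is the default parent path):
zkCli.sh -server zk1:2181 ls /hadoop-ha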
On each JournalNode host, start the JournalNode daemon (the JournalNodes must be running before the NameNodes are formatted or started):
hdfs --daemon start journalnode
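A quick way to confirm the daemon is up on each JournalNode host (jps ships with the JDK):
jps | grep JournalNode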
On the first NameNode, format HDFS if this is a new cluster, then start the NameNode:
hdfs namenode -format
hdfs --daemon start namenode
On the second NameNode, copy the metadata from the first NameNode and start it as the standby:
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode
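Automatic failover also requires the ZKFC daemon to be running on each NameNode host:
hdfs --daemon start zkfc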
On the first ResourceManager host, start the ResourceManager:
yarn --daemon start resourcemanager
On the second ResourceManager host, start the standby ResourceManager the same way:
yarn --daemon start resourcemanager
Use the hdfs haadmin command to verify that the HA configuration is correct:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
One NameNode should report active and the other standby.
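The ResourceManagers can be checked the same way with yarn rmadmin, using the rm1/rm2 IDs configured above:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2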
Manually stop the active NameNode or ResourceManager and verify that the standby node takes over automatically.
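One way to simulate a failure is to stop the active NameNode and then re-check the service states, e.g. if nn1 was active (run the stop command on the nn1 host):
hdfs --daemon stop namenode
hdfs haadmin -getServiceState nn2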
With these steps, you should have a working highly available Hadoop deployment on Ubuntu. Adjust the hostnames, ports, and paths to match your environment.