Hadoop HA (High Availability)

Tags: Hadoop, ZooKeeper, HDFS, YARN, HA

1. HA Overview

(1) HA stands for High Availability: the service stays up 7×24, without interruption.

(2) The key strategy for achieving high availability is eliminating single points of failure. Strictly speaking, HA is implemented per component: HDFS HA and YARN HA.

(3) The NameNode affects the availability of an HDFS cluster in two main ways:

  • If the NameNode machine fails unexpectedly (e.g. crashes), the cluster is unusable until an administrator restarts it
  • If the NameNode machine needs a software or hardware upgrade, the cluster is likewise unusable during the maintenance window

HDFS HA solves both problems by configuring multiple NameNodes (Active/Standby) as hot backups within the cluster. If a failure occurs, such as a machine crash, or a machine has to go down for upgrade or maintenance, the active NameNode role can be switched to another machine very quickly.

2. Setting Up an HDFS-HA Cluster

Current HDFS cluster layout:

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode |  | SecondaryNameNode |
| DataNode | DataNode | DataNode |

The main purpose of HA is to eliminate the NameNode single point of failure, so the HDFS cluster needs to be re-planned as follows:

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode | NameNode | NameNode |
| DataNode | DataNode | DataNode |

2.1 Core Questions for HDFS-HA

(1) How do we keep the data on the three NameNodes consistent?
  a. Fsimage: let one NN generate it and have the other NNs sync from it
  b. Edits: introduce a new component, the JournalNode, to keep the edits files consistent

(2) How do we ensure that only one NN is active at a time, with all the others standby?
  a. Manual assignment
  b. Automatic assignment

(3) The 2NN (SecondaryNameNode) does not exist in the HA architecture, so who does its job of periodically merging fsimage and edits?
  The standby NNs take over this work.

(4) If the active NN really does fail, how do the other NNs take over?
  a. Manual failover
  b. Automatic failover

3. HDFS-HA Manual Mode

3.1 Environment Preparation

  1. Configure IPs
  2. Set hostnames and the hostname-to-IP mappings
  3. Disable the firewall
  4. Set up passwordless ssh login (a sketch follows below)
  5. Install the JDK and configure environment variables
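
For step 4, here is a minimal sketch of passwordless ssh setup, assuming the mhk user and the three hostnames used throughout this guide:

```shell
# Run as mhk on each of the three nodes.
# Generate a key pair if one does not exist yet (empty passphrase for convenience).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node, including the local one.
for host in hadoop102 hadoop103 hadoop104; do
    ssh-copy-id "mhk@${host}"
done
```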

3.2 Cluster Layout

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode | NameNode | NameNode |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |

3.3 Configuring the HDFS-HA Cluster

(1) Official documentation: https://hadoop.apache.org/

(2) Create an ha directory under /opt

[mhk@hadoop102 ~]$ cd /opt/
[mhk@hadoop102 opt]$ sudo mkdir ha
[mhk@hadoop102 opt]$ sudo chown mhk:mhk /opt/ha/
[mhk@hadoop102 opt]$ ll
total 0
drwxr-xr-x. 2 mhk mhk   6 Jan 12 11:48 ha
drwxr-xr-x. 5 mhk mhk  69 Jan 10 16:15 module
drwxr-xr-x. 2 mhk mhk 108 Jan 10 16:12 software

(3) Copy hadoop-3.1.3 from /opt/module/ to /opt/ha (remember to delete the data and logs directories first)

[mhk@hadoop102 opt]$ cp -r /opt/module/hadoop-3.1.3 /opt/ha/
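
A concrete sketch of the cleanup mentioned above (the paths assume the copy command just shown; Hadoop's log directory is named logs by default):

```shell
# Remove stale metadata and logs so the HA cluster starts from a clean slate.
rm -rf /opt/ha/hadoop-3.1.3/data /opt/ha/hadoop-3.1.3/logs
```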

(4) Configure core-site.xml

<!-- Assemble the addresses of the multiple NameNodes into one logical cluster, mycluster -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Directory where Hadoop stores the files it generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/ha/hadoop-3.1.3/data</value>
</property>

(5) Configure hdfs-site.xml

<!-- NameNode data storage directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file://${hadoop.tmp.dir}/name</value>
</property>
<!-- DataNode data storage directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file://${hadoop.tmp.dir}/data</value>
</property>
<!-- JournalNode data storage directory -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>${hadoop.tmp.dir}/jn</value>
</property>
<!-- Name of the fully distributed cluster (nameservice) -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- The NameNodes in the cluster -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2,nn3</value>
</property>
<!-- RPC addresses of the NameNodes -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>hadoop102:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hadoop103:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn3</name>
<value>hadoop104:8020</value>
</property>
<!-- HTTP addresses of the NameNodes -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>hadoop102:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hadoop103:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn3</name>
<value>hadoop104:9870</value>
</property>
<!-- Location of the NameNode shared edits on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
</property>
<!-- Failover proxy provider: how clients determine which NameNode is active -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method, so that only one server responds to requests at a time -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- The sshfence mechanism requires passwordless ssh with a private key -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/mhk/.ssh/id_rsa</value>
</property>
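
Once HADOOP_HOME points at /opt/ha (see section 3.4 below), the stock hdfs getconf tool offers a quick sanity check that the nameservice and NameNode list were picked up; a sketch, with expected values in comments:

```shell
hdfs getconf -confKey dfs.nameservices   # expect: mycluster
hdfs getconf -namenodes                  # expect: hadoop102 hadoop103 hadoop104
```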

(6) Distribute the configured Hadoop environment to the other nodes

[mhk@hadoop102 opt]$ xsync ha/hadoop-3.1.3/

3.4 Starting the HDFS-HA Cluster

(1) Point the HADOOP_HOME environment variable at the HA directory (on all three machines)

[mhk@hadoop102 ~]$ sudo vim /etc/profile.d/my_env.sh

Change the HADOOP_HOME section to the following:

#HADOOP_HOME
export HADOOP_HOME=/opt/ha/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

Then go to all three machines and source the environment variables, as sketched below.
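
A minimal sketch of that step, plus a check that the shell now resolves Hadoop from the HA directory:

```shell
source /etc/profile.d/my_env.sh
which hadoop   # should print a path under /opt/ha/hadoop-3.1.3
```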

(2) On each JournalNode node, start the journalnode service:

[mhk@hadoop102 opt]$ hdfs --daemon start journalnode
[mhk@hadoop103 opt]$ hdfs --daemon start journalnode
[mhk@hadoop104 opt]$ hdfs --daemon start journalnode

Note: the JournalNodes must be started first, otherwise the NameNode will not come up. I learned this the hard way:

2022-01-12 14:29:10,430 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 successful responses:
10.211.55.13:8485: false
10.211.55.12:8485: false
10.211.55.11:8485: Call From hadoop102/10.211.55.11 to hadoop102:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:287)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:282)
	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:209)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1198)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1645)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1755)
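
Before formatting, it can save time to confirm that each JournalNode is actually up and listening on its default port 8485; a sketch (ss may be netstat on older distributions):

```shell
jps | grep JournalNode   # the JournalNode JVM should be listed
ss -tlnp | grep 8485     # port 8485 should be in LISTEN state
```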

(3) On nn1, format the NameNode and start it:

[mhk@hadoop102 hadoop-3.1.3]$ hdfs namenode -format
[mhk@hadoop102 hadoop-3.1.3]$ hdfs --daemon start namenode

(4) On nn2 and nn3, synchronize the metadata from nn1:

[mhk@hadoop103 hadoop-3.1.3]$ hdfs namenode -bootstrapStandby
[mhk@hadoop104 hadoop-3.1.3]$ hdfs namenode -bootstrapStandby

(5) Start nn2 and nn3:

[mhk@hadoop103 hadoop-3.1.3]$ hdfs --daemon start namenode
[mhk@hadoop104 hadoop-3.1.3]$ hdfs --daemon start namenode

(6) Check the web UI of each NameNode (screenshots omitted); at this point all three report standby.

(7) Start the DataNodes on all nodes:

[mhk@hadoop102 hadoop-3.1.3]$ hdfs --daemon start datanode
[mhk@hadoop103 hadoop-3.1.3]$ hdfs --daemon start datanode
[mhk@hadoop104 hadoop-3.1.3]$ hdfs --daemon start datanode

(8) Switch nn1 to active:

[mhk@hadoop102 hadoop-3.1.3]$ hdfs haadmin -transitionToActive nn1

(9) Verify that it is active:

[mhk@hadoop102 hadoop-3.1.3]$ hdfs haadmin -getServiceState nn1
active


3.5 A Problem with Manual Mode

What happens when the NameNode on hadoop102 dies? Let's test it:

[mhk@hadoop102 hadoop-3.1.3]$ jps
14064 DataNode
13443 NameNode
14391 Jps
13292 JournalNode
[mhk@hadoop102 hadoop-3.1.3]$ kill -9 13443
[mhk@hadoop102 hadoop-3.1.3]$ jps
14064 DataNode
14401 Jps
13292 JournalNode

On the web UI, hadoop102's NameNode page is no longer reachable.

At this point, hadoop103 and hadoop104 are still standby:

[mhk@hadoop103 hadoop-3.1.3]$ hdfs haadmin -getServiceState nn2
standby
[mhk@hadoop104 hadoop-3.1.3]$ hdfs haadmin -getServiceState nn3
standby
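
As an aside, this Hadoop version also offers a single command that queries every NameNode at once, printing each NN's address with its current state:

```shell
[mhk@hadoop103 hadoop-3.1.3]$ hdfs haadmin -getAllServiceState
```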

In other words, when hadoop102 (the active node) goes down, the two standby nodes 103 and 104 do not become active on their own. Let's try to manually promote hadoop103 to active (with 102 still down):

[mhk@hadoop103 hadoop-3.1.3]$ hdfs haadmin -transitionToActive nn2
2022-01-12 15:50:01,222 INFO ipc.Client: Retrying connect to server: hadoop102/10.211.55.11:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Unexpected error occurred  Call From hadoop103/10.211.55.12 to hadoop102:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Usage: haadmin [-ns <nameserviceId>] [-transitionToActive [--forceactive] <serviceId>]

It fails: before promoting nn2, haadmin contacts the other NameNodes to make sure none of them is still active, and with nn1 unreachable it refuses the transition (the check can be bypassed with --forceactive, at the risk of split-brain). Now let's restart the NameNode on hadoop102 and try again to promote hadoop103:

[mhk@hadoop102 hadoop-3.1.3]$ hdfs --daemon start namenode
[mhk@hadoop102 hadoop-3.1.3]$ jps
14064 DataNode
14496 Jps
13292 JournalNode
14462 NameNode


hadoop102 now shows standby. Try again to promote hadoop103:

[mhk@hadoop103 hadoop-3.1.3]$ hdfs haadmin -transitionToActive nn2
[mhk@hadoop103 hadoop-3.1.3]$ hdfs haadmin -getServiceState nn2
active

This time it succeeds!

So the conclusion is: in manual mode, when a node dies, you cannot promote another machine to active until the dead node has been brought back up. That defeats the whole point of high availability, so automatic mode is needed.

4. HDFS-HA Automatic Mode

4.1 How HDFS-HA Automatic Failover Works

Automatic failover adds two new components to an HDFS deployment: ZooKeeper and the ZKFailoverController (ZKFC) process. ZooKeeper is a highly available service that maintains a small amount of coordination data, notifies clients when that data changes, and monitors clients for failures.


  1. Failure detection: each NameNode in the cluster maintains a persistent session in ZooKeeper. If the machine crashes, the session expires, and ZooKeeper notifies the other NameNodes that a failover should be triggered.

  2. Active NameNode election: ZooKeeper provides a simple mechanism to exclusively elect one node as active. If the current active NameNode crashes, another node can acquire a special exclusive lock in ZooKeeper indicating that it should become the next active NameNode.

ZKFC, the other new component in automatic failover, is a ZooKeeper client that also monitors and manages the state of the NameNode. Every host that runs a NameNode also runs a ZKFC process, which is responsible for:

  1. Health monitoring: ZKFC periodically pings its co-located NameNode with a health-check command. As long as the NameNode responds promptly with a healthy status, ZKFC considers it healthy. If the node crashes, freezes, or otherwise enters an unhealthy state, the health monitor marks it unhealthy.
  2. ZooKeeper session management: while the local NameNode is healthy, ZKFC keeps a session open in ZooKeeper. If the local NameNode is active, ZKFC also holds a special lock znode. The lock uses ZooKeeper's support for ephemeral nodes, so if the session expires, the lock znode is deleted automatically.
  3. ZooKeeper-based election: if the local NameNode is healthy and ZKFC sees that no other node currently holds the lock znode, it tries to acquire the lock itself. If it succeeds, it has won the election and runs a failover to make its local NameNode active. The failover process is similar to the manual failover described above: first the previous active NameNode is fenced if necessary, then the local NameNode transitions to active.
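
These znodes can be inspected directly with the stock zkCli.sh once the cluster is running. A sketch (paths assume the mycluster nameservice configured earlier; ActiveStandbyElectorLock is the ephemeral lock, ActiveBreadCrumb records the last active NN for fencing):

```shell
zkCli.sh -server hadoop102:2181 <<'EOF'
ls /hadoop-ha/mycluster
get /hadoop-ha/mycluster/ActiveBreadCrumb
EOF
```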

4.2 Cluster Layout for HDFS-HA Automatic Failover

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode | NameNode | NameNode |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| ZooKeeper | ZooKeeper | ZooKeeper |
| ZKFC | ZKFC | ZKFC |

4.3 Configuring HDFS-HA Automatic Failover

4.3.1 Configuration

(1) Add to hdfs-site.xml:

<!-- Enable automatic failover for the NameNodes -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>

(2) Add to core-site.xml:

<!-- ZooKeeper ensemble that zkfc connects to -->
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
	</property>

(3) Distribute the configuration files after the changes:

[mhk@hadoop102 etc]$ xsync hadoop/
==================== hadoop102 ====================
sending incremental file list

sent 1,066 bytes  received 18 bytes  722.67 bytes/sec
total size is 111,246  speedup is 102.63
==================== hadoop103 ====================
sending incremental file list
hadoop/
hadoop/core-site.xml
hadoop/hdfs-site.xml

sent 1,994 bytes  received 101 bytes  1,396.67 bytes/sec
total size is 111,246  speedup is 53.10
==================== hadoop104 ====================
sending incremental file list
hadoop/
hadoop/core-site.xml
hadoop/hdfs-site.xml

sent 1,994 bytes  received 101 bytes  1,396.67 bytes/sec
total size is 111,246  speedup is 53.10

4.3.2 Startup

(1) Stop all HDFS services:

[mhk@hadoop102 etc]$ stop-dfs.sh

(2) Start the ZooKeeper cluster:

[mhk@hadoop102 zookeeper-3.5.7]$ zk.sh start
---------- starting zookeeper on hadoop102 ------------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- starting zookeeper on hadoop103 ------------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- starting zookeeper on hadoop104 ------------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[mhk@hadoop102 zookeeper-3.5.7]$ jpsall
=============== hadoop102 ===============
16081 QuorumPeerMain
=============== hadoop103 ===============
12780 QuorumPeerMain
=============== hadoop104 ===============
12303 QuorumPeerMain

(3) After ZooKeeper is started, initialize the HA state in ZooKeeper:

[mhk@hadoop102 zookeeper-3.5.7]$ hdfs zkfc -formatZK
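
To verify, ZooKeeper should now contain a /hadoop-ha/mycluster znode; a sketch (zkCli.sh in ZooKeeper 3.5.x can run a single command given on the command line):

```shell
zkCli.sh -server hadoop102:2181 ls /hadoop-ha   # expect: [mycluster]
```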

(4) Start the HDFS services:

[mhk@hadoop102 zookeeper-3.5.7]$ start-dfs.sh
Starting namenodes on [hadoop102 hadoop103 hadoop104]
Starting datanodes
Starting journal nodes [hadoop102 hadoop103 hadoop104]
Starting ZK Failover Controllers on NN hosts [hadoop102 hadoop103 hadoop104]

[mhk@hadoop102 zookeeper-3.5.7]$ jpsall
=============== hadoop102 ===============
16368 NameNode
16081 QuorumPeerMain
16471 DataNode
16904 DFSZKFailoverController
16702 JournalNode
=============== hadoop103 ===============
13200 DFSZKFailoverController
12897 NameNode
12963 DataNode
13076 JournalNode
12780 QuorumPeerMain
=============== hadoop104 ===============
12412 NameNode
12717 DFSZKFailoverController
12478 DataNode
12303 QuorumPeerMain
12591 JournalNode

(5) You can open the zkCli.sh client and inspect the contents of the NameNode election lock znode:

[zk: localhost:2181(CONNECTED) 0] get -s /hadoop-ha/mycluster/ActiveStandbyElectorLock

	myclusternn2	hadoop103 �>(�>
cZxid = 0x1b00000007
ctime = Wed Jan 12 17:18:16 CST 2022
mZxid = 0x1b00000007
mtime = Wed Jan 12 17:18:16 CST 2022
pZxid = 0x1b00000007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x3000a8a7ca20000
dataLength = 33
numChildren = 0

4.3.3 Verification

(1) Kill the active NameNode process and watch the states of the three NameNodes change on their web pages:

[mhk@hadoop103 ~]$ kill -9 12897
[mhk@hadoop103 ~]$ jps
13200 DFSZKFailoverController
12963 DataNode
13076 JournalNode
12780 QuorumPeerMain
13342 Jps

hadoop103's NameNode web UI is no longer reachable.


hadoop102 is still standby.

hadoop104 has been promoted to active.

Now start the NameNode on hadoop103 by itself and observe its state on the web page:

[mhk@hadoop103 ~]$ hdfs --daemon start namenode

Its state becomes standby.

5. YARN-HA Configuration

5.1 How YARN-HA Works

5.1.1 Official Documentation

https://hadoop.apache.org/docs/r3.1.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

5.1.2 The YARN-HA Cluster

YARN's single point of failure is the ResourceManager. The YARN HA architecture is similar to HDFS HA: multiple ResourceManagers (three in this setup) are started, they register with the ZooKeeper cluster, and ZooKeeper manages their state (active or standby) and performs automatic failover.


(1) Environment preparation

  1. Configure IPs
  2. Set hostnames and the hostname-to-IP mappings
  3. Disable the firewall
  4. Set up passwordless ssh login
  5. Install the JDK and configure environment variables

(2) Cluster layout

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| ResourceManager | ResourceManager | ResourceManager |
| NodeManager | NodeManager | NodeManager |
| ZooKeeper | ZooKeeper | ZooKeeper |

(3) Core questions

a. If the current active RM dies, how does a standby RM take over? The core mechanism is the same as in HDFS HA: it relies on ZooKeeper's ephemeral znodes.

b. The failed RM may have many applications waiting to run; how does another RM take them over and keep them running? The RM stores the state of all running applications in ZooKeeper; after another RM takes over, it reads that state back and picks up where things left off.
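
By default that state lives under /rmstore in ZooKeeper (the parent path is governed by yarn.resourcemanager.zk-state-store.parent-path). A sketch of peeking at it once YARN is up:

```shell
zkCli.sh -server hadoop102:2181 ls /rmstore/ZKRMStateRoot
```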

(4) Configuration: yarn-site.xml

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable resourcemanager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Declare the id of the resourcemanager cluster -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster-yarn1</value>
</property>
<!-- Logical ids of the resourcemanagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2,rm3</value>
</property>
<!-- ========== rm1 configuration ========== -->
<!-- Hostname of rm1 -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop102</value>
</property>
<!-- Web UI address of rm1 -->
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>hadoop102:8088</value>
</property>
<!-- Internal communication address of rm1 -->
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>hadoop102:8032</value>
</property>
<!-- Address AMs use to request resources from rm1 -->
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name> 
<value>hadoop102:8030</value>
</property>
<!-- Address NMs connect to on rm1 -->
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
<value>hadoop102:8031</value>
</property>
<!-- ========== rm2 configuration ========== -->
<!-- Hostname of rm2 -->
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop103</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>hadoop103:8088</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>hadoop103:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>hadoop103:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
<value>hadoop103:8031</value>
</property>
<!-- ========== rm3 configuration ========== -->
<!-- Hostname of rm3 -->
<property>
<name>yarn.resourcemanager.hostname.rm3</name>
<value>hadoop104</value>
</property>
<!-- Web UI address of rm3 -->
<property>
<name>yarn.resourcemanager.webapp.address.rm3</name>
<value>hadoop104:8088</value>
</property>
<!-- Internal communication address of rm3 -->
<property>
<name>yarn.resourcemanager.address.rm3</name>
<value>hadoop104:8032</value>
</property>
<!-- Address AMs use to request resources from rm3 -->
<property>
<name>yarn.resourcemanager.scheduler.address.rm3</name> 
<value>hadoop104:8030</value>
</property>
<!-- Address NMs connect to on rm3 -->
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm3</name>
<value>hadoop104:8031</value>
</property>
<!-- ZooKeeper cluster address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>
<!-- Enable automatic recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store resourcemanager state information in the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name> 
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Environment variable inheritance -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>

Distribute the configuration:

[mhk@hadoop102 etc]$ xsync hadoop/
==================== hadoop102 ====================
sending incremental file list

sent 1,066 bytes  received 18 bytes  722.67 bytes/sec
total size is 112,919  speedup is 104.17
==================== hadoop103 ====================
sending incremental file list
hadoop/
hadoop/yarn-site.xml

sent 5,226 bytes  received 64 bytes  10,580.00 bytes/sec
total size is 112,919  speedup is 21.35
==================== hadoop104 ====================
sending incremental file list
hadoop/
hadoop/yarn-site.xml

(5) Start YARN

  1. Run on hadoop102 or hadoop103:
    [mhk@hadoop102 etc]$ start-yarn.sh
    Starting resourcemanagers on [ hadoop102 hadoop103 hadoop104]
    Starting nodemanagers
    
  2. Check the service states (a one-command alternative appears after this list):
    [mhk@hadoop103 ~]$ yarn rmadmin -getServiceState rm1
    standby
    [mhk@hadoop103 ~]$ yarn rmadmin -getServiceState rm2
    standby
    [mhk@hadoop103 ~]$ yarn rmadmin -getServiceState rm3
    active
    
  3. You can open the zkCli.sh client and inspect the contents of the ResourceManager election lock znode:

```shell
[zk: localhost:2181(CONNECTED) 0] get -s /yarn-leader-election/cluster-yarn1/ActiveStandbyElectorLock

	cluster-yarn1rm3
cZxid = 0x1c0000000e
ctime = Wed Jan 12 18:19:17 CST 2022
mZxid = 0x1c0000000e
mtime = Wed Jan 12 18:19:17 CST 2022
pZxid = 0x1c0000000e
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x4000a9de5cc0000
dataLength = 20
numChildren = 0
```
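
As noted in step 2 above, Hadoop 3.x also provides a one-command view of all ResourceManager states, printing each rm id with its current state:

```shell
yarn rmadmin -getAllServiceState
```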

6. Final Cluster Layout

After the entire HA setup is complete, the cluster takes the following shape:

| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode | NameNode | NameNode |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| ZooKeeper | ZooKeeper | ZooKeeper |
| ZKFC | ZKFC | ZKFC |
| ResourceManager | ResourceManager | ResourceManager |
| NodeManager | NodeManager | NodeManager |

A final jpsall check confirms that every planned process is running on each node:

```shell
[mhk@hadoop102 zookeeper-3.5.7]$ jpsall
=============== hadoop102 ===============
16368 NameNode
16081 QuorumPeerMain
18306 NodeManager
16471 DataNode
16904 DFSZKFailoverController
18202 ResourceManager
16702 JournalNode
=============== hadoop103 ===============
13200 DFSZKFailoverController
14304 ResourceManager
14369 NodeManager
12963 DataNode
13076 JournalNode
13416 NameNode
12780 QuorumPeerMain
=============== hadoop104 ===============
13465 ResourceManager
13530 NodeManager
12412 NameNode
12717 DFSZKFailoverController
12478 DataNode
12303 QuorumPeerMain
12591 JournalNode

```

