
HDFS Lab (4): Cluster Operations

Published: 2020-07-07 14:36:04 · Source: Web · Views: 692 · Author: pcdog · Category: Big Data

Cluster Setup

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

HDFS User Guide

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

Administrators can configure individual daemons by setting the matching environment variable, typically in etc/hadoop/hadoop-env.sh:

Daemon                        Environment Variable
NameNode                      HDFS_NAMENODE_OPTS
DataNode                      HDFS_DATANODE_OPTS
Secondary NameNode            HDFS_SECONDARYNAMENODE_OPTS
ResourceManager               YARN_RESOURCEMANAGER_OPTS
NodeManager                   YARN_NODEMANAGER_OPTS
WebAppProxy                   YARN_PROXYSERVER_OPTS
MapReduce JobHistory Server   MAPRED_HISTORYSERVER_OPTS
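For example, to run the NameNode with the parallel garbage collector and a 4 GB Java heap, a line along these lines can be added to etc/hadoop/hadoop-env.sh (a sketch; tune the values for your hardware):

# In etc/hadoop/hadoop-env.sh: extra JVM options for the NameNode only
export HDFS_NAMENODE_OPTS="-XX:+UseParallelGC -Xmx4g"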

To start a Hadoop cluster, you will need to start both the HDFS and YARN clusters.

The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs namenode -format <cluster_name>
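After a successful format, the NameNode metadata directory (dfs.namenode.name.dir; /data/hadoop/name below is only a hypothetical path) should contain a fresh fsimage:

[hdfs]$ ls /data/hadoop/name/current
# Expect something like: VERSION  fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid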

Start the HDFS NameNode with the following command on the designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start namenode

Start an HDFS DataNode with the following command on each designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start datanode
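Once DataNodes are up, you can confirm that they have registered with the NameNode (run as hdfs; the output fields are illustrative):

[hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -report
# Prints configured capacity, remaining space, and one entry per live DataNode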

If etc/hadoop/workers and SSH trusted access are configured (see Single Node Setup), all of the HDFS processes can be started with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/start-dfs.sh
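A quick way to check which daemons are actually running on a host is the JDK's jps tool:

[hdfs]$ jps
# On the NameNode host, expect NameNode (and SecondaryNameNode, if co-located); on workers, DataNode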

Start YARN with the following command, run on the designated ResourceManager host as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start resourcemanager

Run a script to start a NodeManager on each designated host as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start nodemanager

Start a standalone WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing, it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start proxyserver

If etc/hadoop/workers and SSH trusted access are configured (see Single Node Setup), all of the YARN processes can be started with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/start-yarn.sh
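After YARN is up, you can list the NodeManagers that have registered with the ResourceManager (run as yarn):

[yarn]$ $HADOOP_HOME/bin/yarn node -list
# One line per NodeManager, with its state (e.g. RUNNING) and running container count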

Start the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon start historyserver
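To sanity-check the JobHistory Server, you can query its REST API (assuming the default port 19888; jhs_host is a placeholder for your history server host):

[mapred]$ curl -s http://jhs_host:19888/ws/v1/history/info
# Returns a small JSON document with the Hadoop version and build information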

Hadoop Shutdown

Stop the NameNode with the following command, run on the designated NameNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop namenode

Run a script to stop a DataNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop datanode

If etc/hadoop/workers and SSH trusted access are configured (see Single Node Setup), all of the HDFS processes may be stopped with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/stop-dfs.sh

Stop the ResourceManager with the following command, run on the designated ResourceManager as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop resourcemanager

Run a script to stop a NodeManager on a worker as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop nodemanager

If etc/hadoop/workers and SSH trusted access are configured (see Single Node Setup), all of the YARN processes can be stopped with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/stop-yarn.sh

Stop the WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing, it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop proxyserver

Stop the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon stop historyserver

Web Interfaces

Once the Hadoop cluster is up and running, check the web UIs of the components as described below:

Daemon                        Web Interface           Notes
NameNode                      http://nn_host:port/    Default HTTP port is 9870.
ResourceManager               http://rm_host:port/    Default HTTP port is 8088.
MapReduce JobHistory Server   http://jhs_host:port/   Default HTTP port is 19888.
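These pages can also be probed from the command line; for example, the NameNode's web server exposes a JMX endpoint (nn_host is a placeholder for your NameNode host):

[hdfs]$ curl -s 'http://nn_host:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
# Returns JSON including the NameNode's state (active/standby) and host name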
