1. Hadoop Installation
1.1 HDFS Configuration
Edit etc/hadoop/core-site.xml under the Hadoop directory:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/local/data/hadoop/tmp</value>
  </property>
</configuration>

Edit etc/hadoop/hdfs-site.xml (dfs.replication is 1 because this is a single-node, pseudo-distributed setup):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/local/data/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/local/data/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
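The dfs.* paths above live on the local filesystem; creating them up front with the right owner avoids permission errors when the daemons start. A small sketch, assuming the same /home/local/data/hadoop base path as the config above (DATA_ROOT is just a convenience variable; run as a user with write access to that path):

```shell
# DATA_ROOT mirrors the base path used in hadoop.tmp.dir above
DATA_ROOT=${DATA_ROOT:-/home/local/data/hadoop}
# Create the NameNode and DataNode storage directories
mkdir -p "$DATA_ROOT/tmp/dfs/name" "$DATA_ROOT/tmp/dfs/data"
ls "$DATA_ROOT/tmp/dfs"
```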
Edit etc/hadoop/hadoop-env.sh under the Hadoop directory and replace the default JAVA_HOME line with an explicit path (optional):
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/home/local/jdk1.8.0_231
1.2 Starting HDFS
Format the filesystem with bin/hdfs under the Hadoop directory:
sh hdfs namenode -format
Start HDFS with sbin/start-dfs.sh under the Hadoop directory:
sh start-dfs.sh
1.3 Verifying HDFS
Run jps in a terminal and check that the DataNode, NameNode, and SecondaryNameNode processes have started (NodeManager and ResourceManager will only appear once YARN, covered in the optional section below, is running):
18642 SecondaryNameNode
18793 Jps
1835 HMaster
18475 DataNode
18349 NameNode
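The manual check above can also be scripted. A small sketch that greps the jps output for the three HDFS daemons (jps ships with the JDK, so it is available wherever Hadoop runs):

```shell
# Collect any HDFS daemon missing from the jps output
missing=""
for d in NameNode DataNode SecondaryNameNode; do
  jps 2>/dev/null | grep -q "$d" || missing="$missing $d"
done
echo "missing daemons:${missing:- none}"
```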
The Hadoop WebUI listens on port 50070 by default; open that port in the firewall, then visit the address:
firewall-cmd --zone=public --add-port=50070/tcp --permanent
firewall-cmd --reload
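Beyond checking processes, a quick way to confirm HDFS actually accepts reads and writes is to round-trip a small file. A minimal sketch, assuming bin/hdfs is on the PATH (the /hdfs-smoke-test path is arbitrary); guarded so it is a no-op where HDFS is not installed:

```shell
SMOKE_DIR=/hdfs-smoke-test
if command -v hdfs >/dev/null 2>&1; then
  # Create a scratch dir, write a file from stdin, read it back, clean up
  hdfs dfs -mkdir -p "$SMOKE_DIR"
  echo "hello hdfs" | hdfs dfs -put - "$SMOKE_DIR/hello.txt"
  hdfs dfs -cat "$SMOKE_DIR/hello.txt"
  hdfs dfs -rm -r "$SMOKE_DIR"
else
  echo "hdfs command not found; add \$HADOOP_HOME/bin to PATH first"
fi
```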
1.4 YARN Resource Management Module (optional)
Edit etc/hadoop/mapred-site.xml (copy it from mapred-site.xml.template if it does not exist):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit etc/hadoop/yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Start YARN with sbin/start-yarn.sh under the Hadoop directory:
sh start-yarn.sh
The YARN WebUI listens on port 8088 by default; open that port in the firewall, then visit the address:
firewall-cmd --zone=public --add-port=8088/tcp --permanent
firewall-cmd --reload
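A quick way to confirm YARN can actually schedule work is to run the example job bundled with Hadoop. A sketch assuming the hadoop-2.7.7 install path used later in this guide (pi 2 5 means 2 map tasks with 5 samples each; the job prints an estimate of pi); guarded so it does nothing where the jar is absent:

```shell
HADOOP_HOME=${HADOOP_HOME:-/home/local/hadoop-2.7.7}
EXAMPLES_JAR="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar"
if [ -f "$EXAMPLES_JAR" ]; then
  # Submit the pi estimator; its containers show up in the WebUI on port 8088
  "$HADOOP_HOME/bin/hadoop" jar "$EXAMPLES_JAR" pi 2 5
else
  echo "examples jar not found at $EXAMPLES_JAR"
fi
```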
2. HBase Installation
Edit conf/hbase-site.xml under the HBase directory:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/local/data/zookeeper</value>
  </property>
</configuration>

Note that hbase.rootdir must match the fs.defaultFS address configured for HDFS above.
Edit hbase-env.sh under the HBase conf directory so that HBase manages its own ZooKeeper instance:
# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
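With the config in place, HBase can be started and checked from its shell. A sketch assuming the hbase-1.4.11 install path used later in this guide; guarded so it does nothing where HBase is absent:

```shell
HBASE_HOME=${HBASE_HOME:-/home/local/hbase-1.4.11}
if [ -x "$HBASE_HOME/bin/start-hbase.sh" ]; then
  # Starts HMaster plus the managed ZooKeeper (HBASE_MANAGES_ZK=true above)
  "$HBASE_HOME/bin/start-hbase.sh"
  # Non-interactive check: `status` lists active and dead servers
  echo "status" | "$HBASE_HOME/bin/hbase" shell
else
  echo "HBase not found at $HBASE_HOME"
fi
```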
3. Configuring Environment Variables
# Edit /etc/profile and add the HBase and Hadoop entries
vim /etc/profile
# Add the following:
# HBase
export HBASE_HOME=/home/local/hbase-1.4.11
export PATH=$PATH:$HBASE_HOME/bin
# Hadoop
export HADOOP_HOME=/home/local/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin
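After saving the file, reload it and confirm the new variables resolve; the version commands also prove the PATH entries work (a sketch; the paths are the ones exported above):

```shell
# Reload /etc/profile in the current shell so the exports take effect
. /etc/profile
echo "HADOOP_HOME=$HADOOP_HOME"
echo "HBASE_HOME=$HBASE_HOME"
# Each prints its version banner if the PATH entries are correct
command -v hadoop >/dev/null 2>&1 && hadoop version | head -n 1
command -v hbase >/dev/null 2>&1 && hbase version 2>/dev/null | head -n 1
echo "done"
```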
4. Configuring Passwordless SSH Login
Hadoop's start scripts use SSH to launch the daemons, so set up key-based login to localhost:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
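To confirm the setup, an SSH to localhost should now succeed without a password prompt. A quick check (BatchMode makes ssh fail immediately instead of prompting), written so it reports a result rather than hanging where no sshd is running:

```shell
# RESULT is ok only if key-based login to localhost works non-interactively
if ssh -o BatchMode=yes -o StrictHostKeyChecking=no localhost true 2>/dev/null; then
  RESULT=ok
else
  RESULT=fail
fi
echo "passwordless ssh to localhost: $RESULT"
```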