-
Official documentation notes
CDH 5.12.1 official documentation links
Installation docs for Spark2, Kafka, and similar components

- Linux
This build uses CentOS 7.2. The docs warn that "RHEL / CentOS / OEL 7.0 is not supported.", so CentOS 7.0 cannot run a 5.12.1 install.
- JDK
"Only 64 bit JDKs from Oracle are supported. Oracle JDK 7 is supported across all versions of Cloudera Manager 5 and CDH 5. Oracle JDK 8 is supported in C5.3.x and higher." Use the Oracle JDK.
- Database
Use UTF8 encoding for all custom databases.
"Cloudera Manager installation fails if GTID-based replication is enabled in MySQL." In other words, installation fails if GTID replication is on.
Supported versions: MySQL 5.6 and 5.7
- Disk space
5 GB on the partition hosting /var.
500 MB on the partition hosting /usr.
-
Deployment and installation
-
Buy Alibaba Cloud instances
-
Cluster base configuration
Reminder: run every step below on each machine in the cluster.
1 Stop the firewall
Command: service iptables stop
Verify:  service iptables status
2 Keep the firewall from starting on boot
Command: chkconfig iptables off
Verify:  chkconfig --list | grep iptables
Also disable SELinux: vi /etc/selinux/config
SELINUX=disabled
Flush the firewall rules:
iptables -L   # check whether any rules remain
iptables -F   # flush them
3 Set the hostname (basic Linux ops)
Commands:
(1) hostname hadoopcm-01
(2) vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=hadoopcm-01
4 Bind IPs to hostnames (critical, on every machine)
Command: (1) vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.101.54 hadoopcm-01.xxx.com hadoopcm-01
172.16.101.55 hadoopnn-01.xxx.com hadoopnn-01
172.16.101.56 hadoopnn-02.xxx.com hadoopnn-02
172.16.101.58 hadoopdn-01.xxx.com hadoopdn-01
172.16.101.59 hadoopdn-02.xxx.com hadoopdn-02
172.16.101.60 hadoopdn-03.xxx.com hadoopdn-03
If your company machines have DNS resolution, add the second (FQDN) column as above.
Verify: ping hadoopcm-01
Sync the file to every machine in the cluster: scp /etc/hosts root@hadoopnn-01:/etc/hosts
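Before syncing, it helps to confirm that every hostname in the file maps to exactly one IP. A minimal sketch (the helper name is made up here; the node names match the example file above):

```shell
#!/bin/sh
# hosts_lookup FILE NAME: print the IP that NAME maps to in a hosts-format
# file; fail if the name is missing or appears under more than one IP.
hosts_lookup() {
  file=$1; name=$2
  # Scan alias columns of non-comment lines for an exact match.
  ips=$(awk -v n="$name" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$file")
  count=$(printf '%s\n' "$ips" | grep -c .)
  [ "$count" -eq 1 ] || return 1
  printf '%s\n' "$ips"
}

# Example: check every node before running scp.
# for h in hadoopcm-01 hadoopnn-01 hadoopnn-02; do
#   hosts_lookup /etc/hosts "$h" >/dev/null || echo "BAD entry: $h"
# done
```

Looping this over the node list catches a missing or duplicated entry before it surfaces as a confusing agent heartbeat failure later.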
(repeat for each remaining node, e.g. hadoop001, hadoop002 in other setups)
5 Install the Oracle JDK (do not install OpenJDK)
(1) Download, then extract into the target directory (use one consistent JDK version throughout; jdk1.8.0_144 here):
[root@hadoopcm-01 tmp]# tar -xzvf jdk-8u144-linux-x64.gz -C /usr/java/
[root@hadoopcm-01 java]# chown -R root:root jdk1.8.0_144
(2) vi /etc/profile and append:
export JAVA_HOME=/usr/java/jdk1.8.0_144
export PATH=.:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
Verify: java -version
        which java
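Checking the version by eye works on one box; across a cluster it is easier to script. A sketch (the function name is illustrative, not from the original notes):

```shell
#!/bin/sh
# jdk_major: read a `java -version` banner line on stdin and print the
# major version, e.g. 'java version "1.8.0_144"' -> 8.
jdk_major() {
  sed -n 's/.*version "1\.\([0-9][0-9]*\)\..*/\1/p'
}

# Example: fail fast if the installed JDK is not 7 or 8 (the range CDH 5 supports).
# v=$(java -version 2>&1 | head -1 | jdk_major)
# case $v in 7|8) : ;; *) echo "unsupported JDK: $v"; exit 1 ;; esac
```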
6 Create a hadoop user with password admin (touches /etc/passwd, /etc/shadow, /etc/group). This step is optional: you can install directly as root, and once the CDH cluster is running each process is managed under its own service user anyway.
Requirement: root, or a user with passwordless sudo.
6.1 No LDAP and you have root: you are fine.
6.2 You get the machines with root, install as root, and a month later the company puts them on LDAP: the install went smoothly, but you will still want a sudo user.
6.3 LDAP will never be added and everyone has root, but you want to install and manage with a separate user: that user must have sudo.
6.4 The machines come with LDAP from day one: no problem, just set up a sudo user.
[root@hadoopcm-01 ~]# adduser hadoop
[root@hadoopcm-01 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@hadoopcm-01 etc]# vi /etc/sudoers
## Allow root to run any commands anywhere
root   ALL=(ALL)  ALL
hadoop ALL=(root) NOPASSWD:ALL
hadoop ALL=(ALL)  ALL
jpwu   ALL=(root) NOPASSWD:ALL
jpwu   ALL=(ALL)  NOPASSWD:ALL
### Verify the sudo rights
[root@hadoopcm-01 etc]# sudo su hadoop
[hadoop@hadoopcm-01 ~]$ sudo ls -l /root
total 4
-rw------- 1 root root 8 Apr 2 09:45 dead.letter
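The NOPASSWD rule can also be checked mechanically before moving on. A sketch over the sudoers line format shown above (the function name is an assumption):

```shell
#!/bin/sh
# sudoers_nopasswd USER FILE: succeed if FILE contains a rule granting
# USER passwordless sudo (a NOPASSWD:ALL entry).
sudoers_nopasswd() {
  grep -Eq "^$1[[:space:]]+ALL=.*NOPASSWD:ALL" "$2"
}

# Example:
# sudoers_nopasswd hadoop /etc/sudoers && echo "hadoop can sudo without a password"
```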
7 Check Python:
On the CDH 4.x line the system default was Python 2.6.6; upgrading it to 2.7.5 once broke HDFS HA and cost two months to untangle.
The CDH 5.x line accepts Python 2.6.6 or 2.7.
# Python 2.6.6 is the recommended version
python --version
CentOS 6.x ships Python 2.6.x; CentOS 7.x ships Python 2.7.x. Note that even if the cluster runs 2.7.x, a Python service you deploy later may still need 3.5.1.
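The acceptable-version rule above can be encoded in a pre-install check script. A sketch (the function name is illustrative):

```shell
#!/bin/sh
# python_ok VERSION: succeed only for the 2.6.x / 2.7.x range CDH 5.x accepts.
python_ok() {
  case $1 in
    2.6.*|2.7.*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example:
# v=$(python --version 2>&1 | awk '{print $2}')
# python_ok "$v" || echo "unsupported Python: $v"
```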
8 Time zone + clock synchronization
https://www.cloudera.com/documentation/enterprise/5-10-x/topics/install_cdh_enable_ntp.html
[root@hadoopcm-01 cdh5.7.0]# grep ZONE /etc/sysconfig/clock
ZONE="Asia/Shanghai"
Ops rule: identical time zones + synchronized clocks.
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum install -y ntpdate
Configure cluster time synchronization through the ntp service.
Cluster nodes: 172.16.101.54-90
Network: 172.16.101.0, hosts 172.16.101.1~255
NTP master node configuration:
cp /etc/ntp.conf /etc/ntp.conf.bak
cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
echo "restrict 172.16.101.0 mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpd
service ntpd restart
NTP client node configuration:
Then set up a cron job on every node: crontab -e and add the following:
*/30 * * * * /usr/sbin/ntpdate 172.16.101.54
Troubleshooting:
[root@hadoop002 ~]# /usr/sbin/ntpdate 192.168.1.131
16 Sep 11:44:06 ntpdate[5027]: no server suitable for synchronization found
This error means the firewall was not stopped and flushed; stop it and flush the rules first.
9 Disable transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
10 Swap: disk space standing in for memory
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p   # apply
vm.swappiness ranges from 0 to 100.
0 does not disable swap; it just makes the kernel maximally reluctant to use it.
100 makes the kernel swap as aggressively as possible.
Latency-sensitive clusters: set swappiness=0, accept that a job may die, then quickly add memory or raise its parameters and restart the job.
Latency-tolerant clusters: set swappiness=10 or 30, do not let jobs die, and let them run slowly instead.
Example with 4G RAM and 8G swap:
swappiness=0:  memory use climbs 3.5G -> 3.9G, swap stays at 0.
swappiness=30: around 3G of RAM in use with about 2G of swap.
-
MySQL installation
1. Extract and create directories
[root@hadoop39 local]# tar xzvf mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
[root@hadoop39 local]# mv mysql-5.7.11-linux-glibc2.5-x86_64 mysql
[root@hadoop39 local]# mkdir mysql/arch mysql/data mysql/tmp
2. Create my.cnf (full contents below)
[root@hadoop39 local]# vi /etc/my.cnf

[client]
port = 3306
socket = /usr/local/mysql/data/mysql.sock
default-character-set = utf8mb4

[mysqld]
port = 3306
socket = /usr/local/mysql/data/mysql.sock
skip-slave-start
skip-external-locking
key_buffer_size = 256M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
query_cache_size = 32M
max_allowed_packet = 16M
myisam_sort_buffer_size = 128M
tmp_table_size = 32M
table_open_cache = 512
thread_cache_size = 8
wait_timeout = 86400
interactive_timeout = 86400
max_connections = 600

# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 32

# isolation level and default engine
default-storage-engine = INNODB
transaction-isolation = READ-COMMITTED

server-id = 1739
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
pid-file = /usr/local/mysql/data/hostname.pid

# open performance schema
log-warnings
sysdate-is-now

binlog_format = ROW
log_bin_trust_function_creators = 1
log-error = /usr/local/mysql/data/hostname.err
log-bin = /usr/local/mysql/arch/mysql-bin
expire_logs_days = 7
innodb_write_io_threads = 16

relay-log = /usr/local/mysql/relay_log/relay-log
relay-log-index = /usr/local/mysql/relay_log/relay-log.index
relay_log_info_file = /usr/local/mysql/relay_log/relay-log.info

# tables that need to sync
replicate-wild-do-table = omsprd.%
replicate_wild_do_table = wmsb01.%
replicate_wild_do_table = wmsb02.%
replicate_wild_do_table = wmsb03.%
replicate_wild_do_table = wmsb04.%
replicate_wild_do_table = wmsb05.%
replicate_wild_do_table = wmsb06.%
replicate_wild_do_table = wmsb07.%
replicate_wild_do_table = wmsb08.%
replicate_wild_do_table = wmsb09.%
replicate_wild_do_table = wmsb10.%
replicate_wild_do_table = wmsb11.%
replicate_wild_do_table = wmsb27.%
replicate_wild_do_table = wmsb31.%
replicate_wild_do_table = wmsb32.%
replicate_wild_do_table = wmsb33.%
replicate_wild_do_table = wmsb34.%
replicate_wild_do_table = wmsb35.%
log_slave_updates = 1
gtid_mode = OFF
enforce_gtid_consistency = OFF

# slave
slave-parallel-type = LOGICAL_CLOCK
slave-parallel-workers = 4
master_info_repository = TABLE
relay_log_info_repository = TABLE
relay_log_recovery = ON

# other logs
#general_log = 1
#general_log_file = /usr/local/mysql/data/general_log.err
#slow_query_log = 1
#slow_query_log_file = /usr/local/mysql/data/slow_log.err

# for replication slave
sync_binlog = 500

# for innodb options
innodb_data_home_dir = /usr/local/mysql/data/
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend
innodb_log_group_home_dir = /usr/local/mysql/arch
innodb_log_files_in_group = 4
innodb_log_file_size = 1G
innodb_log_buffer_size = 200M
innodb_buffer_pool_size = 8G
#innodb_additional_mem_pool_size = 50M  # deprecated in 5.6
tmpdir = /usr/local/mysql/tmp
innodb_lock_wait_timeout = 1000
#innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit = 2
innodb_locks_unsafe_for_binlog = 1

# innodb io features: add for mysql5.5.8
performance_schema
innodb_read_io_threads = 4
innodb-write-io-threads = 4
innodb-io-capacity = 200
# purge threads: change default(0) to 1 for purge
innodb_purge_threads = 1
innodb_use_native_aio = on
# case-sensitive file names and separate tablespace
innodb_file_per_table = 1
lower_case_table_names = 1

[mysqldump]
quick
max_allowed_packet = 128M

[mysql]
no-auto-rehash
default-character-set = utf8mb4

[mysqlhotcopy]
interactive-timeout

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
3. Create the group and user
[root@hadoop39 local]# groupadd -g 101 dba
[root@hadoop39 local]# useradd -u 514 -g dba -G root -d /usr/local/mysql mysqladmin
[root@hadoop39 local]# id mysqladmin
uid=514(mysqladmin) gid=101(dba) groups=101(dba),0(root)
## Usually there is no need to set a password for mysqladmin; sudo to it from root or an LDAP user.
#[root@hadoop39 local]# passwd mysqladmin
Changing password for user mysqladmin.
New UNIX password:
BAD PASSWORD: it is too simplistic/systematic
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
## If user mysqladmin already exists, run usermod instead:
#[root@hadoop39 local]# usermod -u 514 -g dba -G root -d /usr/local/mysql mysqladmin
4. Copy the skeleton profile files into mysqladmin's home directory, so the per-user environment in the next step can be configured
[root@hadoop39 local]# cp /etc/skel/.* /usr/local/mysql  ### important
5. Configure environment variables
[root@hadoop39 local]# vi mysql/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
export MYSQL_BASE=/usr/local/mysql
export PATH=${MYSQL_BASE}/bin:$PATH

unset USERNAME
#stty erase ^H

# set umask to 022
umask 022
PS1=`uname -n`":"$USER":"$PWD":>"; export PS1
## end
6. Set ownership and permissions, then switch to mysqladmin to install
[root@hadoop39 local]# chown mysqladmin:dba /etc/my.cnf
[root@hadoop39 local]# chmod 640 /etc/my.cnf
[root@hadoop39 local]# chown -R mysqladmin:dba /usr/local/mysql
[root@hadoop39 local]# chmod -R 755 /usr/local/mysql
7. Register the service and enable it at boot
[root@hadoop39 local]# cd /usr/local/mysql
# Copy the service script into init.d, renamed to mysql
[root@hadoop39 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysql
# Make it executable
[root@hadoop39 mysql]# chmod +x /etc/rc.d/init.d/mysql
# Remove any previous service registration
[root@hadoop39 mysql]# chkconfig --del mysql
# Register the service
[root@hadoop39 mysql]# chkconfig --add mysql
[root@hadoop39 mysql]# chkconfig --level 345 mysql on
8. Install libaio and initialize the MySQL data directory
[root@hadoop39 mysql]# yum -y install libaio
[root@hadoop39 mysql]# sudo su - mysqladmin
hadoop39.jiuye:mysqladmin:/usr/local/mysql:> bin/mysqld \
  --defaults-file=/etc/my.cnf \
  --user=mysqladmin \
  --basedir=/usr/local/mysql/ \
  --datadir=/usr/local/mysql/data/ \
  --initialize
If you initialize with --initialize-insecure instead, root@localhost is created with an empty password; with --initialize it gets a random password, written into the log-error file.
(In 5.6 the password went into ~/.mysql_secret instead, which is easier to miss if you don't know to look there.)
9. Read the temporary password
hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:>cat hostname.err |grep password
2017-07-22T02:15:29.439671Z 1 [Note] A temporary password is generated for root@localhost: kFCqrXeh2y(0
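Grepping by eye works, but the password can also be extracted directly for scripted setups. A sketch (the function name is made up; it assumes the 5.7-style log line shown above):

```shell
#!/bin/sh
# mysql_temp_pass LOGFILE: print the last temporary root password that
# mysqld wrote into its error log during --initialize.
mysql_temp_pass() {
  sed -n 's/.*A temporary password is generated for root@localhost: //p' "$1" | tail -1
}

# Example:
# mysql_temp_pass /usr/local/mysql/data/hostname.err
```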
10. Start the server
/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &
11. Log in and change the passwords
hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:> mysql -uroot -p'kFCqrXeh2y(0'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.11-log

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> alter user root@localhost identified by 'syncdb123!';
Query OK, 0 rows affected (0.05 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'syncdb123!';
Query OK, 0 rows affected, 1 warning (0.02 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit;
Bye
12. Restart
hadoop39.jiuye:mysqladmin:/usr/local/mysql:> service mysql restart
hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:> mysql -uroot -p'syncdb123!'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.11-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
-
Configure httpd (rpm + parcels)
1. Install httpd and start the service
[root@hadoop-01 ~]# rpm -qa | grep httpd
[root@hadoop-01 ~]# yum install -y httpd
[root@hadoop-01 ~]# chkconfig --list | grep httpd
httpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@hadoop-01 ~]# chkconfig httpd on
[root@hadoop-01 ~]# chkconfig --list | grep httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@hadoop-01 ~]# service httpd start
2. Create the parcels directory
[root@hadoop-01 rpminstall]# cd /var/www/html
[root@hadoop-01 html]# mkdir parcels
[root@hadoop-01 html]# cd parcels
3. Download the parcel files
From http://archive.cloudera.com/cdh5/parcels/5.7.0/, download these three files:
CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1
manifest.json
Download them to your Windows desktop (or, if the network is good, wget them directly),
then upload them with rz into /var/www/html/parcels/.
Then rename CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1 to drop the trailing "1"; otherwise CDH treats the parcel as a still-downloading, incomplete file.
[root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
[root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1
[root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/manifest.json
[root@hadoop-01 parcels]# ll
total 1230064
-rw-r--r-- 1 root root 1445356350 Nov 16 21:14 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
-rw-r--r-- 1 root root         41 Sep 22 04:25 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1
-rw-r--r-- 1 root root      56892 Sep 22 04:27 manifest.json
[root@hadoop-01 parcels]# mv CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha
Verify the download is not corrupted:
[root@sht-sgmhadoopcm-01 parcels]# sha1sum CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
b6d4bafacd1cfad6a9e1c8f951929c616ca02d8f  CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
[root@sht-sgmhadoopcm-01 parcels]# cat CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha
b6d4bafacd1cfad6a9e1c8f951929c616ca02d8f
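The two commands above can be wrapped into a single check, which is handy when mirroring several parcels. A sketch (the function name is made up; it assumes the .sha file holds one bare hash, as shown):

```shell
#!/bin/sh
# parcel_verify PARCEL: compare the sha1sum of PARCEL against the hash
# stored in PARCEL.sha; succeed only when they match.
parcel_verify() {
  expected=$(cat "$1.sha")
  actual=$(sha1sum "$1" | awk '{print $1}')
  [ "$expected" = "$actual" ]
}

# Example:
# parcel_verify CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel && echo "parcel OK"
```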
4. From http://archive.cloudera.com/cm5/repo-as-tarball/5.7.0/, download the cm5.7.0-centos6.tar.gz package
[root@hadoop-01 ~]$ cd /opt/rpminstall
[root@hadoop-01 rpminstall]$ wget http://archive.cloudera.com/cm5/repo-as-tarball/5.7.0/cm5.7.0-centos6.tar.gz
[root@hadoop-01 rpminstall]$ ll
total 1523552
-rw-r--r-- 1 root root 815597424 Sep 22 02:00 cm5.7.0-centos6.tar.gz
5. Extract cm5.7.0-centos6.tar.gz; the path must mirror the official download layout exactly
[root@hadoop-01 rpminstall]$ tar -zxf cm5.7.0-centos6.tar.gz -C /var/www/html/
[root@hadoop-01 rpminstall]$ cd /var/www/html/
[root@hadoop-01 html]$ ll
total 8
drwxrwxr-x 3 1106  592 4096 Oct 27 10:09 cm
drwxr-xr-x 2 root root 4096 Apr  2 15:55 parcels
6. Create the same directory path as on the official site, then mv
[root@hadoop-01 html]$ mkdir -p cm5/redhat/6/x86_64/
[root@hadoop-01 html]$ mv cm cm5/redhat/6/x86_64/
[root@hadoop-01 html]$
7. Configure the local yum repo; during installation the CDH cluster will pull packages from here instead of the official site
[root@hadoop-01 html]$ vim /etc/yum.repos.d/cloudera-manager.repo
[cloudera-manager]
name = Cloudera Manager, Version 5.7.0
baseurl = http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/
gpgcheck = 0
### Reminder: configure the cloudera-manager.repo file on every machine
8. Check in a browser that both of the following URLs respond; if they do, the setup is correct (we use the Alibaba Cloud internal IP here)
http://172.16.101.54/parcels/
http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/
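Since the repo file must be identical on every node, generating it beats hand-editing. A sketch (the function name is illustrative; the mirror IP is the one used throughout these notes):

```shell
#!/bin/sh
# cm_repo HOST: emit a cloudera-manager.repo pointing at a local mirror.
cm_repo() {
  cat <<EOF
[cloudera-manager]
name = Cloudera Manager, Version 5.7.0
baseurl = http://$1/cm5/redhat/6/x86_64/cm/5/
gpgcheck = 0
EOF
}

# Example: write it on (or push it to) every node.
# cm_repo 172.16.101.54 > /etc/yum.repos.d/cloudera-manager.repo
```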
cm: daemons + server + agent, closed source.
parcel: a single file with the .parcel suffix that bundles Apache Hadoop, ZK, HBase, Hue, Oozie, and so on ("parcel" as in package).
9. When are these two URLs used? (important)
In the CDH5.7.0 Installation.docx document, at step 2.05, on the "Select Repository" screen:
a. Click "More Options"; in the dialog, find the addresses to the right of "Remote Parcel Repository URLs".
b. Delete the URLs, keep a single entry, and replace it with http://172.16.101.54/parcels/
c. Click "Save Changes" and wait a moment; the page refreshes automatically.
d. Under "Select the version of CDH", choose the CDH entry.
e. For "Additional Parcels", select "None" everywhere.
f. Choose "Custom Repository" and paste in http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/
10. Official reference links
http://archive.cloudera.com/cdh5/parcels/5.8.2/
http://archive.cloudera.com/cm5/repo-as-tarball/5.8.2/
http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.8.2/
-
Install CM Server from RPM packages and start it
1. Install the server RPMs on the CM host
cd /var/www/html/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64
yum install -y cloudera-manager-daemons-5.7.0-1.cm570.p0.76.el6.x86_64.rpm
yum install -y cloudera-manager-server-5.7.0-1.cm570.p0.76.el6.x86_64.rpm
2. Configure the JDBC driver mysql-connector-java.jar
cd /usr/share/java
wget http://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.37.zip  # link may no longer be valid
unzip mysql-connector-java-5.1.37.zip
cd mysql-connector-java-5.1.37
cp mysql-connector-java-5.1.37-bin.jar ../mysql-connector-java.jar
The jar must be renamed to exactly "mysql-connector-java.jar".
3. On the CM machine, install MySQL and configure the cmf user and database
create database cmf DEFAULT CHARACTER SET utf8;
grant all on cmf.* TO 'cmf'@'localhost' IDENTIFIED BY 'cmf_password';
flush privileges;
To recreate it from scratch:
mysql> drop database cmf;
mysql> CREATE DATABASE `cmf` /*!40100 DEFAULT CHARACTER SET utf8 */;
4. Point cloudera-scm-server at MySQL
[root@hadoopcm-01 cloudera-scm-server]# cd /etc/cloudera-scm-server/
[root@hadoopcm-01 cloudera-scm-server]# vi db.properties

# Copyright (c) 2012 Cloudera, Inc. All rights reserved.
#
# This file describes the database connection.
#
# The database type
# Currently 'mysql', 'postgresql' and 'oracle' are valid databases.
com.cloudera.cmf.db.type=mysql
# The database host
# If a non standard port is needed, use 'hostname:port'
com.cloudera.cmf.db.host=localhost
# The database name
com.cloudera.cmf.db.name=cmf
# The database user
com.cloudera.cmf.db.user=cmf
# The database user's password
com.cloudera.cmf.db.password=cmf_password
Note: on CDH 5.9.1/5.10 one more line is needed, changing the setup type from db=init to db=external.
5. Start the CM server
service cloudera-scm-server start
6. Tail the server log on the CM host in real time
cd /var/log/cloudera-scm-server/
tail -f cloudera-scm-server.log
2017-03-17 21:32:05,253 INFO WebServerImpl:org.mortbay.log: Started SelectChannelConnector@0.0.0.0:7180
2017-03-17 21:32:05,253 INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.
# Note: Cloudera Manager metadata is now stored in the cmf database.
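Rather than watching the tail by hand, startup can be detected by looking for the Jetty line. A sketch (the function name is made up; the marker string matches the log excerpt above):

```shell
#!/bin/sh
# cm_started LOGFILE: succeed once the server log contains the line that
# Cloudera Manager prints when its web server is up.
cm_started() {
  grep -q 'Started Jetty server' "$1"
}

# Example: poll until the UI on port 7180 should be reachable.
# until cm_started /var/log/cloudera-scm-server/cloudera-scm-server.log; do sleep 5; done
```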
7. Wait about a minute, then open http://172.16.101.54:7180 and log in:
User:admin Password:admin
Installation of Cloudera Manager is now complete.
-