Uninstalling Ambari

* On a non-Ubuntu system (e.g. CentOS/RHEL), replace the apt-get commands below with their yum equivalents:
dpkg -l | grep  ->  yum list installed | grep
apt-get purge -y  ->  yum remove -y
The -y flag automatically answers "yes" to every prompt along the way.
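The apt/yum mapping above can be captured once at the top of a cleanup script. This is a minimal sketch; the variable names PKG_REMOVE and PKG_LIST are illustrative, not from the original post:

```shell
# Detect the package tooling once, then reuse it in every step below.
# Assumption: the host is either apt-based (Debian/Ubuntu) or
# yum-based (CentOS/RHEL).
if command -v apt-get >/dev/null 2>&1; then
    PKG_REMOVE="apt-get purge -y"
    PKG_LIST="dpkg -l"
else
    PKG_REMOVE="yum remove -y"
    PKG_LIST="yum list installed"
fi
echo "using: $PKG_REMOVE"
```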


* All of the commands below can be copied and pasted into the remote terminal in one batch; there is no need to type or run them one at a time.


# Log in as root first
su


1. Stop the services
ambari-agent stop
ambari-server stop


2. Remove the main components
apt-get purge -y ambari-agent
apt-get purge -y ambari-server


3. Remove the service components

# First list what is actually installed:
dpkg -l | grep ambari
dpkg -l | grep hdp

apt-get purge -y hdfs*
apt-get purge -y hive*
apt-get purge -y yarn*
apt-get purge -y mapreduce2*
apt-get purge -y tez*
apt-get purge -y hbase*
apt-get purge -y pig*
apt-get purge -y sqoop*
apt-get purge -y oozie*
apt-get purge -y zookeeper*
apt-get purge -y falcon*
apt-get purge -y storm*
apt-get purge -y spark*
apt-get purge -y flume*
apt-get purge -y accumulo*
apt-get purge -y smartsense*
apt-get purge -y hdp-select*
apt-get purge -y ambari*
apt-get purge -y atlas*
apt-get purge -y kafka*
apt-get purge -y knox*
apt-get purge -y ranger*
apt-get purge -y zeppelin*
apt-get purge -y druid*
apt-get purge -y mahout*
apt-get purge -y slider*
apt-get purge -y logsearch*
apt-get purge -y postgresql*
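The long run of purge commands above can also be written as one loop over the same package globs. A sketch; the echo makes it a dry run, so delete the word "echo" to actually remove the packages:

```shell
# Loop form of the purge list above. Dry run by default: each line
# printed is the command that would be executed.
for pkg in hdfs hive yarn mapreduce2 tez hbase pig sqoop oozie \
           zookeeper falcon storm spark flume accumulo smartsense \
           hdp-select ambari atlas kafka knox ranger zeppelin druid \
           mahout slider logsearch postgresql; do
    echo apt-get purge -y "${pkg}*"
done
```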


4. Remove the groups
groupdel -f hadoop
groupdel -f ranger
groupdel -f hdfs
groupdel -f knox
groupdel -f zookeeper
groupdel -f hbase
groupdel -f yarn
groupdel -f mapred
groupdel -f kafka
groupdel -f hive
groupdel -f sqoop
groupdel -f storm
groupdel -f spark
groupdel -f druid
groupdel -f livy
groupdel -f kms
groupdel -f postgres
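On a partially removed node, many of these groups may already be gone. A guarded loop (a sketch, not from the original post) keeps the output clean by checking with getent first:

```shell
# Remove each group only if it still exists: getent group exits
# non-zero for a missing group, so groupdel is skipped for it.
for grp in hadoop spark ranger hdfs knox zookeeper hbase yarn mapred \
           kafka hive sqoop storm druid livy kms postgres; do
    getent group "$grp" >/dev/null && groupdel -f "$grp" || true
done
```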


5. Remove the users
userdel nagios
userdel hive
userdel ambari-qa
userdel hbase
userdel oozie
userdel hcat
userdel mapred
userdel hdfs
userdel rrdcached
userdel zookeeper
userdel mysql
userdel sqoop
userdel puppet
userdel yarn
userdel tez
userdel hadoop
userdel knox
userdel storm
userdel falcon
userdel flume
userdel admin
userdel postgres
userdel ams
userdel spark
userdel kafka
userdel ranger
userdel kms
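The same guarded pattern works for users: id(1) fails for a missing account, so userdel only runs for users that are actually present. A sketch covering the accounts listed above:

```shell
# Delete each service account only if it exists on this host.
for usr in nagios hive ambari-qa hbase oozie hcat mapred hdfs \
           rrdcached zookeeper mysql sqoop puppet yarn tez hadoop \
           knox storm falcon flume admin postgres ams spark kafka \
           ranger kms; do
    id -u "$usr" >/dev/null 2>&1 && userdel "$usr" || true
done
```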


6. Remove the files
rm -rf /hadoop/*
rm -rf /etc/ambari*
rm -rf /etc/zookeeper/
rm -rf /etc/ranger*
rm -rf /etc/rc.d/init.d/ranger*
rm -rf /etc/hadoop/
rm -rf /etc/hbase/
rm -rf /etc/hive
rm -rf /usr/hdp/
rm -rf /tmp/ambari-qa
rm -rf /tmp/sqoop-ambari-qa/
rm -rf /kafka-logs/
rm -rf /var/lib/ambari*
rm -rf /var/lib/hive2*
rm -rf /var/run/kafka/
rm -rf /etc/flume
rm -rf /etc/hive-hcatalog
rm -rf /etc/hive-webhcat
rm -rf /etc/phoenix
rm -rf /etc/ambari-metrics-collector
rm -rf /etc/ambari-metrics-monitor
rm -rf /tmp/hdfs
rm -rf /tmp/hcat
rm -rf /etc/kafka
rm -rf /etc/oozie
rm -rf /etc/storm
rm -rf /etc/tez
rm -rf /etc/falcon
rm -rf /etc/slider
rm -rf /etc/pig
rm -rf /var/log/hadoop
rm -rf /var/log/hbase
rm -rf /var/log/hive
rm -rf /var/log/oozie
rm -rf /var/log/sqoop
rm -rf /var/log/zookeeper
rm -rf /var/log/flume
rm -rf /var/log/storm
rm -rf /var/log/hive-hcatalog
rm -rf /var/log/falcon
rm -rf /var/log/webhcat
rm -rf /var/log/hadoop-hdfs
rm -rf /var/log/hadoop-yarn
rm -rf /var/log/hadoop-mapreduce
rm -rf /var/log/spark
rm -rf /var/log/ranger
rm -rf /var/log/smartsense
rm -rf /usr/lib/flume
rm -rf /usr/lib/storm
rm -rf /var/lib/hive
rm -rf /var/lib/oozie
rm -rf /var/lib/zookeeper
rm -rf /var/lib/flume
rm -rf /var/lib/hadoop-hdfs
rm -rf /var/lib/slider
rm -rf /var/lib/ranger
rm -rf /var/tmp/oozie
rm -rf /var/tmp/sqoop
rm -rf /tmp/hive
rm -rf /tmp/hadoop-hdfs
rm -rf /etc/sqoop
rm -rf /var/log/ambari-metrics-collector
rm -rf /var/log/ambari-metrics-monitor
rm -rf /usr/lib/ambari-metrics-collector
rm -rf /var/lib/hadoop-yarn
rm -rf /var/lib/hadoop-mapreduce
rm -rf /var/lib/ambari-metrics-collector
rm -rf /var/log/kafka
rm -rf /tmp/hadoop-yarn
rm -rf /hadoop/zookeeper
rm -rf /hadoop/hdfs
rm -rf /etc/storm-slider-client
rm -rf /etc/postgresql
rm -rf /etc/postgresql-common
rm -rf /var/tmp/*
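After the deletions, a quick sanity check (a sketch; the grep pattern is illustrative) confirms nothing Ambari/HDP-related survived:

```shell
# Verify: the package database and /usr/hdp should both be clean.
dpkg -l 2>/dev/null | grep -Ei 'ambari|hdp' || echo "no ambari/hdp packages left"
[ -d /usr/hdp ] && echo "WARNING: /usr/hdp still exists" || echo "/usr/hdp removed"
```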


7. Finally, reboot the machine
reboot

Reposted from: https://www.cnblogs.com/maluscalc/p/11102395.html
