1. hadoop
cd /usr/local/hadoop
sbin/start-all.sh  starts Hadoop, bringing up HDFS and YARN in one step (not recommended)
sbin/start-dfs.sh  starts HDFS
sbin/start-yarn.sh  starts YARN
sbin/stop-all.sh  stops Hadoop (not recommended)
sbin/stop-dfs.sh  stops HDFS
sbin/stop-yarn.sh  stops YARN
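To verify the daemons came up, jps should list the Hadoop processes (on a single-node setup, typically NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager):
jps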
2. hive
hive  starts the Hive CLI
Ctrl+C  exits (typing quit; is the cleaner way out)
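A quick smoke test without entering the interactive CLI (assumes HDFS and the metastore are already up):
hive -e "show databases;"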
3. zookeeper
cd /usr/local/tools/zookeeper
nohup ./bin/zkServer.sh start > zookeeper.log 2>&1 &
Check the running status:
./bin/zkServer.sh status
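Once status reports Mode: standalone (or leader/follower in a cluster), the bundled CLI can be used to inspect the znode tree; 127.0.0.1:2181 assumes the default client port:
./bin/zkCli.sh -server 127.0.0.1:2181
ls /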
4. hbase
Startup order: hadoop -> hbase. If a separately installed ZooKeeper is used instead of the one HBase manages itself, the startup order is: hadoop -> zookeeper -> hbase.
cd /usr/local/hadoop
sbin/start-all.sh
cd /usr/local/tools/zookeeper
bin/zkServer.sh start
cd /usr/local/tools/hbase
bin/start-hbase.sh
cd /usr/local/tools/hbase
bin/hbase shell
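A quick smoke test inside the shell (the table name t1 and column family cf1 are arbitrary examples, not from the original notes):
create 't1', 'cf1'
put 't1', 'row1', 'cf1:a', 'value1'
scan 't1'
disable 't1'
drop 't1'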
The shutdown order is the exact reverse.
5. flume
cd /usr/local/tools/flume
bin/flume-ng agent -c conf -f ./conf/example.conf -n a1 -Dflume.root.logger=INFO,console
Parameter notes
• -n  agent name
• -c  configuration file directory
• -f  configuration file
• -Dflume.root.logger=INFO,console  sets the log level and appender (swap INFO for DEBUG for more verbose output)
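A minimal example.conf that the command above could point at; this is the standard single-node sketch (netcat source -> memory channel -> logger sink), with localhost:44444 as an assumed listening address:

# name the components of agent a1
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# netcat source listening on localhost:44444
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# logger sink prints events to the console
a1.sinks.k1.type = logger

# in-memory channel buffering up to 1000 events
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# wire source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Test it by sending a line with nc localhost 44444 and watching the event show up in the agent's console output.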
6. kafka
Startup order: zookeeper -> kafka
cd /usr/local/tools/zookeeper
nohup ./bin/zkServer.sh start > zookeeper.log 2>&1 &
cd /usr/local/tools/kafka
nohup bin/kafka-server-start.sh config/server.properties > logs/kafka-server.log 2>&1 &
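To confirm the broker came up (it appears in jps under the class name Kafka; the log path matches the redirect above):
jps | grep Kafka
tail logs/kafka-server.log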
Create a topic
bin/kafka-topics.sh --create --zookeeper IP:2181 --replication-factor 1 --partitions 1 --topic test
Parameter notes
• --zookeeper  the ZooKeeper address and port 2181 (best to list every ZooKeeper node's IP)
• --replication-factor  number of replicas
• --partitions  number of partitions
• --topic  the topic name
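To confirm the topic was created (same ZooKeeper connect string assumed):
bin/kafka-topics.sh --list --zookeeper IP:2181
bin/kafka-topics.sh --describe --zookeeper IP:2181 --topic test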
Delete a topic (only takes full effect if delete.topic.enable=true in server.properties; otherwise the topic is merely marked for deletion)
bin/kafka-topics.sh --delete --zookeeper IP:2181 --topic test
Start the producer
bin/kafka-console-producer.sh --broker-list IP:9092 --topic test
Start the consumer
bin/kafka-console-consumer.sh --zookeeper IP:2181 --topic test --from-beginning
Parameter notes
• --from-beginning  print from the start of the log, so messages produced before the consumer attached are also shown
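A quick end-to-end check with both consoles running against the same broker and topic: any line typed into the producer terminal should be printed by the consumer terminal a moment later.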
7. storm
cd /usr/local/tools/storm
bin/storm jar examples/storm-starter/storm-starter-topologies-1.0.3.jar org.apache.storm.starter.WordCountTopology WordCountTopology
storm jar <path to jar> <topology package.topology class> <topology name>
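Once submitted, the topology can be listed and stopped from the same CLI (standard storm subcommands):
bin/storm list
bin/storm kill WordCountTopology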