Part 1: Installing Nginx on a single machine
1. Upload the nginx source package
2. Extract nginx
tar -zxvf nginx-1.12.2.tar.gz -C /usr/local/src/
3. Change into the nginx source directory
cd /usr/local/src/nginx-1.12.2/
4. Pre-compile (this first run of configure will fail if gcc is not yet installed)
./configure
5. Install the gcc compiler and development libraries
yum -y install gcc pcre-devel openssl openssl-devel
6. Then run configure again
./configure
7. Compile and install nginx
make && make install
8. Start nginx (from the install directory, /usr/local/nginx by default)
sbin/nginx
9. Check the nginx process and listening port
ps -ef | grep nginx
netstat -anpt | grep nginx
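Beyond `ps` and `netstat`, a quick way to confirm nginx is actually accepting connections is to probe its port. A minimal sketch (hypothetical helper, not part of the install; assumes the default listen port 80):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Prints True if nginx is up and listening on port 80.
print(port_open("localhost", 80))
```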
--------------------------------------------------------------------------
Deploy the Spring Boot application on several servers, then start it on each:
java -jar qianqian-0.0.1-SNAPSHOT.war >> ./logs 2>&1 &
--------------------------------------------------------------------------
Edit the nginx configuration file so that nginx load-balances across the backends:
vi /usr/local/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;

    # Where response data comes from (fetched from the tomcat group)
    upstream tomcats {
        server hdp-01:8080 weight=1;
        server hdp-02:8080 weight=1;
        server hdp-03:8080 weight=1;
    }

    server {
        listen       80;
        server_name  node-5.xiaoniu.com;

        location ~ .* {
            proxy_pass http://tomcats;
        }
    }
}
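With equal `weight=1` entries, nginx spreads requests evenly over the three Tomcats using smooth weighted round-robin. The selection idea can be sketched as follows (illustrative Python with the hostnames from the config above, not nginx's actual source):

```python
# Smooth weighted round-robin: each pick, every server's running
# counter grows by its weight; the largest counter wins and is
# then reduced by the total weight, keeping the rotation fair.
class Server:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # configured weight
        self.current = 0       # running counter

def pick(servers):
    total = sum(s.weight for s in servers)
    for s in servers:
        s.current += s.weight
    best = max(servers, key=lambda s: s.current)
    best.current -= total
    return best.name

backends = [Server("hdp-01:8080", 1),
            Server("hdp-02:8080", 1),
            Server("hdp-03:8080", 1)]
print([pick(backends) for _ in range(6)])
# → cycles hdp-01, hdp-02, hdp-03, hdp-01, ...
```

Raising one server's `weight` in the config makes it win proportionally more picks, which is how uneven hardware is accommodated.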
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Part 2: Installing the nginx-kafka plugin
1. Install git
yum install -y git
2. Change to /usr/local/src, then clone the Kafka C client source
cd /usr/local/src
git clone https://github.com/edenhill/librdkafka
3. Enter librdkafka and build it
cd librdkafka
yum install -y gcc gcc-c++ pcre-devel zlib-devel
./configure
make && make install
(If the current user is not root, use "sudo make && sudo make install" instead)
4. Install the nginx-Kafka integration plugin: go to /usr/local/src and clone the module source
cd /usr/local/src
git clone https://github.com/brg-liuwei/ngx_kafka_module
5. Go back into the nginx source directory (rebuild nginx with the plugin compiled in at the same time)
cd /usr/local/src/nginx-1.12.2
./configure --add-module=/usr/local/src/ngx_kafka_module/
make
make install
6. Edit the nginx configuration file
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;

    kafka;
    kafka_broker_list hdp-04:9092 hdp-05:9092 hdp-06:9092;

    server {
        listen       80;
        server_name  hdp-08;

        location = /kafka/track {
            kafka_topic track;
        }
        location = /kafka/user {
            kafka_topic user;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
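The two exact-match `location =` blocks above route a request path to a Kafka topic. The routing amounts to a simple path-to-topic table, sketched here in Python (a hypothetical illustration, not part of ngx_kafka_module):

```python
# Path-to-topic routing as declared in the nginx config above.
ROUTES = {
    "/kafka/track": "track",
    "/kafka/user": "user",
}

def topic_for(path):
    """Return the Kafka topic for an exact-match location, or None."""
    return ROUTES.get(path)

print(topic_for("/kafka/track"))  # → track
```

Any POST body sent to a matching path is forwarded as a message to that topic; paths with no matching location fall through to nginx's normal handling.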
7. Start the ZooKeeper and Kafka clusters (then create the topics)
On hdp-01, run the custom ZooKeeper management shell script
./zkmanager start
apps/kafka-0.10.2/bin/kafka-server-start.sh -daemon apps/kafka-0.10.2/config/server.properties
7-2: Create the topics
apps/kafka-0.10.2/bin/kafka-topics.sh --create --zookeeper hdp-01:2181,hdp-02:2181,hdp-03:2181 --replication-factor 3 --partitions 3 --topic track
apps/kafka-0.10.2/bin/kafka-topics.sh --create --zookeeper hdp-01:2181,hdp-02:2181,hdp-03:2181 --replication-factor 3 --partitions 3 --topic user
7-3: View the topic descriptions
apps/kafka-0.10.2/bin/kafka-topics.sh --describe --zookeeper hdp-01:2181,hdp-02:2181,hdp-03:2181 --topic track
apps/kafka-0.10.2/bin/kafka-topics.sh --describe --zookeeper hdp-01:2181,hdp-02:2181,hdp-03:2181 --topic user
8. Starting nginx may fail with an error that librdkafka.so.1 cannot be found:
error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory
9. Register the shared library path with the dynamic linker
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
10. Test: post data to nginx, then check whether a Kafka consumer receives it
Run on the Nginx server:
curl localhost/kafka/track -d "message send to kafka topic"
curl localhost/kafka/user -d "message send to kafka topic"
Run on a Kafka server to consume:
apps/kafka-0.10.2/bin/kafka-console-consumer.sh --bootstrap-server hdp-04:9092,hdp-05:9092,hdp-06:9092 --topic track --from-beginning
apps/kafka-0.10.2/bin/kafka-console-consumer.sh --bootstrap-server hdp-04:9092,hdp-05:9092,hdp-06:9092 --topic user --from-beginning
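The curl smoke test can also be scripted. A minimal Python sketch using the standard library (assumes nginx is reachable at localhost; this builds the request, and the commented line actually sends it):

```python
import urllib.request

NGINX = "http://localhost"  # assumed address of the nginx front end

def kafka_post(topic, payload):
    """Build a POST to the ngx_kafka_module endpoint for `topic`."""
    return urllib.request.Request(
        f"{NGINX}/kafka/{topic}",
        data=payload.encode("utf-8"),
        method="POST",
    )

req = kafka_post("track", "message send to kafka topic")
print(req.full_url, req.get_method())  # → http://localhost/kafka/track POST
# urllib.request.urlopen(req)  # actually send (requires nginx to be running)
```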