Use Kafka for log collection: nginx receives log requests over HTTP and writes them straight to Kafka through ngx_kafka_module.
1. Information to collect:
(1) User ID (user_id)
(2) Action time (act_time)
(3) Action (action), one of: click (click), save a job (job_collect), submit a resume (cv_send), upload a resume (cv_upload)
(4) Target company code (job_code)
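ngx_kafka_module forwards the raw request body to Kafka, so the exact record format is up to the client. As a minimal sketch, assuming the fields above are sent as a JSON body (all values below are made up for illustration), one log record could look like:
{"user_id": "u_1001", "act_time": "2020-05-20 10:15:30", "action": "cv_send", "job_code": "c_2002"}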
Install librdkafka (the Kafka C client library that ngx_kafka_module depends on):
git clone https://github.com/edenhill/librdkafka
cd librdkafka
./configure
make
sudo make install
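By default this installs the shared library under /usr/local/lib; you can check that it is there (file names may vary by version):
ls /usr/local/lib | grep rdkafka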
Download the nginx Kafka module and recompile nginx with it:
git clone https://github.com/brg-liuwei/ngx_kafka_module
# cd /path/to/nginx   (the nginx source directory)
# --add-module points at the directory where ngx_kafka_module was cloned
./configure --add-module=/usr/local/zookeeper/ngx_kafka_module
make
sudo make install
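To confirm the module was compiled in, check the configure arguments recorded in the nginx binary (assuming the default install prefix /usr/local/nginx):
/usr/local/nginx/sbin/nginx -V 2>&1 | grep ngx_kafka_module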
Start nginx:
/usr/local/nginx/sbin/nginx
If it fails to start with an error like:
loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory
the dynamic linker cannot find the librdkafka library that was just installed.
Solution: register the .so library path. Run (as root):
echo "/usr/local/lib" >> /etc/ld.so.conf
/etc/ld.so.conf is the Linux configuration file that lists the directories the dynamic linker searches for shared libraries.
Then run the following command to rebuild the linker cache so the change takes effect:
ldconfig
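You can verify that the library is now in the linker cache, then start nginx again:
ldconfig -p | grep librdkafka
/usr/local/nginx/sbin/nginx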
Edit the nginx configuration:
cd /usr/local/nginx/conf
vim nginx.conf
# inside the http {} block, enable the module and list the Kafka brokers:
kafka;
kafka_broker_list 192.168.181.141:9092 192.168.181.142:9092 192.168.181.144:9092; # host:port ...

server {
    # some other configs
    location = /log {
        # optional directive: kafka_partition [<partition-num> | auto]
        #
        # kafka_partition auto; # default value
        # kafka_partition 0;
        # kafka_partition 1;
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_
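The configuration snippet above is cut off. Note that, per the ngx_kafka_module README, the location also needs a kafka_topic directive naming the topic to write to, e.g. kafka_topic tp_log; (tp_log is a placeholder topic name, not from the original config). After reloading nginx, a quick end-to-end test is to POST a sample record to /log and watch the topic with Kafka's console consumer (broker address taken from the config above; paths and topic name are assumptions):
/usr/local/nginx/sbin/nginx -s reload
# send one test log record through nginx
curl -X POST -d '{"user_id":"u_1001","act_time":"2020-05-20 10:15:30","action":"cv_send","job_code":"c_2002"}' http://localhost/log
# consume it from Kafka to confirm delivery (run from the Kafka install directory; newer Kafka versions use --bootstrap-server)
bin/kafka-console-consumer.sh --bootstrap-server 192.168.181.141:9092 --topic tp_log --from-beginning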