1. Create the agent configuration file
Save the following content as agent4.conf in Flume's working directory, /opt/flume/bin:
agent4.sources = netsource
agent4.sinks = hdfssink
agent4.channels = memorychannel
agent4.sources.netsource.type = netcat
agent4.sources.netsource.bind = localhost
agent4.sources.netsource.port = 3000
agent4.sinks.hdfssink.type = hdfs
agent4.sinks.hdfssink.hdfs.path = /flume
agent4.sinks.hdfssink.hdfs.filePrefix = log
agent4.sinks.hdfssink.hdfs.rollInterval = 0
agent4.sinks.hdfssink.hdfs.rollCount = 3
agent4.sinks.hdfssink.hdfs.fileType = DataStream
agent4.channels.memorychannel.type = memory
agent4.channels.memorychannel.capacity = 1000
agent4.channels.memorychannel.transactionCapacity = 100
agent4.sources.netsource.channels = memorychannel
agent4.sinks.hdfssink.channel = memorychannel
Configuration notes: the agent uses a netcat source and an HDFS sink. The sink writes its files under the /flume directory in HDFS, each file name is prefixed with log, and each file holds at most three events: hdfs.rollCount = 3 rolls the file after every third event, while hdfs.rollInterval = 0 disables time-based rolling. hdfs.fileType = DataStream makes the sink write the raw event text instead of the default SequenceFile format.
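Note that the HDFS sink also rolls files by size (hdfs.rollSize, which defaults to 1024 bytes). The events in this example are far smaller than that threshold, so size-based rolling never triggers here, but if you want rolling to depend on the event count alone you could disable it as well; a minimal sketch:
agent4.sinks.hdfssink.hdfs.rollSize = 0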
2. Start the Flume agent
caiyong@caiyong:/opt/flume/bin$ flume-ng agent --conf conf --conf-file agent4.conf --name agent4
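The agent runs in the foreground and, with the settings above, logs to a file. While testing it can be convenient to send the log output to the console instead; flume-ng accepts standard Java system properties on the command line, for example:
caiyong@caiyong:/opt/flume/bin$ flume-ng agent --conf conf --conf-file agent4.conf --name agent4 -Dflume.root.logger=INFO,console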
3. In another window, open a remote connection and send a few events
caiyong@caiyong:~$ telnet localhost 3000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
write
OK
data
OK
based
OK
on
OK
network
OK
to
OK
HDFS
OK
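The netcat source acknowledges each line it receives with OK. When you are done sending events, the session can be closed with the telnet escape character: press Ctrl+] and then type quit at the telnet> prompt:
^]
telnet> quit
Connection closed.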
4. Check the results
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -ls /
Found 7 items
drwxr-xr-x - caiyong supergroup 0 2015-03-14 14:46 /flume
drwxr-xr-x - caiyong supergroup 0 2015-03-05 14:51 /hbase
drwxr-xr-x - caiyong supergroup 0 2015-03-14 13:07 /home
drwxr-xr-x - caiyong supergroup 0 2015-03-07 16:03 /pig
drwxr-xr-x - caiyong supergroup 0 2015-03-11 19:12 /testcopy
drwxr-xr-x - caiyong supergroup 0 2015-03-14 08:39 /tmp
drwxr-xr-x - caiyong supergroup 0 2015-03-11 19:04 /user
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -ls /flume/
Found 3 items
-rw-r--r-- 1 caiyong supergroup 20 2015-03-14 14:45 /flume/log.1426315528974
-rw-r--r-- 1 caiyong supergroup 17 2015-03-14 14:45 /flume/log.1426315528975
-rw-r--r-- 1 caiyong supergroup 6 2015-03-14 14:46 /flume/log.1426315528976
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -cat /flume/*
write
data
based
on
network
to
HDFS
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -cat /flume/log*4
write
data
based
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -cat /flume/log*5
on
network
to
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -cat /flume/log*6
HDFS
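Seven events were sent with hdfs.rollCount = 3, so the sink rolled to a new file after every third event, leaving three files holding 3, 3, and 1 events respectively; the numeric suffix in each file name is a counter that the HDFS sink initializes to the current time in milliseconds, which is why the suffixes are consecutive. A single file can also be read by its exact name rather than a glob, for example:
caiyong@caiyong:/opt/hadoop$ bin/hadoop fs -cat /flume/log.1426315528974
write
data
based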