Introduction
canal [kə'næl], literally a waterway/channel/ditch, is a tool that parses MySQL incremental logs (binlog) and provides incremental data subscription and consumption.
How It Works
canal emulates the MySQL slave interaction protocol: it masquerades as a MySQL slave and sends a dump request to the MySQL master
on receiving the dump request, the MySQL master starts pushing its binary log to the slave (i.e. canal)
canal parses the binary log objects (raw byte streams)
QuickStart
MySQL must have binlog writing enabled, with binlog-format set to ROW mode. Check the current settings with:
show variables like 'log_bin';
show variables like 'binlog_format';
[mysqld]
log-bin=mysql-bin # enable binlog
binlog-format=ROW # use ROW mode
server_id=1 # required for MySQL replication; must not clash with canal's slaveId
Grant the MySQL account used by canal the privileges needed to act as a MySQL slave; if the account already exists, you can grant directly:
CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%' ;
FLUSH PRIVILEGES;
Download
Releases · alibaba/canal · GitHub
# client adapter
canal.adapter-1.1.8-SNAPSHOT.tar.gz
# admin console
canal.admin-1.1.8-SNAPSHOT.tar.gz
# server
canal.deployer-1.1.8-SNAPSHOT.tar.gz
Configuration
vi conf/example/instance.properties
Start: sh bin/startup.sh
On Windows, startup may fail with Unrecognized VM option 'PermSize=128m'. Since JDK 8 the permanent generation has been removed, so the -XX:MaxPermSize / -XX:PermSize options are no longer recognized by the JVM; remove -XX:PermSize=128m from the startup script and it will start.
Caused by: java.lang.ClassNotFoundException: com.alibaba.druid.pool.DruidDataSource
The druid dependency jar is missing from lib; copy the jar into lib and restart.
Caused by: org.h2.jdbc.JdbcSQLDataException: Value too long for column "CHARACTER VARYING":
Upgrade the h2 jar: replace version 2.1.210 with 2.2.224 (do not use 2.3.x, which is incompatible).
Delete h2.mv.db under conf/example and restart.
Create a Project
org.apache.maven.archetypes:maven-archetype-quickstart
Dependencies
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.protocol</artifactId>
    <version>1.1.7</version>
</dependency>
CanalHandler.java
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;

import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.List;

/**
 * Canal handler.
 * Purpose: print the data changes detected by the canal server.
 */
public class CanalHandler {

    public static void main(String[] args) throws InvalidProtocolBufferException {
        // 1. Create the connector
        CanalConnector canalConnector = CanalConnectors
                .newSingleConnector(new InetSocketAddress("localhost", 11111), "example", "", "");
        // 2. Connect and subscribe to all databases and tables (once, not per iteration)
        canalConnector.connect();
        canalConnector.subscribe(".*\\..*");
        // 3. Poll for data
        while (true) {
            // Fetch up to 100 entries per batch; get() acknowledges automatically
            // (use getWithoutAck()/ack() instead if you need manual acknowledgement)
            Message message = canalConnector.get(100);
            // 4. Get the entry list
            List<CanalEntry.Entry> entries = message.getEntries();
            // 5. Check whether there is any data
            if (entries.isEmpty()) {
                System.out.println(">>> no data <<<");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            } else {
                // 6. Parse each entry
                for (CanalEntry.Entry entry : entries) {
                    // Table name and entry type
                    String tableName = entry.getHeader().getTableName();
                    CanalEntry.EntryType entryType = entry.getEntryType();
                    // Only ROWDATA entries carry row changes
                    if (CanalEntry.EntryType.ROWDATA.equals(entryType)) {
                        // Deserialize the stored value (raw bytes) into a RowChange
                        ByteString storeValue = entry.getStoreValue();
                        CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(storeValue);
                        // Event type and the actual row data
                        CanalEntry.EventType eventType = rowChange.getEventType();
                        List<CanalEntry.RowData> rowDatasList = rowChange.getRowDatasList();
                        for (CanalEntry.RowData rowData : rowDatasList) {
                            // Column values before and after the change
                            List<CanalEntry.Column> beforeColumnsList = rowData.getBeforeColumnsList();
                            List<CanalEntry.Column> afterColumnsList = rowData.getAfterColumnsList();
                            // Store each row in a map
                            HashMap<String, Object> beforeMap = new HashMap<>();
                            HashMap<String, Object> afterMap = new HashMap<>();
                            // Handle each operation type
                            if (CanalEntry.EventType.INSERT.equals(eventType)) {
                                System.out.println("[" + tableName + "] rows inserted");
                                for (CanalEntry.Column column : afterColumnsList) {
                                    afterMap.put(column.getName(), column.getValue());
                                }
                                System.out.println("inserted: " + afterMap);
                            } else if (CanalEntry.EventType.UPDATE.equals(eventType)) {
                                System.out.println("[" + tableName + "] rows updated");
                                for (CanalEntry.Column column : beforeColumnsList) {
                                    beforeMap.put(column.getName(), column.getValue());
                                }
                                System.out.println("before: " + beforeMap);
                                System.out.println("----");
                                for (CanalEntry.Column column : afterColumnsList) {
                                    afterMap.put(column.getName(), column.getValue());
                                }
                                System.out.println("after: " + afterMap);
                            } else if (CanalEntry.EventType.DELETE.equals(eventType)) {
                                System.out.println("[" + tableName + "] rows deleted");
                                for (CanalEntry.Column column : beforeColumnsList) {
                                    beforeMap.put(column.getName(), column.getValue());
                                }
                                System.out.println("deleted: " + beforeMap);
                            }
                        }
                    }
                }
            }
        }
    }
}
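The filter passed to subscribe() is a comma-separated list of regular expressions matched against schema.table names, so ".*\\..*" subscribes to every table in every database. A minimal sketch of how such a pattern matches, using plain java.util.regex for illustration only (this is not canal's actual filter implementation):

```java
import java.util.regex.Pattern;

public class FilterDemo {
    public static void main(String[] args) {
        // The Java string ".*\\..*" is the regex .*\..* : anything, a literal dot, anything
        Pattern all = Pattern.compile(".*\\..*");
        // A narrower (hypothetical) filter: only tables in the mytest schema
        Pattern mytestOnly = Pattern.compile("mytest\\..*");

        System.out.println(all.matcher("mytest.canal_user").matches());        // true
        System.out.println(mytestOnly.matcher("mytest.canal_user").matches()); // true
        System.out.println(mytestOnly.matcher("other.canal_user").matches());  // false
    }
}
```

To receive changes from a single schema, you would pass a pattern like "mytest\\..*" to subscribe() instead of the catch-all.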
Start the client
Modify some data:
INSERT INTO `canal_user` (`username`,`password`,`name`,`roles`,`introduction`,`avatar`,`creation_date`) VALUES
('test','6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9','Canal Manager','admin',NULL,NULL,'2019-07-14 00:05:28');
Canal Admin QuickStart
Configuration
vi conf/application.yml
Import the schema
source canal_manager.sql
Modify the deployer configuration and restart:
vi conf/canal.properties
select password('admin')
+-------------------------------------------+
| password('admin') |
+-------------------------------------------+
| *4ACFE3202A5FF5CF467898FC58AAB1D615029441 |
+-------------------------------------------+
# On MySQL 8.0, where the password() function has been removed, use select upper(sha1(unhex(sha1('admin')))) instead
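Both expressions produce the mysql_native_password hash: the SHA-1 of the SHA-1 of the password, upper-cased hex, prefixed with *. The same computation in Java (class and method names here are my own, for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MysqlPasswordHash {
    // Compute the mysql_native_password hash: '*' + HEX(SHA1(SHA1(password)))
    public static String hash(String password) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] once = sha1.digest(password.getBytes(StandardCharsets.UTF_8));
            sha1.reset();
            byte[] twice = sha1.digest(once); // SHA-1 applied to the raw digest bytes
            StringBuilder sb = new StringBuilder("*");
            for (byte b : twice) {
                sb.append(String.format("%02X", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available in the JDK
        }
    }

    public static void main(String[] args) {
        // Matches the select password('admin') output shown above
        System.out.println(hash("admin")); // *4ACFE3202A5FF5CF467898FC58AAB1D615029441
    }
}
```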
Start: sh bin/startup.sh
Visit http://127.0.0.1:8089/; the default credentials are admin/123456
Canal Adapter QuickStart
Configuration
vi conf/application.yml
vi conf/rdb/mytest_user.yml
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  database: mytest
  table: canal_user
  targetTable: canal_user
  targetPk:
    id: id
  mapAll: true
  etlCondition: "where c_time>={}"
  commitBatch: 3000 # batch commit size
Start: sh bin/startup.sh
Modify data in mytest:
INSERT INTO `canal_user` (`username`,`password`,`name`,`roles`,`introduction`,`avatar`,`creation_date`) VALUES
('test','6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9','Canal Manager','admin',NULL,NULL,'2019-07-14 00:05:28');
The corresponding data in mytest2 changes accordingly.