Integrating Canal with canal-admin and canal-adapter

I. Modify the database configuration

1. Modify the my.cnf configuration:

log-bin=mysql-bin    # enable binary logging and set the log file prefix
binlog-format=ROW    # ROW mode records the content of every modified row
server_id=6          # must be unique and different from Canal's slaveId; from 1.1.4 onward Canal auto-generates its slaveId, so no extra configuration is needed on the Canal side
#binlog-do-db=       # databases to replicate; if omitted, all databases are logged
#binlog-ignore-db=   # databases to exclude from the binlog
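
After restarting MySQL, verify that the binlog is enabled and in ROW mode (standard MySQL commands):

SHOW VARIABLES LIKE 'log_bin';        -- should report ON
SHOW VARIABLES LIKE 'binlog_format';  -- should report ROW
SHOW MASTER STATUS;                   -- shows the current binlog file and position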

2. Create a user and grant it replication privileges, specifying the databases/tables it can read and the hosts it may connect from

  Create the canal user:

          CREATE USER canal IDENTIFIED BY 'canal';    

  Grant canal SELECT plus binlog replication privileges on all databases and tables, from any host:

         GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%'; 

         *.* means every database and every table; % means any client IP address

  Reload the privilege tables:

         FLUSH PRIVILEGES;

Check that the user has the expected grants:

select u.User,u.Host,u.Repl_slave_priv,u.Repl_client_priv from mysql.user u

II. Modify the Canal server configuration

     Download: wget https://github.com/alibaba/canal/releases/download/canal-1.1.4/canal.deployer-1.1.4.tar.gz

     Version: 1.1.4

     

canal.properties (system-level configuration):
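
For a single standalone server the defaults are mostly sufficient; the entries worth checking (a minimal sketch based on the stock 1.1.4 file) are:

# bind address; leave empty to use the local address
canal.ip =
# port the Java client connects to (used by the test code below)
canal.port = 11111
# instances to load on startup; "example" is the default instance
canal.destinations = example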

Modify instance.properties (instance-level configuration):
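
The key entries in conf/example/instance.properties (a sketch; the source MySQL address below is assumed to be the same one the adapter uses later in this post):

# source MySQL address and the account created in section I
canal.instance.master.address=10.3.30.6:3306
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset=UTF-8
# subscribe to all databases and tables
canal.instance.filter.regex=.*\\..*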

Start the server and confirm it comes up successfully:
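
A minimal sketch, assuming the default layout of the deployer package:

sh bin/startup.sh
# server log
tail -f logs/canal/canal.log
# instance log for the "example" instance
tail -f logs/example/example.log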

III. Write Java test code
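
The test program uses the Canal client library; with Maven the dependency is published under these coordinates:

<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.4</version>
</dependency>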

package com.zxy.loglearn.controller;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

import java.net.InetSocketAddress;
import java.util.List;

/**
 * @USER: zhouxy
 * @DATE: 2020/1/15 15:32
 **/
public class TestMain {

    public static void main(String[] args) throws Exception {
        CanalConnector connector = CanalConnectors.newSingleConnector(new InetSocketAddress("10.3.130.16",
                11111), "example", "canal", "canal");
        int batchSize = 1000;
        int emptyCount = 0;
        System.out.println("开始链接");
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            int totalEmptyCount = 120;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries without acknowledging
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    emptyCount++;
                    System.out.println("empty count : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    printEntry(message.getEntries());
                }

                connector.ack(batchId); // acknowledge the batch
            }

            System.out.println("empty too many times, exit");
        } finally {
            connector.disconnect();
        }

    }

    private static void printEntry(List<CanalEntry.Entry> entries) {
        for (CanalEntry.Entry entry : entries) {
            if (entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONBEGIN || entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONEND) {
                continue;
            }

            CanalEntry.RowChange rowChange = null;
            try {
                rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(),
                        e);
            }

            CanalEntry.EventType eventType = rowChange.getEventType();
            System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                    entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                    entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),
                    eventType));

            for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == CanalEntry.EventType.DELETE) {
                    printColumn(rowData.getBeforeColumnsList());
                } else if (eventType == CanalEntry.EventType.INSERT) {
                    printColumn(rowData.getAfterColumnsList());
                } else {
                    System.out.println("-------&gt; before");
                    printColumn(rowData.getBeforeColumnsList());
                    System.out.println("-------&gt; after");
                    printColumn(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<CanalEntry.Column> columns) {
        for (CanalEntry.Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }
}
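
With the client running, any INSERT/UPDATE/DELETE executed against the source database is printed as binlog row data; if 120 consecutive polls come back empty (about two minutes, given the one-second sleep), the program prints "empty too many times, exit" and disconnects.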

IV. Integrate canal-admin

1. Download: wget https://github.com/alibaba/canal/releases/download/canal-1.1.4/canal.admin-1.1.4.tar.gz

    Version: 1.1.4, same as above

2. Create the database required by canal-admin
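
A minimal setup sketch, assuming the default layout of the canal-admin 1.1.4 package (init script conf/canal_manager.sql, main config conf/application.yml):

# create and initialize the canal_manager metadata database
mysql -h<metadata-mysql-host> -uroot -p < conf/canal_manager.sql

# conf/application.yml (trimmed)
server:
  port: 8089
spring.datasource:
  address: 127.0.0.1:3306      # MySQL that hosts canal_manager (assumption: adjust to your environment)
  database: canal_manager
  username: canal
  password: canal
canal:
  adminUser: admin
  adminPasswd: 123456          # assumption: the plain-text counterpart of the canal.admin.passwd hash used below

# start canal-admin, then open http://10.3.130.16:8089 in a browser
sh bin/startup.sh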

Modify the canal-server canal.properties: overwrite it with the contents of canal_local.properties, as shown below

# register ip
canal.register.ip =

# canal admin config
canal.admin.manager = 10.3.130.16:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9
canal.admin.register.auto = true
canal.admin.register.cluster =
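
For reference, canal.admin.passwd is (to my understanding of the 1.1.4 defaults) the MySQL-style password hash of the admin password, without the leading '*'. On MySQL 5.x it can be generated with:

SELECT PASSWORD('123456');
-- returns *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9; drop the leading '*' (PASSWORD() no longer exists in MySQL 8.0)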

 

Start canal-server and add a canal instance through the canal-admin web UI

 

Account: admin, password: 123456

Modify the canal-server canal.properties configuration file

Create a new instance

Check the port to confirm the service is up: netstat -ntul | grep 8089

From here on, canal-admin can be used to edit configuration files, start and stop services, view operation logs, and so on.

V. canal-adapter

It can sync data to MySQL, Elasticsearch, HBase, and more; this demo syncs to MySQL:

Download: wget https://github.com/alibaba/canal/releases/download/canal-1.1.4/canal.adapter-1.1.4.tar.gz

Version: 1.1.4

1. Modify the contents of conf/application.yml:

server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp # kafka rocketMQ
  canalServerHost: 10.3.130.16:11111
#  zookeeperHosts: slave1:2181
#  mqServers: 127.0.0.1:9092 #or rocketmq
#  flatMessage: true
  batchSize: 500
  syncBatchSize: 1000
  retries: 0
  timeout:
  accessKey:
  secretKey:
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://10.3.30.6:3306/mytest?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
      username: canal
      password: canal
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
      - name: rdb  # use the rdb adapter for relational-database sync
        key: mysql1  # unique key for this adapter; must match outerAdapterKey in the table mapping config
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver # JDBC driver class; some driver jars must be copied into the lib directory manually
          jdbc.url: jdbc:mysql://10.1.4.2:3306/mytest?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC # JDBC settings of the target database
          jdbc.username: root # username
          jdbc.password: 123qwe # password

2. Modify the corresponding rdb mapping file under conf/rdb/ (the target here is a MySQL database):

 

dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  database: mytest
  table: hz_user
  targetTable: mytest.hz_user
  targetPk:
    id: uid
  mapAll: true
  #targetColumns:
  # uid:
  # nickname:
  # email:
  # phone:
  # status:
  # create_uid:
  # create_time:
  # update_time:
  # login_name:
  # password:
  # ip:
  # salt:

   # id:
   # name:
   # role_id:
   # c_time:
   # test1:
 # etlCondition: "where c_time>={}"
 # commitBatch: 3000 # batch commit size


## Mirror schema synchronize config
#dataSourceKey: defaultDS
#destination: example
#groupId: g1
#outerAdapterKey: mysql1
#concurrent: true
#dbMapping:
#  mirrorDb: true
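
Start the adapter and watch its log, then run a quick end-to-end check. A sketch assuming the default 1.1.4 package layout; the uid and nickname columns are taken from the commented targetColumns list above and may not match the real hz_user schema:

sh bin/startup.sh
tail -f logs/adapter/adapter.log

-- on the source database (10.3.30.6, schema mytest):
INSERT INTO hz_user (uid, nickname) VALUES (1001, 'canal-test');

-- on the target database (10.1.4.2, schema mytest): the row should appear shortly afterwards
SELECT * FROM hz_user WHERE uid = 1001;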

   Execution log:

Done. For parameters not explained here, refer to the official Canal documentation.
