SESSION_INFO in the V$LOGMNR_CONTENTS view is NULL.

This document addresses missing USERNAME and SESSION_INFO values when using LogMiner with an Oracle database. If supplemental logging was not active when the redo was generated, LogMiner cannot obtain the required information. Make sure at least minimal supplemental logging is enabled, and check the COMPATIBLE and TRANSACTION_AUDITING parameters.

Column USERNAME And SESSION_INFO Are UNKNOWN Or NULL In V$LOGMNR_CONTENTS (Doc ID 110301.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 8.1.7.0 and later
Information in this document applies to any platform.
***Checked for relevance on 29-Jul-2013***

SYMPTOMS

You have just built the LogMiner dictionary and started the LogMiner session. You query V$LOGMNR_CONTENTS expecting to see the USERNAME and SESSION_INFO for some particular redo or undo operation.

However, the USERNAME field is NULL. This is NOT a recursive operation, and the USERNAME column is expected to contain a non-null value.
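A query of the following shape reproduces the symptom (a sketch; the SCOTT schema filter is illustrative, not from the original note):

```sql
-- For ordinary DML mined by LogMiner, USERNAME and SESSION_INFO come
-- back NULL or UNKNOWN when any of the causes below applies.
SELECT scn, username, session_info, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT'
   AND operation IN ('INSERT', 'UPDATE', 'DELETE');
```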

CAUSE

  1. If supplemental logging was not active at the time the redo records were created, LogMiner cannot obtain all of the required information. The Oracle Database Utilities manual states:
    By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. Therefore, you must enable at least minimal supplemental logging prior to generating log files which will be analyzed by LogMiner.
    So, we have to enable supplemental logging by using a SQL statement similar to the following:
    SQL> CONNECT / AS SYSDBA
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    Then the information necessary to populate the USERNAME and SESSION_INFO columns will be stored in the redo stream.

  2. The redo stream does not contain the USERNAME and SESSION_INFO data for every transaction. This information is only stored for the first transaction executed in the user's session. So in order to be able to see this information in V$LOGMNR_CONTENTS, all the redo generated during the entire session must be added to the mining session. Should this not be done, then the USERNAME and SESSION_INFO columns will remain empty.

  3. LogMiner was first available in Oracle8i. If the COMPATIBLE instance parameter is set to a value lower than 8.1.0 you will not have access to its full functionality.

  4. In Oracle9i and lower releases of Oracle, the TRANSACTION_AUDITING instance parameter is set to TRUE by default. This causes the generation of a redo record containing the user logon name, username, session ID, and some operating system and client information. For each successive transaction in the session, Oracle will store only the session ID. These session IDs are linked back to the first record to retrieve user and session information.

    When TRANSACTION_AUDITING is set to FALSE, this redo record is not written and the user information is not available to LogMiner.
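The first cause can be checked and corrected with statements like the following (a sketch; the PK/UI columns exist in 9.2 and later, so adjust per version):

```sql
-- Check which supplemental logging levels are currently enabled.
SELECT supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
  FROM v$database;

-- Enable minimal supplemental logging. Only redo generated AFTER this
-- statement carries the extra information, so switch the log file to
-- start a fresh one.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;
```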

SOLUTION

This behavior can result from your database parameter settings, or from the way the redo logs are added to the LogMiner session.

  1. Ensure that at least minimal supplemental logging was enabled at the time the redo information was generated:
    SQL> SELECT name, supplemental_log_data_min FROM v$database;

    NAME                           SUPPLEME
    ------------------------------ --------
    M10202WA                       YES
  2. Ensure that all archive redo logs containing the necessary redo information have been added to the LogMiner session.

  3. Ensure that the COMPATIBLE initialization parameter is set to 8.1.0 or higher.
    SQL> show parameter compatible

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------
    compatible                           string      10.2.0.2.0
  4. For Oracle8i and Oracle9i only: ensure that the TRANSACTION_AUDITING instance parameter is set to TRUE (default).
    SQL> show parameter transaction_auditing

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------
    transaction_auditing                 boolean     TRUE
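Putting the steps above together, a minimal mining session looks like this (a sketch: the file names are placeholders, and DICT_FROM_ONLINE_CATALOG requires 9i or later; on 8i use a dictionary file instead). Add every log file that covers the user's session, not just the one containing the statement of interest:

```sql
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/arch/1_100.dbf', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/arch/1_101.dbf', DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

SELECT scn, username, session_info, sql_redo
  FROM v$logmnr_contents
 WHERE operation IN ('INSERT', 'UPDATE', 'DELETE');

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```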

END;
2025-06-05 13:51:09.307 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 13:51:09.308 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914757418 AND NEXT_CHANGE# > 7914757418
2025-06-05 13:51:09.333 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/product/12.2.0/dbhome_1/dbs/arch/1_74286_1100144861.dbf',options=>dbms_logmnr.new); END;
2025-06-05 13:51:09.352 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914757418',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 13:51:09.362 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully! 当前分析模式: archive
2025-06-05 13:51:09.732 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 本轮归档日志已分析结束, 结束本次会话, 开始下次分析, 开始点位: 7914757422
2025-06-05 13:51:09.732 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.end_logmnr; END;
2025-06-05 13:51:09.733 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - end_logmnr succeeded !
2025-06-05 13:51:09.733 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN DBMS_SESSION.SET_SQL_TRACE(FALSE); END;
2025-06-05 13:51:09.734 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 13:51:09.734 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914757422 AND NEXT_CHANGE# > 7914757422
2025-06-05 13:51:09.750 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析在线日志...
2025-06-05 13:51:09.768 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT f.member,g.group#,g.FIRST_CHANGE# FROM v$log g left join v$logfile f on g.group# = f.group# where g.FIRST_CHANGE# <= 7914757422 AND g.NEXT_CHANGE# > 7914757422
2025-06-05 13:51:09.768 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/oradata/orcl/redo02.log',options=>dbms_logmnr.new); END;
2025-06-05 13:51:09.772 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914757422',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 13:51:09.786 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully! 当前分析模式: online
2025-06-05 13:54:02.535 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.end_logmnr; END;
2025-06-05 13:54:02.537 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - end_logmnr succeeded !
2025-06-05 13:54:02.537 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN DBMS_SESSION.SET_SQL_TRACE(FALSE); END;
2025-06-05 13:54:02.538 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 13:54:02.538 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914822313 AND NEXT_CHANGE# > 7914822313
2025-06-05 13:54:02.555 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/product/12.2.0/dbhome_1/dbs/arch/1_74287_1100144861.dbf',options=>dbms_logmnr.new); END;
2025-06-05 13:54:02.560 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914822313',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 13:54:02.569 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully! 当前分析模式: archive
2025-06-05 13:54:02.980 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 本轮归档日志已分析结束, 结束本次会话, 开始下次分析, 开始点位: 7914822317
2025-06-05 13:54:02.980 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.end_logmnr; END;
2025-06-05 13:54:02.982 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - end_logmnr succeeded !
2025-06-05 13:54:02.982 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN DBMS_SESSION.SET_SQL_TRACE(FALSE); END;
2025-06-05 13:54:02.983 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 13:54:02.983 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914822317 AND NEXT_CHANGE# > 7914822317
2025-06-05 13:54:03.010 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析在线日志...
2025-06-05 13:54:03.040 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT f.member,g.group#,g.FIRST_CHANGE# FROM v$log g left join v$logfile f on g.group# = f.group# where g.FIRST_CHANGE# <= 7914822317 AND g.NEXT_CHANGE# > 7914822317
2025-06-05 13:54:03.041 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/oradata/orcl/redo03.log',options=>dbms_logmnr.new); END;
2025-06-05 13:54:03.045 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914822317',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 13:54:03.054 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully! 当前分析模式: online
2025-06-05 14:00:13.564 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.end_logmnr; END;
2025-06-05 14:00:13.570 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - end_logmnr succeeded !
2025-06-05 14:00:13.570 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN DBMS_SESSION.SET_SQL_TRACE(FALSE); END;
2025-06-05 14:00:13.571 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 14:00:13.571 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914891702 AND NEXT_CHANGE# > 7914891702
2025-06-05 14:00:13.587 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/product/12.2.0/dbhome_1/dbs/arch/1_74288_1100144861.dbf',options=>dbms_logmnr.new); END;
2025-06-05 14:00:13.600 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914891702',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 14:00:13.608 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully! 当前分析模式: archive
2025-06-05 14:00:13.971 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 本轮归档日志已分析结束, 结束本次会话, 开始下次分析, 开始点位: 7914891706
2025-06-05 14:00:13.971 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.end_logmnr; END;
2025-06-05 14:00:13.973 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - end_logmnr succeeded !
2025-06-05 14:00:13.973 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN DBMS_SESSION.SET_SQL_TRACE(FALSE); END;
2025-06-05 14:00:13.974 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析归档日志...
2025-06-05 14:00:13.974 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# <= 7914891706 AND NEXT_CHANGE# > 7914891706
2025-06-05 14:00:14.005 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - 分析在线日志...
2025-06-05 14:00:14.014 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - SELECT f.member,g.group#,g.FIRST_CHANGE# FROM v$log g left join v$logfile f on g.group# = f.group# where g.FIRST_CHANGE# <= 7914891706 AND g.NEXT_CHANGE# > 7914891706
2025-06-05 14:00:14.014 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.add_logfile(logfilename=>'/DATA/app/oradata/orcl/redo01.log',options=>dbms_logmnr.new); END;
2025-06-05 14:00:14.018 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - BEGIN dbms_logmnr.start_logmnr(startScn=>'7914891706',options=> dbms_logmnr.dict_from_online_catalog + dbms_logmnr.skip_corruption + dbms_logmnr.no_sql_delimiter + dbms_logmnr.no_rowid_in_stmt + dbms_logmnr.string_literals_in_stmt) ; END;
2025-06-05 14:00:14.043 [pool-34-thread-1] WARN c.a.o.c.server.embedded.handle.oracle.OracleHandlerV3 - dbms_logmnr successfully!
当前分析模式: online

That is the log output from the run. The code is below:

package com.alibaba.otter.canal.server.embedded.handle.oracle;

import com.alibaba.otter.canal.instance.core.CanalInstance;
import com.alibaba.otter.canal.meta.FileMixedMetaManager;
import com.alibaba.otter.canal.parse.CanalEventParser;
import com.alibaba.otter.canal.parse.inbound.mysql.MysqlEventParser;
import com.alibaba.otter.canal.parse.support.AuthenticationInfo;
import com.alibaba.otter.canal.protocol.FlatMessage;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.server.embedded.handle.Constant;
import com.alibaba.otter.canal.server.embedded.handle.NotMysqlHandler;
import com.alibaba.otter.canal.server.exception.CanalServerException;
import com.alibaba.otter.canal.utils.OracleUtil;
import com.google.common.collect.Lists;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.CollectionUtils;

import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.sql.*;
import java.text.SimpleDateFormat;
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;

/**
 * @author : yangpeng
 * create at: 2022/5/21 13:09
 * @description: Oracle connection implementation
 */
@Deprecated
public class OracleHandlerV3 implements NotMysqlHandler {

    private static final Logger logger = LoggerFactory.getLogger(OracleHandler.class);
    private final static String ORACLE_DRIVER_URL = "jdbc:oracle:thin:@";
    private final static String DATA_DICTIONARY_PATH = "/oracle/oradata/orcl/logmnr";
    private Connection connection;
    private AuthenticationInfo authInfo;
    private AtomicLong currentScn = new AtomicLong(0);
    private String queryLogSql;
    private Map<String, List<String>> pkMaps;
    private List<String> whiteList;
    private String currentRedoGroups;
    private boolean isSwitch = true;
    private int pageSize = 500;

    static {
        // load the JDBC driver
        try {
            Class.forName("oracle.jdbc.OracleDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void subscribe(CanalInstance canalInstance) {
        // basic database information
        CanalEventParser eventParser = canalInstance.getEventParser();
        MysqlEventParser mysqlEventParser = (MysqlEventParser) eventParser;
        this.authInfo = mysqlEventParser.getMasterInfo();
        connect();
        String filterRegex = authInfo.getFilterRegex();
        if (!org.springframework.util.StringUtils.isEmpty(filterRegex) && !filterRegex.endsWith(".*")) {
            //whiteList = Arrays.asList(filterRegex.split(","));
            whiteList = new ArrayList<>();
            for (String table : filterRegex.split(",")) {
                whiteList.add(table.toUpperCase());
            }
        }
        String index = FileMixedMetaManager.getScnLocal(canalInstance.getDestination());
        if ("0".equals(index)) {
            String scn = authInfo.getTimestamp();
            if (StringUtils.isEmpty(scn)) scn = getStartPosition();
            try {
                this.currentScn.set(Long.valueOf(scn));
                FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), Long.valueOf(scn));
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            this.currentScn.set(Long.valueOf(index));
        }
        //queryCurrentArchivedLogInfo();
        currentRedoGroups = queryCurrentRedoGroups();
        authInfo.setArchived(true);
        updateSettings();
        // fetch primary keys
        pkMaps = new HashMap<>();
        for (String table : whiteList) {
            List<String> pkNames = getPkNames(authInfo.getDefaultDatabaseName(), table);
            //pkMaps.put(authInfo.getDefaultDatabaseName()+"."+table, pkNames);
            String key = (authInfo.getDefaultDatabaseName() + "." + table).toUpperCase();
            pkMaps.put(key, pkNames != null ? pkNames : Collections.emptyList());
        }
    }

    private String queryCurrentRedoGroups() {
        // get the group info for the redo log file currently in use
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            statement = connection.createStatement();
            resultSet = statement.executeQuery("SELECT GROUP# FROM V$LOG WHERE STATUS = 'CURRENT'");
            StringBuilder sb = new StringBuilder();
            while (resultSet.next()) {
                sb.append(resultSet.getString(1));
            }
            if (sb.length() > 0) {
                return sb.toString();
            }
        } catch (SQLException e) {
            logger.warn("Query Current Redo Group failed!");
        } finally {
            close(statement, resultSet);
        }
        return "";
    }

    // private void queryCurrentArchivedLogInfo(){
    //     // get the latest archived log information
    //     Statement statement = null;
    //     ResultSet resultSet = null;
    //     try {
    //         statement = connection.createStatement();
    //         resultSet = statement.executeQuery("SELECT NAME, RECID FROM " +
    //                 "(SELECT NAME, RECID FROM v$archived_log WHERE DEST_ID = 1 ORDER BY RECID DESC) WHERE ROWNUM = 1");
    //         if (resultSet.next()) {
    //             //lastArchivedLogName = resultSet.getString(1);
    //             //lastArchivedLogRecid = resultSet.getLong(2);
    //             //logger.warn("1 - > lastArchivedLogName: " + lastArchivedLogName);
    //         }
    //     } catch (SQLException e) {
    //         logger.warn("Query Current Archived Log failed!");
    //     } finally {
    //         close(statement, resultSet);
    //     }
    // }

    private void isArchived(Statement statement, ResultSet resultSet, ArrayList<String> logFiles) {
        if (authInfo.isArchived()) {
            try {
                logger.warn("分析归档日志...");
                statement = connection.createStatement();
                String sql = "SELECT NAME, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND FIRST_CHANGE# >= " + currentScn + " ORDER BY RECID ASC";
                resultSet = statement.executeQuery(sql);
                String NAME;
                while (resultSet.next()) {
                    NAME = resultSet.getString(1);
                    // add the archived log
                    logFiles.add(NAME);
                }
            } catch (SQLException e) {
                logger.warn("Archive log parsing failed!");
                e.printStackTrace();
                StringWriter sw = new StringWriter();
                PrintWriter pw = new PrintWriter(sw);
                // write the error stack trace into the PrintWriter
                e.printStackTrace(pw);
                pw.flush();
                sw.flush();
                try {
                    sw.close();
                } catch (IOException ex) {
                    throw new RuntimeException(ex);
                }
                pw.close();
            } finally {
                close(statement, resultSet);
            }
        }
    }

    private void addLogfile(Statement statement, ResultSet resultSet, ArrayList<String> logFiles) {
        if (CollectionUtils.isEmpty(logFiles)) {
            try {
                logger.warn("分析在线日志...");
                statement = connection.createStatement();
                resultSet = statement.executeQuery("select member from v$logfile");
                while (resultSet.next()) {
                    String fileName = resultSet.getString(1);
                    if (!fileName.contains("sredo")) {
                        logFiles.add(fileName);
                    }
                }
            } catch (SQLException e) {
                logger.warn("Description Failed to query online logs!");
            } finally {
                close(statement, resultSet);
            }
        }
    }

    private void setLogAnalyseDir() {
        String createDictSql = "BEGIN dbms_logmnr_d.build(dictionary_filename =>'" + authInfo.getOraName()
                + "', dictionary_location =>'" + DATA_DICTIONARY_PATH + "'); END;";
        logger.warn(createDictSql);
        getCallableStatement(createDictSql);
    }

    private void addAllfile(ArrayList<String> logFiles) {
        StringBuilder sbSQL = new StringBuilder();
        for (int i = 0; i < logFiles.size(); ++i) {
            if (i == 0) {
                sbSQL.append("BEGIN\n");
            }
            sbSQL.append("dbms_logmnr.add_logfile(logfilename=>'").append(logFiles.get(i)).append("',options=>dbms_logmnr.")
                    .append(i == 0 ? "new" : "addfile").append(");\n");
        }
        logFiles.clear();
        sbSQL.append("END;\n");
        logger.warn(sbSQL.toString());
        getCallableStatement(sbSQL.toString());
    }

    private void appendSql() {
        // build the LogMiner view query
        StringBuilder queryStr = new StringBuilder();
        queryStr.append("SELECT scn,operation,sql_redo,table_space,table_name,timestamp,csf FROM v$logmnr_contents where 1=1");
        queryStr.append(" AND seg_type_name='TABLE'");
        queryStr.append(" AND table_space!='SYSTEM'");
        String filter = authInfo.getFilter();
        if (filter.contains(".*") && !".*..*".equals(filter)) {
            queryStr.append(" AND seg_owner='").append(authInfo.getDefaultDatabaseName()).append("'");
        }
        if (!filter.contains(".*") && !".*..*".equals(filter)) {
            queryStr.append(" AND seg_owner='").append(authInfo.getDefaultDatabaseName()).append("'");
            queryStr.append(" AND table_name in(").append(authInfo.getTable()).append(")");
        }
        queryStr.append(" AND scn > " + "?");
        queryStr.append(" AND rownum < ").append(pageSize);
        queryStr.append(" order by scn asc ");
        queryStr.append(" AND operation IN ('INSERT','UPDATE','DELETE')");
        this.queryLogSql = queryStr.toString();
    }

    public void updateSettings() {
        ArrayList<String> logFiles = new ArrayList<>();
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            // whether to analyze archived logs
            isArchived(statement, resultSet, logFiles);
            // analyze online logs
            addLogfile(statement, resultSet, logFiles);
            if (logFiles.isEmpty()) throw new RuntimeException("The Oracle log file was not read");
            // set the log-analysis dictionary directory
            setLogAnalyseDir();
            // add all log files
            addAllfile(logFiles);
            try {
                // start the LogMiner session
                String startsql = "BEGIN\n"
                        + "dbms_logmnr.start_logmnr(startScn=>'" + currentScn.get() // starting SCN
                        + "',dictfilename=>'" + DATA_DICTIONARY_PATH + "/" + authInfo.getOraName() // dictionary path
                        + "',options=>"
                        + " dbms_logmnr.primary_key"
                        + " +dbms_logmnr.skip_corruption"
                        + " + dbms_logmnr.no_sql_delimiter"
                        + " + dbms_logmnr.no_rowid_in_stmt"
                        + " + dbms_logmnr.committed_data_only"
                        + " + dbms_logmnr.string_literals_in_stmt) ;\n"
                        + "END;";
                logger.warn(startsql);
                getCallableStatement(startsql);
            } catch (Exception e) {
                logger.error("oracle logminer Unsubscribe failed:{}", e);
                // start the LogMiner session again from SCN 0
                String startsql = "BEGIN\n"
                        + "dbms_logmnr.start_logmnr(startScn=>'" + 0 // starting SCN
                        + "',dictfilename=>'" + DATA_DICTIONARY_PATH + "/" + authInfo.getOraName() // dictionary path
                        + "',options=>"
                        + " dbms_logmnr.primary_key"
                        + " +dbms_logmnr.skip_corruption"
                        + " + dbms_logmnr.no_sql_delimiter"
                        + " + dbms_logmnr.no_rowid_in_stmt"
                        + " + dbms_logmnr.committed_data_only"
                        + " + dbms_logmnr.string_literals_in_stmt) ;\n"
                        + "END;";
                logger.warn(startsql);
                getCallableStatement(startsql);
            }
            // build the LogMiner view query
            appendSql();
            logger.info("Subscribe to Oracle {} successfully!", authInfo.getAddress() + ":" + authInfo.getDefaultDatabaseName());
        } catch (Exception e) {
            throw new CanalServerException("Failed to subscribe to Oracle logs. Procedure:::", e);
        } finally {
            close(statement, resultSet);
        }
    }

    private void logSwitchVerify() {
        String crgs = queryCurrentRedoGroups();
        if (StringUtils.isNotEmpty(crgs) && !crgs.equals(currentRedoGroups)) {
            currentRedoGroups = crgs;
            unsubscribe();
            // when Oracle switches redo log groups, some records may not have been analyzed yet; analyze the most
            // recent archived log once to pick up anything missed, then resume analyzing the online logs
            authInfo.setArchived(true);
            isSwitch = true;
            updateSettings();
        }
    }

    @Override
    public synchronized Message getWithoutAck(CanalInstance canalInstance) {
        try {
            if (connection.isClosed()) {
                reconnect();
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
        logSwitchVerify();
        String findSql = this.queryLogSql.replace("?", currentScn.toString());
        logger.warn(findSql);
        List<FlatMessage> data = new ArrayList<>();
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            statement = connection.createStatement();
            resultSet = statement.executeQuery(findSql);
            // remember the previous position
            long lastScn = currentScn.get();
            Long endScn = null;
            while (resultSet.next()) {
                long scn = resultSet.getLong(Constant.scn);
                if (scn <= currentScn.get()) continue;
                String operation = resultSet.getString(Constant.operation);
                String sql = resultSet.getString(Constant.sql_redo);
                if (StringUtils.isEmpty(sql)) continue;
                // String tableSpace = resultSet.getString(Constant.table_space);
                String tableName = resultSet.getString(Constant.table_name);
                String timestamp = resultSet.getString(Constant.timestamp);
                SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                long es = sdf.parse(timestamp).getTime() / 1000;
                FlatMessage flatMessage = new FlatMessage();
                flatMessage.setDatabase(authInfo.getDefaultDatabaseName());
                flatMessage.setTable(tableName);
                flatMessage.setIsDdl(false);
                flatMessage.setType(operation);
                flatMessage.setEs(es);
                flatMessage.setTs(System.currentTimeMillis());
                flatMessage.setSql(sql);
                OracleUtil.toMqformat(flatMessage); // convert the SQL into the JSON format required by MQ
                // set primary keys
                //List<String> list = pkMaps.get(authInfo.getDefaultDatabaseName() + "." + tableName);
                String key = (authInfo.getDefaultDatabaseName() + "." + tableName).toUpperCase();
                List<String> list = pkMaps.get(key);
                if (list == null || list.isEmpty()) {
                    logger.warn("表 {} 无主键信息! 操作: {}, SQL: {}", tableName, operation, sql);
                } else {
                    logger.info("主键信息: table={}, pk={}", tableName, list);
                }
                if (!CollectionUtils.isEmpty(list)) {
                    flatMessage.setPkNames(list);
                } else {
                    flatMessage.setPkNames(new ArrayList<>());
                }
                data.add(flatMessage);
                endScn = scn;
            }
            if (!CollectionUtils.isEmpty(data)) {
                if (null != endScn) currentScn.set(endScn);
                FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), currentScn.get());
                return new Message(lastScn, data, Constant.NOT_MYSQL);
            }
        } catch (Exception e) {
            logger.error("oracle logminer Unsubscribe failed:{}", e);
            unsubscribe();
            updateSettings();
        } finally {
            close(statement, resultSet);
            if (isSwitch) {
                if (!CollectionUtils.isEmpty(data) && data.size() >= (pageSize - 1)) {
                    // data is non-empty and the row count reached the page size: one pass over the archived
                    // logs may not have fetched everything, so loop a few more times
                } else {
                    isSwitch = false;
                    authInfo.setArchived(false);
                    unsubscribe();
                    updateSettings();
                }
            }
        }
        return new Message(-1, true, null);
    }

    private String getStartPosition() {
        String sql = "SELECT MAX(CURRENT_SCN) CURRENT_SCN FROM GV$DATABASE";
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            statement = connection.createStatement();
            resultSet = statement.executeQuery(sql);
            while (resultSet.next()) {
                long scn = resultSet.getLong(1);
                return String.valueOf(scn);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            close(statement, resultSet);
        }
        return "0";
    }

    public List<String> getPkNames(String datasourceName, String tableName) {
        List<String> pkNames = Lists.newArrayList();
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            DatabaseMetaData metaData = connection.getMetaData();
            String tableNameUpper = tableName.toUpperCase();
            resultSet = metaData.getPrimaryKeys(datasourceName, null, tableNameUpper);
            while (resultSet.next()) {
                pkNames.add(resultSet.getString("COLUMN_NAME"));
            }
            return pkNames;
        } catch (Exception e) {
            logger.error(datasourceName + "." + tableName + "oracle get table primary key returns null", e);
            return Collections.emptyList();
        } finally {
            close(statement, resultSet);
        }
    }

    private void close(Statement statement, ResultSet resultSet) {
        try {
            if (resultSet != null) resultSet.close();
        } catch (SQLException e1) {
            e1.printStackTrace();
        } finally {
            try {
                if (statement != null) statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void connect() {
        String jdbcUrl = ORACLE_DRIVER_URL + authInfo.getAddress().getHostString() + ":"
                + authInfo.getAddress().getPort() + ":" + authInfo.getServerName();
        try {
            this.connection = DriverManager.getConnection(jdbcUrl, authInfo.getUsername(), authInfo.getPassword());
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void reconnect() {
        disconnect();
        connect();
    }

    @Override
    public void disconnect() {
        if (connection != null) {
            try {
                unsubscribe();
                connection.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void ack(CanalInstance canalInstance) {
        return;
    }

    @Override
    public boolean checkConsistent(CanalInstance canalInstance) {
        CanalEventParser eventParser = canalInstance.getEventParser();
        MysqlEventParser parser = (MysqlEventParser) eventParser;
        AuthenticationInfo newAuthInfo = parser.getMasterInfo();
        String newUrl = newAuthInfo.getAddress().getHostName() + newAuthInfo.getAddress().getPort();
        String oldUrl = this.authInfo.getAddress().getHostName() + this.authInfo.getAddress().getPort();
        if (Objects.equals(newUrl, oldUrl)) {
            return true;
        }
        return false;
    }

    @Override // roll back to the specified position
    public void rollback(CanalInstance canalInstance, Long batchId) {
        try {
            FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), batchId);
            currentScn.set(batchId);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void getCallableStatement(String sql) {
        CallableStatement callableStatement = null;
        try {
            callableStatement = connection.prepareCall(sql);
            callableStatement.execute();
        } catch (SQLException e) {
            logger.error(e.getMessage(), e);
            e.printStackTrace();
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            // write the error stack trace into the PrintWriter
            e.printStackTrace(pw);
            pw.flush();
            sw.flush();
            try {
                sw.close();
            } catch (IOException ex) {
                throw new RuntimeException(ex);
            }
            pw.close();
            if (e.getErrorCode() == 1291) {
                throw new CanalServerException("1291");
            }
        } finally {
            close(callableStatement, null);
        }
    }

    public void getStatement(String sql) {
        try (Statement statement = connection.createStatement()) {
            statement.execute(sql);
        } catch (SQLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    /**
     * description: close the query session
     *
     * @param {}
     * @return void
     */
    public void unsubscribe() {
        getStatement("BEGIN\n" + "dbms_logmnr.end_logmnr;\n" + "END;");
        logger.warn("Unsubscribing succeeded !");
    }
}

How can I add log statements to this code to verify whether the SQL parsed from the Oracle archived logs is correct, especially for UPDATE and INSERT, in order to find out why incremental data is not arriving?
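One low-risk way to trace this is to log every mined row inside the `getWithoutAck` result loop *before* the `scn <= currentScn` and empty-`sql_redo` filters drop it. The helper below is a hedged sketch (the class name `LogMinerDiag` and both method names are hypothetical, not part of canal); the handler would call `logger.warn(LogMinerDiag.describe(scn, operation, tableName, sql))` right after reading the columns, and log `skipReason(...)` whenever it is non-null:

```java
public class LogMinerDiag {

    /** Formats one V$LOGMNR_CONTENTS row as a single diagnostic line. */
    public static String describe(long scn, String operation, String tableName, String sqlRedo) {
        String sql = (sqlRedo == null) ? "<null>" : sqlRedo.trim();
        return String.format("MINED scn=%d op=%s table=%s sql=%s", scn, operation, tableName, sql);
    }

    /** Returns why a row would be skipped by the handler's filters, or null if it would be kept. */
    public static String skipReason(long scn, long currentScn, String sqlRedo) {
        if (scn <= currentScn) {
            return "scn " + scn + " <= currentScn " + currentScn;
        }
        if (sqlRedo == null || sqlRedo.isEmpty()) {
            return "empty sql_redo at scn " + scn;
        }
        return null; // row would be processed
    }

    public static void main(String[] args) {
        System.out.println(describe(7914757425L, "INSERT", "T_ORDER", "insert into T_ORDER (ID) values (1)"));
        System.out.println(skipReason(7914757420L, 7914757422L, "update T_ORDER set ..."));
    }
}
```

Rows that never appear in these traces were excluded by the V$LOGMNR_CONTENTS query itself (the `seg_type_name`/`table_space` predicates, the `rownum` page limit, or, with `committed_data_only`, transactions that had not committed inside the mined files), so it also helps to log the final query text and the row count returned per batch.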
06-06
package com.alibaba.otter.canal.server.embedded.handle.oracle;

import com.alibaba.otter.canal.instance.core.CanalInstance;
import com.alibaba.otter.canal.meta.FileMixedMetaManager;
import com.alibaba.otter.canal.parse.CanalEventParser;
import com.alibaba.otter.canal.parse.inbound.mysql.MysqlEventParser;
import com.alibaba.otter.canal.parse.support.AuthenticationInfo;
import com.alibaba.otter.canal.protocol.FlatMessage;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.server.embedded.handle.Constant;
import com.alibaba.otter.canal.server.embedded.handle.NotMysqlHandler;
import com.alibaba.otter.canal.server.exception.CanalServerException;
import com.alibaba.otter.canal.utils.OracleUtil;
import com.google.common.collect.Lists;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.CollectionUtils;

import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.sql.*;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;

/**
 * @author : yangpeng
 * create at: 2022/5/21 13:09
 * @description: Oracle connection implementation
 */
public class OracleHandler implements NotMysqlHandler {

    private static final Logger logger = LoggerFactory.getLogger(OracleHandler.class);
    private final static String ORACLE_DRIVER_URL = "jdbc:oracle:thin:@";
    private final static String DATA_DICTIONARY_PATH = "/oracle/oradata/orcl/logmnr";

    private Connection connection;
    private AuthenticationInfo authInfo;
    private AtomicLong currentScn = new AtomicLong(0);
    private Map<String, List<String>> pkMaps;
    private List<String> whiteList;
    // Last position of the current archived log file minus 1; start position of the next file
    private Long nextChange = null;
    private Long firstChange = null;
    private String currentModel;
    private String currentRedoGroups;
    private String lastRedoGroups;
    private String opt = ">=";
    private List<String> action = new ArrayList<>();

    static {
        // Load the JDBC driver
        try {
            Class.forName("oracle.jdbc.OracleDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void subscribe(CanalInstance canalInstance) {
        // Basic database information
        CanalEventParser eventParser = canalInstance.getEventParser();
        MysqlEventParser mysqlEventParser = (MysqlEventParser) eventParser;
        this.authInfo = mysqlEventParser.getMasterInfo();
        connect();
        String filterRegex = authInfo.getFilterRegex();
        if (!org.springframework.util.StringUtils.isEmpty(filterRegex) && !filterRegex.endsWith(".*")) {
            whiteList = Arrays.asList(filterRegex.split(","));
        }
        logger.warn("Subscribing instance: {}, filter rule: {}, initial SCN: {}",
                canalInstance.getDestination(), filterRegex, currentScn.get()); // added logging
        String index = FileMixedMetaManager.getScnLocal(canalInstance.getDestination());
        // On first start the index in the file is 0. For full + incremental sync, the handover
        // position is used as the start position; for incremental-only sync the handover position
        // may be empty, in which case the latest system SCN is used as the start position.
        if ("0".equals(index)) {
            String scn = authInfo.getTimestamp();
            if (StringUtils.isEmpty(scn)) scn = getStartPosition();
            try {
                this.currentScn.set(Long.valueOf(scn));
                FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), Long.valueOf(scn));
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            // A non-zero position in index.scn means the instance was stopped, or was deleted and
            // rebuilt. On restart, compare the file position with the handover position:
            // - handover > file position: the instance was rebuilt; start incremental sync from
            //   the latest handover position;
            // - handover <= file position: the instance was paused and restarted; continue
            //   incremental sync from the file position.
            String scn = authInfo.getTimestamp();
            if (!StringUtils.isEmpty(scn) && Long.valueOf(scn) > Long.valueOf(index)) {
                try {
                    this.currentScn.set(Long.valueOf(scn));
                    FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), Long.valueOf(scn));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                opt = ">";
                this.currentScn.set(Long.valueOf(index));
            }
        }
        if (StringUtils.isEmpty(authInfo.getAction())) {
            action.add("INSERT");
            action.add("UPDATE");
            action.add("DELETE");
        } else {
            action = Arrays.asList(authInfo.getAction().split(","));
        }
        logger.warn("Start position for analysis: " + this.currentScn.get());
        currentRedoGroups = queryCurrentRedoGroups();
        lastRedoGroups = currentRedoGroups;
        queryLastArchivedLogScn();
        authInfo.setArchived(true);
        updateSettings();
        // Fetch primary keys
        pkMaps = new HashMap<>();
        // NOTE: throws NullPointerException if no filter rule was configured (whiteList stays null)
        for (String table : whiteList) {
            List<String> pkNames = getPkNames(authInfo.getDefaultDatabaseName(), table);
            pkMaps.put(authInfo.getDefaultDatabaseName() + "." + table, pkNames);
        }
    }

    private String queryCurrentRedoGroups() {
        // Fetch the group information of the redo log file currently in use
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            statement = connection.createStatement();
            resultSet = statement.executeQuery("SELECT GROUP# FROM V$LOG WHERE STATUS = 'CURRENT'");
            resultSet.next();
            return resultSet.getString(1);
        } catch (SQLException e) {
            logger.warn("Query Current Redo Group failed!");
        } finally {
            close(statement, resultSet);
        }
        return "";
    }

    private void queryLastArchivedLogScn() {
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            logger.warn("Fetching the start and end positions of the last archived log...");
            statement = connection.createStatement();
            String sql = "SELECT FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 ORDER BY RECID DESC";
            logger.warn(sql);
            resultSet = statement.executeQuery(sql);
            if (resultSet.next()) {
                firstChange = resultSet.getLong(1);
                nextChange = resultSet.getLong(2);
            }
        } catch (Exception e) {
            logger.warn("query last one Archive log startScn and endScn failed!");
        } finally {
            close(statement, resultSet);
        }
    }

    private void isArchived(ArrayList<String> logFiles) {
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            logger.warn("Analyzing archived logs...");
            currentModel = "archive";
            statement = connection.createStatement();
            String sql = "SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE# FROM v$archived_log WHERE DEST_ID = 1 AND "
                    + "FIRST_CHANGE# <= " + currentScn.get() + " AND NEXT_CHANGE# > " + currentScn.get();
            logger.warn(sql);
            logger.warn("### [ARCHIVE] Querying archived logs: {}", sql);
            resultSet = statement.executeQuery(sql);
            logger.warn("Analyzing archived logs, current SCN: {}, condition: FIRST_CHANGE# <= {} AND NEXT_CHANGE# > {}",
                    currentScn.get(), currentScn.get(), currentScn.get()); // added logging
            if (resultSet.next()) {
                String NAME = resultSet.getString(1);
                firstChange = resultSet.getLong(2);
                nextChange = resultSet.getLong(3);
                logger.warn("Found archived log file: {}, SCN range: [{}, {})", NAME, firstChange, nextChange); // added logging
                // Add the archived log
                logFiles.add(NAME);
                if (!Objects.equals(currentRedoGroups, lastRedoGroups)) {
                    lastRedoGroups = currentRedoGroups;
                }
            }
            if (CollectionUtils.isEmpty(logFiles) && !Objects.equals(currentRedoGroups, lastRedoGroups)) {
                int num = 0;
                while (true) {
                    resultSet = statement.executeQuery(sql);
                    if (resultSet.next()) {
                        String NAME = resultSet.getString(1);
                        firstChange = resultSet.getLong(2);
                        nextChange = resultSet.getLong(3);
                        // Add the archived log
                        logFiles.add(NAME);
                    }
                    // Online logs are written to the archive asynchronously; right after a log switch
                    // the archived file may not have landed yet. If no log file was found, block and
                    // retry for at most 60 seconds.
                    if (CollectionUtils.isEmpty(logFiles)) {
                        Thread.sleep(1000L);
                        num++;
                        logger.warn("Online log has not finished archiving; thread has waited " + num + " second(s)");
                        if (num >= 60) {
                            logger.warn("Online log archiving did not finish within 60 seconds; giving up on this round of "
                                    + "archive analysis. Data may be lost; the loss may occur at position: " + currentScn.get());
                            break;
                        }
                    } else {
                        lastRedoGroups = currentRedoGroups;
                        break;
                    }
                }
            }
        } catch (Exception e) {
            logger.warn("Archive log parsing failed!");
        } finally {
            close(statement, resultSet);
        }
    }

    private void addLogfile(ArrayList<String> logFiles) {
        if (CollectionUtils.isEmpty(logFiles)) {
            Statement statement = null;
            ResultSet resultSet = null;
            try {
                logger.warn("Analyzing online logs...");
                currentModel = "online";
                statement = connection.createStatement();
                // Before analyzing, check whether the log group changed; if so, start from the archive
                String group = queryCurrentRedoGroups();
                if (!currentRedoGroups.equals(group)) {
                    currentRedoGroups = group;
                    isArchived(logFiles);
                } else {
                    String sql = "SELECT f.member,g.group#,g.FIRST_CHANGE# FROM v$log g left join v$logfile f on g.group# = f.group# "
                            + "where g.FIRST_CHANGE# <= " + currentScn.get() + " AND g.NEXT_CHANGE# > " + currentScn.get();
                    resultSet = statement.executeQuery(sql);
                    logger.warn(sql);
                    logger.warn("### [ONLINE] Querying online logs: {}", sql);
                    while (resultSet.next()) {
                        String fileName = resultSet.getString(1);
                        if (!fileName.contains("sredo")) {
                            logFiles.add(fileName);
                            firstChange = resultSet.getLong(3);
                        }
                    }
                }
            } catch (SQLException e) {
                logger.warn("Failed to query online logs!");
            } finally {
                close(statement, resultSet);
            }
        }
    }

    private void offsetLogfile(ArrayList<String> logFiles) {
        if (CollectionUtils.isEmpty(logFiles)) {
            Statement statement = null;
            ResultSet resultSet = null;
            try {
                logger.warn("Compensation step: analyzing the latest online log...");
                currentModel = "online";
                statement = connection.createStatement();
                String sql = "SELECT f.member,g.group#,g.FIRST_CHANGE# FROM v$log g left join v$logfile f on g.group# = f.group# where g.status = 'CURRENT'";
                resultSet = statement.executeQuery(sql);
                logger.warn(sql);
                while (resultSet.next()) {
                    String fileName = resultSet.getString(1);
                    if (!fileName.contains("sredo")) {
                        logFiles.add(fileName);
                        firstChange = resultSet.getLong(3);
                    }
                }
                if (!CollectionUtils.isEmpty(logFiles)) {
                    currentScn.set(firstChange);
                    opt = ">=";
                }
            } catch (SQLException e) {
                logger.warn("Failed to query online logs!");
            } finally {
                close(statement, resultSet);
            }
        }
    }

    private void setSqlTrace() {
        String setSqlTraceSql = "BEGIN\n"
                + " DBMS_SESSION.SET_SQL_TRACE(FALSE);\n"
                + " END;";
        logger.warn(setSqlTraceSql);
        getCallableStatement(setSqlTraceSql);
    }

    private void setLogAnalyseDir() {
        String createDictSql = "BEGIN dbms_logmnr_d.build(dictionary_filename =>'" + authInfo.getOraName()
                + "', dictionary_location =>'" + DATA_DICTIONARY_PATH + "'); END;";
        logger.warn(createDictSql);
        getCallableStatement(createDictSql);
    }

    private void addAllfile(ArrayList<String> logFiles) {
        logger.warn("Adding LogMiner log files: {}", logFiles); // added logging
        StringBuilder sbSQL = new StringBuilder();
        // LogMiner analyzes at most 3 files at a time
        for (int i = 0; i < logFiles.size(); i++) {
            if (i == 0) {
                sbSQL.append("BEGIN\n");
            }
            sbSQL.append("dbms_logmnr.add_logfile(logfilename=>'").append(logFiles.get(i))
                    .append("',options=>dbms_logmnr.").append(i == 0 ? "new" : "addfile").append(");\n");
        }
        logFiles.clear();
        sbSQL.append("END;\n");
        logger.warn(sbSQL.toString());
        getCallableStatement(sbSQL.toString());
    }

    public void updateSettings() {
        ArrayList<String> logFiles = new ArrayList<>();
        // Disable SQL tracing for the current session
        setSqlTrace();
        // Set the log analysis directory
        // setLogAnalyseDir();
        // Decide whether to analyze archived logs
        isArchived(logFiles);
        // Analyze online logs
        addLogfile(logFiles);
        // Neither the archive nor the online logs contain a file matching the position. After a
        // long shutdown the archived files may have been moved or deleted; fall back to the start
        // position of the latest online log file.
        if (logFiles.isEmpty()) offsetLogfile(logFiles);
        // Archive lost and no online log: the database may have a fault
        if (logFiles.isEmpty()) throw new RuntimeException("The Oracle log file was not read");
        // Add all log files
        addAllfile(logFiles);
        logger.warn("### [LOGMINER-CFG] mode: {}, start SCN: {}, loaded files: {}",
                currentModel, currentScn.get(), logFiles);
        try {
            // Start the LogMiner session
            String startsql = "BEGIN\n"
                    + "dbms_logmnr.start_logmnr(startScn=>'" + currentScn.get() + "'"   // starting SCN
                    // + ",dictfilename=>'" + DATA_DICTIONARY_PATH + "/" + authInfo.getOraName() + "'" // dictionary path
                    + ",options=>"
                    + " dbms_logmnr.dict_from_online_catalog"       // use the online data dictionary for metadata
                    + " + dbms_logmnr.skip_corruption"              // skip corrupted log blocks
                    + " + dbms_logmnr.no_sql_delimiter"             // drop the SQL delimiter
                    + " + dbms_logmnr.no_rowid_in_stmt"             // drop ROWID from the SQL
                    // + " + dbms_logmnr.committed_data_only"       // drop data from uncommitted transactions
                    + " + dbms_logmnr.string_literals_in_stmt);\n"  // emit string literals in the SQL
                    + "END;";
            logger.warn(startsql);
            logger.warn("### [DICTIONARY] using the {} data dictionary",
                    startsql.contains("dict_from_online_catalog") ? "online" : "persisted");
            getCallableStatement(startsql);
            logger.warn("dbms_logmnr started successfully! current analysis mode: " + currentModel);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }

    private void logSwitchVerify() {
        String crgs = queryCurrentRedoGroups();
        if (StringUtils.isNotEmpty(crgs) && !crgs.equals(currentRedoGroups)) {
            logger.warn("### [REDO-SWITCH] log group switched! old group: {} → new group: {}", currentRedoGroups, crgs);
            currentRedoGroups = crgs;
            unsubscribe();
            updateSettings();
        }
    }

    @Override
    public synchronized Message getWithoutAck(CanalInstance canalInstance) {
        try {
            if (connection.isClosed()) {
                reconnect();
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
        logSwitchVerify();
        List<FlatMessage> data = new ArrayList<>();
        Statement statement = null;
        ResultSet resultSet = null;
        String sql = "";
        try {
            statement = connection.createStatement();
            long endScn = currentScn.get();
            // NOTE: only the first whitelist table (whiteList.get(0)) is ever mined here; rows for
            // any other subscribed tables are filtered out by the query itself, which is a likely
            // cause of missing incremental data when more than one table is configured.
            String queryLogSql = "SELECT scn,operation,sql_redo,table_space,seg_owner,table_name,timestamp,csf FROM v$logmnr_contents "
                    + "where scn %s %d and table_name = '%s' and operation in('INSERT','DELETE','UPDATE') order by scn asc";
            resultSet = statement.executeQuery(String.format(queryLogSql, opt, currentScn.get(), whiteList.get(0)));
            opt = ">";
            long lastScn = currentScn.get();
            while (resultSet.next()) {
                endScn = resultSet.getLong("scn");
                String operation = resultSet.getString(Constant.operation);
                if (!action.contains(operation.toUpperCase())) continue;
                // String segOwner = resultSet.getString(Constant.seg_owner);
                // String tableSpace = resultSet.getString(Constant.table_space);
                String tableName = resultSet.getString(Constant.table_name);
                sql = resultSet.getString(Constant.sql_redo);
                logger.warn("### [ORACLE-LOG] parsed raw SQL - SCN: {}, operation: {}, table: {}, SQL: {}",
                        endScn, operation, tableName, sql);
                if (StringUtils.isEmpty(sql)) {
                    logger.error(operation + "->" + tableName + ", SQL is empty, skipping by default");
                    continue;
                }
                FlatMessage flatMessage = new FlatMessage();
                String timestamp = resultSet.getString(Constant.timestamp);
                if (!StringUtils.isEmpty(timestamp)) {
                    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                    long es = sdf.parse(timestamp).getTime() / 1000;
                    flatMessage.setEs(es);
                } else {
                    long es = new Date().getTime() / 1000;
                    flatMessage.setEs(es);
                }
                flatMessage.setDatabase(authInfo.getDefaultDatabaseName());
                flatMessage.setTable(tableName);
                flatMessage.setIsDdl(false);
                flatMessage.setType(operation);
                flatMessage.setTs(System.currentTimeMillis());
                flatMessage.setSql(sql);
                OracleUtil.toMqformat(flatMessage); // convert the SQL into the JSON format required by MQ
                if (null == flatMessage.getData()) {
                    logger.warn("Abnormal SQL: " + sql);
                    continue;
                }
                logger.warn("### [CANAL-MSG] converted message - type: {}, primary keys: {}, data: {}",
                        flatMessage.getType(), flatMessage.getPkNames(), flatMessage.getData());
                // Set the primary keys
                List<String> list = pkMaps.get(authInfo.getDefaultDatabaseName() + "." + tableName);
                if (!CollectionUtils.isEmpty(list)) {
                    flatMessage.setPkNames(list);
                } else {
                    flatMessage.setPkNames(new ArrayList<>());
                }
                data.add(flatMessage);
            }
            if (endScn != currentScn.get()) {
                logger.warn("### [SCN-UPDATE] updating SCN: old={}, new={}", currentScn.get(), endScn);
                currentScn.set(endScn);
                FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), currentScn.get());
            }
            // This round of archived logs has been analyzed; end this session and start the next round
            if ("archive".equals(currentModel) && CollectionUtils.isEmpty(data)) {
                currentScn.set(nextChange);
                opt = ">=";
                FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), currentScn.get());
                logger.warn("This round of archived logs is finished; ending this session and starting the next round, start position: " + nextChange);
                unsubscribe();
                updateSettings();
            }
            if (!CollectionUtils.isEmpty(data)) {
                return new Message(lastScn, data, Constant.NOT_MYSQL);
            }
        } catch (Exception e) {
            logger.error("oracle logminer select v$logmnr_contents failed");
            logger.warn(sql);
            logger.error("### [ERROR] SQL parsing exception: {}", sql, e);
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            // Write the error stack trace into the PrintWriter
            e.printStackTrace(pw);
            pw.flush();
            sw.flush();
            try {
                sw.close();
            } catch (IOException ex) {
                throw new RuntimeException(ex);
            }
            pw.close();
            unsubscribe();
            updateSettings();
        } finally {
            close(statement, resultSet);
        }
        return new Message(-1, true, null);
    }

    // TRIGGER
    private String getStartPosition() {
        String sql = "SELECT MAX(CURRENT_SCN) CURRENT_SCN FROM GV$DATABASE";
        Statement statement = null;
        ResultSet resultSet = null;
        try {
            statement = connection.createStatement();
            resultSet = statement.executeQuery(sql);
            while (resultSet.next()) {
                long scn = resultSet.getLong(1);
                return String.valueOf(scn);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            close(statement, resultSet);
        }
        return "0";
    }

    public List<String> getPkNames(String datasourceName, String tableName) {
        List<String> pkNames = Lists.newArrayList();
        Statement statement = null;
        ResultSet resultSet = null;
        logger.warn("### [PRIMARY-KEY] querying primary keys: {}.{}", datasourceName, tableName);
        try {
            DatabaseMetaData metaData = connection.getMetaData();
            resultSet = metaData.getPrimaryKeys(datasourceName, null, tableName);
            while (resultSet.next()) {
                String pkName = resultSet.getString("COLUMN_NAME");
                logger.warn("### [PRIMARY-KEY] found primary key column: {}", pkName); // added logging
                pkNames.add(pkName);
            }
            if (pkNames.isEmpty()) {
                logger.error("### [PRIMARY-KEY-WARN] no primary key found for table {}.{}!", datasourceName, tableName);
            } else {
                logger.warn("### [PRIMARY-KEY] found {} primary key column(s)", pkNames.size());
            }
            return pkNames;
        } catch (Exception e) {
            logger.error(datasourceName + "." + tableName + " oracle get table primary key returns null", e);
        } finally {
            close(statement, resultSet);
        }
        return null;
    }

    private void close(Statement statement, ResultSet resultSet) {
        try {
            if (resultSet != null) resultSet.close();
        } catch (SQLException e1) {
            e1.printStackTrace();
        } finally {
            try {
                if (statement != null) statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void connect() {
        String jdbcUrl = ORACLE_DRIVER_URL + authInfo.getAddress().getHostString() + ":"
                + authInfo.getAddress().getPort() + ":" + authInfo.getServerName();
        try {
            this.connection = DriverManager.getConnection(jdbcUrl, authInfo.getUsername(), authInfo.getPassword());
            logger.warn("Connected to Oracle: {}:{}",
                    authInfo.getAddress().getHostString(), authInfo.getAddress().getPort()); // added logging
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    @Override
    public void reconnect() {
        disconnect();
        connect();
    }

    @Override
    public void disconnect() {
        if (connection != null) {
            try {
                unsubscribe();
                connection.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void ack(CanalInstance canalInstance) {
        return;
    }

    @Override
    public boolean checkConsistent(CanalInstance canalInstance) {
        CanalEventParser eventParser = canalInstance.getEventParser();
        MysqlEventParser parser = (MysqlEventParser) eventParser;
        AuthenticationInfo newAuthInfo = parser.getMasterInfo();
        String newUrl = newAuthInfo.getAddress().getHostName() + newAuthInfo.getAddress().getPort();
        String oldUrl = this.authInfo.getAddress().getHostName() + this.authInfo.getAddress().getPort();
        if (Objects.equals(newUrl, oldUrl)) {
            return true;
        }
        return false;
    }

    // Roll back to the given position
    @Override
    public void rollback(CanalInstance canalInstance, Long batchId) {
        try {
            FileMixedMetaManager.saveScnLocal(canalInstance.getDestination(), batchId);
            currentScn.set(batchId);
        } catch (IOException e) {
            e.printStackTrace();
        }
        logger.warn("Rolled back to SCN: {}, instance: {}", batchId, canalInstance.getDestination()); // added logging
    }

    public void getCallableStatement(String sql) {
        CallableStatement callableStatement = null;
        try {
            callableStatement = connection.prepareCall(sql);
            callableStatement.execute();
        } catch (SQLException e) {
            logger.error(e.getMessage(), e);
            e.printStackTrace();
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            // Write the error stack trace into the PrintWriter
            e.printStackTrace(pw);
            logger.error("SQL execution failed: {}, error code: {}, stack: {}", sql, e.getErrorCode(), sw.toString()); // added logging
            pw.flush();
            sw.flush();
            try {
                sw.close();
            } catch (IOException ex) {
                throw new RuntimeException(ex);
            }
            pw.close();
            if (e.getErrorCode() == 1291) {
                throw new CanalServerException("1291");
            }
        } finally {
            close(callableStatement, null);
        }
    }

    /**
     * description: End the mining session
     */
    public void unsubscribe() {
        logger.warn("### [SESSION-END] ending session! current SCN: {}", currentScn.get());
        String endLogmnrSql = "BEGIN\n"
                + " dbms_logmnr.end_logmnr;\n"
                + " END;";
        logger.warn(endLogmnrSql);
        getCallableStatement(endLogmnrSql);
        logger.warn("end_logmnr succeeded!");
    }
}

What do you think is causing the incremental data not to be synchronized? Analyze the code.
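As the support note above explains, one frequent cause of missing or incomplete rows in V$LOGMNR_CONTENTS is that supplemental logging was not enabled when the redo was generated. A quick way to verify, run as SYSDBA (these are standard V$DATABASE columns and DDL, not part of the handler code):

```sql
-- Check whether at least minimal supplemental logging is enabled
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

-- If it returns NO, enable it; only redo generated AFTER this point will carry
-- the extra information LogMiner needs
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```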
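The start-position decision in `subscribe()` is easier to reason about when pulled out as a pure function. The sketch below is illustrative only: `ScnResume` and `resolveStartScn` are hypothetical names, and it models just the three branches in the handler (first start, rebuilt instance, paused-and-resumed), not the `opt` operator switch.

```java
// Hypothetical extraction of the resume-position logic in OracleHandler.subscribe().
public class ScnResume {

    /**
     * @param fileIndex       position read from index.scn (0 on first start)
     * @param handoverScn     handover position from authInfo.getTimestamp(), may be null
     * @param latestSystemScn current SCN queried from GV$DATABASE
     */
    public static long resolveStartScn(long fileIndex, Long handoverScn, long latestSystemScn) {
        if (fileIndex == 0) {
            // First start: use the handover position, or fall back to the latest system SCN
            return handoverScn != null ? handoverScn : latestSystemScn;
        }
        if (handoverScn != null && handoverScn > fileIndex) {
            // Handover > file position: instance was deleted and rebuilt
            return handoverScn;
        }
        // Handover <= file position (or absent): instance was paused, resume from the file
        return fileIndex;
    }
}
```

In the real handler, the third branch also flips `opt` from `">="` to `">"` so the already-consumed SCN is not re-mined.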
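On the closing question: one concrete suspect visible in `getWithoutAck()` is that the mining query is formatted with `whiteList.get(0)` only, so when the filter rule lists several tables, every table after the first is silently filtered out by the SQL itself. A minimal demonstration of that query construction (class and method names are hypothetical; the template is abbreviated from the handler's):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of how getWithoutAck() builds its v$logmnr_contents query.
public class MineQuery {

    static final String TEMPLATE =
            "SELECT scn, operation, sql_redo, table_name FROM v$logmnr_contents "
          + "where scn %s %d and table_name = '%s' "
          + "and operation in('INSERT','DELETE','UPDATE') order by scn asc";

    public static String build(String opt, long scn, List<String> whiteList) {
        // Mirrors the handler: only the FIRST whitelist entry is ever mined
        return String.format(TEMPLATE, opt, scn, whiteList.get(0));
    }
}
```

Running `build(">=", 1000L, Arrays.asList("T_ORDER", "T_USER"))` yields a query that matches only `T_ORDER`; redo for `T_USER` never reaches the result set, so its incremental data appears "not synchronized" even though LogMiner saw it.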