Oracle 10g Data Guard Primary Database Diagnostic Script

This is an Oracle-provided 10g Data Guard primary database diagnostic SQL script, used to collect information that helps troubleshoot Data Guard issues. The script runs a series of queries that check the database role, configuration parameters, online redo log status, and more.


Below is Oracle's official 10g Data Guard primary database diagnostic SQL script. It reports the relevant Data Guard configuration together with log shipping and apply information. Run it from SQL*Plus on the primary database as SYS, as shown in the sketch below.
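A minimal usage sketch (assuming the script has been saved as new_dg_prim_diag.sql in the current directory):

sqlplus / as sysdba
SQL> @new_dg_prim_diag

The HTML report is spooled to new_dg_prim_diag_<db_unique_name>_<Mondd_hhmi>.html in the current directory; the file name is built from db_unique_name and the current timestamp by the spool commands at the top of the script.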

-- NAME: new_dg_prim_diag.sql  (Run from sqlplus on PRIMARY with a LOGICAL or PHYSICAL STANDBY as SYS)
-- ------------------------------------------------------------------------  
-- Copyright 2002, Oracle Corporation       
-- LAST UPDATED: 15-Apr-2013
--
-- Usage: @new_dg_prim_diag
-- ------------------------------------------------------------------------  
-- PURPOSE:  
--    This script is to be used to assist in the collection of information to help
--    troubleshoot Data Guard issues with a Primary Database
------------------------------------------------------------------------  
-- DISCLAIMER:  
--    This script is provided for educational purposes only. It is NOT   
--    supported by Oracle World Wide Technical Support.  
--    The script has been tested and appears to work as intended.  
--    You should always run new scripts on a test instance initially.  
-- ------------------------------------------------------------------------  
-- Script output is as follows: 

set echo off 
set feedback off 
column timecol new_value timestamp 
column spool_extension new_value suffix 
SELECT TO_CHAR(sysdate,'Mondd_hhmi') timecol, '.html' spool_extension FROM dual; 
column output new_value dbname 
SELECT value || '_' output FROM v$parameter WHERE name = 'db_unique_name'; 
spool new_dg_prim_diag_&&dbname&&timestamp&&suffix
set linesize 2000
set pagesize 50000
set numformat 999999999999999
set trim on 
set trims on 
set markup html on
set markup html entmap off

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS'; 
SELECT TO_CHAR(sysdate) time FROM dual; 

SELECT 'In the following output the DATABASE_ROLE should be PRIMARY as that is what this script is intended to be run on.<br>PLATFORM_ID should match the PLATFORM_ID of the standby(s) or conform to the supported options in<br>Note: 413484.1 Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration<br>Note: 1085687.1 Data Guard Support for Heterogeneous Primary and Logical Standbys in Same Data Guard Configuration<br>OPEN_MODE should be READ WRITE.<br>LOG_MODE should be ARCHIVELOG.<br>FLASHBACK can be YES (recommended) or NO.<br>If PROTECTION_LEVEL is different from PROTECTION_MODE then for some reason the mode listed in PROTECTION_MODE experienced a need to downgrade.<br>Once the error condition has been corrected the PROTECTION_LEVEL should match the PROTECTION_MODE after the next log switch.' "Database 1" FROM dual;

SELECT database_role role, name, db_unique_name, platform_id, open_mode, log_mode, flashback_on, protection_mode, protection_level FROM v$database;

SELECT 'FORCE_LOGGING is not mandatory but is recommended.<br>REMOTE_ARCHIVE should be ENABLE.<br>SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI must be enabled if the standby associated with this primary is a logical standby.<br>During normal operations it is acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.<br>DG_BROKER can be ENABLED (recommended) or DISABLED.' "Database 2" FROM dual;

column force_logging format a13 tru
column remote_archive format a14 tru 
column supplemental_log_data_pk format a24 tru
column supplemental_log_data_ui format a24 tru
column dataguard_broker format a16 tru

SELECT force_logging, remote_archive, supplemental_log_data_pk, supplemental_log_data_ui, switchover_status, dataguard_broker FROM v$database;  

SELECT 'The following query gives us information about catpatch. From this we can tell if the catalog version doesn''t match the image version it was started with.' "Database 3" FROM dual;

column version format a10 tru 

SELECT version, modified, status FROM dba_registry WHERE comp_id = 'CATPROC';

SELECT 'Check how many threads are enabled and started for this database. If the number of instances below does not match then not all instances are up.' "Threads" FROM dual;

SELECT thread#, instance, status FROM v$thread;

SELECT 'The number of instances returned below is the number currently running. If it does not match the number returned in Threads above then not all instances are up.<br>VERSION should match the version from CATPROC above.<br>ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the archiver failed to archive a log last time, but will try again within 5 minutes.<br>LOG_SWITCH_WAIT shows the ARCHIVE LOG/CLEAR LOG/CHECKPOINT event that log switching is waiting for.<br>Note that if ALTER SYSTEM SWITCH LOGFILE is hung, but there is room in the current online redo log, then the value is NULL.' "Instances" FROM dual;

column host_name format a32 wrap

SELECT thread#, instance_name, host_name, version, archiver, log_switch_wait FROM gv$instance ORDER BY thread#;

SELECT 'Check how often logs are switching. Log switches should not regularly be occurring in < 20 mins.<br>Excessive log switching is a performance overhead. Whilst rapid log switching is not in itself a Data Guard issue, it can affect Data Guard. It may also indicate a problem with log shipping.<br>Use redo log size = 4GB or redo log size >= peak redo rate x 20 minutes.' "Log Switches" FROM dual;
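-- The following query is NOT part of the original Oracle script. It is an illustrative
-- sketch of how the peak redo rate mentioned above can be estimated, by summing the
-- archived redo generated per hour (it assumes the local archive destination is
-- dest_id 1, as the log switch queries below do).
SELECT TRUNC(first_time, 'HH24') "HOUR", ROUND(SUM(blocks * block_size) / 1024 / 1024) "REDO_MB_PER_HOUR"
  FROM v$archived_log
 WHERE dest_id = 1
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY 1;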

SELECT fs.log_switches_under_20_mins, ss.log_switches_over_20_mins
  FROM (SELECT SUM(COUNT(ROUND((b.first_time - a.first_time) * 1440))) "LOG_SWITCHES_UNDER_20_MINS"
          FROM v$archived_log a, v$archived_log b
         WHERE a.sequence# + 1 = b.sequence#
           AND a.dest_id = 1
           AND a.thread# = b.thread#
           AND a.dest_id = b.dest_id
           AND a.dest_id = (SELECT MIN(dest_id) FROM gv$archive_dest WHERE target = 'PRIMARY' AND destination IS NOT NULL)
           AND ROUND((b.first_time - a.first_time) * 1440) < 20
         GROUP BY ROUND((b.first_time - a.first_time) * 1440)) fs,
       (SELECT SUM(COUNT(ROUND((b.first_time - a.first_time) * 1440))) "LOG_SWITCHES_OVER_20_MINS"
          FROM v$archived_log a, v$archived_log b
         WHERE a.sequence# + 1 = b.sequence#
           AND a.dest_id = 1
           AND a.thread# = b.thread#
           AND a.dest_id = b.dest_id
           AND a.dest_id = (SELECT MIN(dest_id) FROM gv$archive_dest WHERE target = 'PRIMARY' AND destination IS NOT NULL)
           AND ROUND((b.first_time - a.first_time) * 1440) > 19
         GROUP BY ROUND((b.first_time - a.first_time) * 1440)) ss;

column  minutes  format a12

SELECT (CASE WHEN bucket = 1 THEN '<= ' || TO_CHAR(bucket * 5)
             WHEN (bucket > 1 AND bucket < 9) THEN TO_CHAR(bucket * 5 - 4) || ' TO ' || TO_CHAR(bucket * 5)
             WHEN bucket > 8 THEN '>= ' || TO_CHAR(bucket * 5 - 4)
        END) "MINUTES",
       switches "LOG_SWITCHES"
  FROM (SELECT bucket, COUNT(b.bucket) switches
          FROM (SELECT WIDTH_BUCKET(ROUND((b.first_time - a.first_time) * 1440), 0, 40, 8) bucket
                  FROM v$archived_log a, v$archived_log b
                 WHERE a.sequence# + 1 = b.sequence#
                   AND a.dest_id = b.dest_id
                   AND a.thread# = b.thread#
                   AND a.dest_id = (SELECT MIN(dest_id) FROM gv$archive_dest WHERE target = 'PRIMARY' AND destination IS NOT NULL)) b
         GROUP BY bucket
         ORDER BY bucket);


SELECT 'Check the number and size of online redo logs on each thread.' "Online Redo Logs" FROM dual;

set feedback on

SELECT thread#, group#, sequence#, bytes, archived ,status FROM v$log ORDER BY thread#, group#; 

set feedback off

SELECT 'The following query is run to see if standby redo logs have been created in preparation for switchover.<br>The standby redo logs should be the same size as the online redo logs.<br>There should be (( # of online logs per thread + 1) * # of threads) standby redo logs.<br>A value of 0 for the thread# means the log has never been allocated.' "Standby Redo Logs" FROM dual;
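-- The lines below are NOT part of the original Oracle script. They are a worked example
-- of the sizing rule printed above: with 2 threads and 3 online redo log groups of 512M
-- per thread, you would create (3 + 1) * 2 = 8 standby redo log groups, each 512M.
-- The statements are left commented out so this diagnostic script stays read-only;
-- the group numbers are illustrative and Oracle-Managed Files are assumed (no file name given).
-- ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 512M;
-- ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 512M;
-- ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 15 SIZE 512M;
-- (and so on, until each thread has 4 standby redo log groups)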

set feedback on

SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log order by thread#, group#; 

set feedback off

SELECT 'This query produces a list of defined archive destinations. It shows if they are enabled, what process is servicing that destination, if the destination is local or remote, and if remote what the current mount ID is.' "Archive Destinations" FROM dual;

column destination format a35 wrap 
column process format a7 
column ID format 99 
column mid format 99
 
SELECT thread#, dest_id, destination, gvad.status, target, schedule, process, mountid mid FROM gv$archive_dest gvad, gv$instance gvi WHERE gvad.inst_id = gvi.inst_id AND destination is NOT NULL ORDER BY thread#, dest_id; 

SELECT 'This select will give further detail on the destinations as to what options have been set.<br>Register indicates whether or not the archived redo log is registered in the remote destination control file.' "Archive Destination Options" FROM dual;

set numwidth 8
column archiver format a8 
column ID format 99 
column error format a55 wrap

SELECT thread#, dest_id, gvad.archiver, transmit_mode, affirm, async_blocks, net_timeout, delay_mins, reopen_secs reopen, register, binding FROM gv$archive_dest gvad, gv$instance gvi WHERE gvad.inst_id = gvi.inst_id AND destination is NOT NULL ORDER BY thread#, dest_id; 

SELECT 'The following select will show any errors that occurred the last time archiving to the destination was attempted.<br>If ERROR is blank and status is VALID then the archive completed correctly.' "Archive Destination Errors" FROM dual;

SELECT thread#, dest_id, gvad.status, error FROM gv$archive_dest gvad, gv$instance gvi WHERE gvad.inst_id = gvi.inst_id AND destination is NOT NULL ORDER BY thread#, dest_id; 

SELECT 'The query below will determine if any error conditions have been reached by querying the v$dataguard_status view (view only available in 9.2.0 and above).' "Data Guard Status" FROM dual;

column message format a80 

set feedback on

SELECT gvi.thread#, timestamp, message FROM gv$dataguard_status gvds, gv$instance gvi WHERE gvds.inst_id = gvi.inst_id AND severity in ('Error','Fatal') ORDER BY timestamp, thread#;

set feedback off

SELECT 'Query v$managed_standby to see the status of processes involved in shipping redo on this system.<br>Does not include processes needed to apply redo.' "Managed Standby Status" FROM dual;

SELECT inst_id, thread#, process, pid, status, client_process, client_pid, sequence#, block#, active_agents, known_agents FROM gv$managed_standby ORDER BY thread#, pid;

SELECT 'The following query will determine the current sequence number and the last sequence archived.<br>If you are remotely archiving using the LGWR process then the archived sequence should be one higher than the current sequence.<br>If remotely archiving using the ARCH process then the archived sequence should be equal to the current sequence.<br>The applied sequence information is updated at log switch time.' "Archived Sequences" FROM dual;

SELECT la.thread#, la.dest_id, currentsequence "Current Sequence", lastarchived "Last Archived"
  FROM (SELECT gval.thread#, gvad.dest_id, MAX(gvad.log_sequence) lastarchived
          FROM gv$archive_dest gvad, gv$archived_log gval
         WHERE gvad.inst_id = gval.inst_id
           AND gvad.dest_id = gval.dest_id
           AND thread# = gval.inst_id
         GROUP BY gval.thread#, gvad.dest_id) la,
       (SELECT thread#, dest_id, MAX(sequence#) currentsequence
          FROM gv$archived_log
         WHERE resetlogs_change# = (SELECT MAX(resetlogs_change#) FROM v$archived_log)
           AND thread# = inst_id
         GROUP BY thread#, dest_id) cs
 WHERE cs.thread# = la.thread#
   AND cs.dest_id = la.dest_id
 ORDER BY thread#, dest_id;

SELECT 'The following select will attempt to gather as much information as possible from the standby.<br>Standby redo logs are not supported with Logical Standby until Version 10.1.<br>The ARCHIVED_SEQUENCE# from a logical standby is the sequence# created by the apply, not the sequence# sent from the primary.' "Archive Destination Status" FROM dual;

set numwidth 8
column dest_id format 99 
column Active format 99

SELECT dest_id, database_mode, recovery_mode, protection_mode, standby_logfile_count, standby_logfile_active, archived_seq# FROM v$archive_dest_status WHERE destination IS NOT NULL; 
 
SELECT 'Non-default init parameters.<br>For a RAC DB Thread# = * means the value is the same for all threads (SID=*)<br>Threads with different values are shown with their individual thread# and values.' "Non Default init Parameters" FROM dual;

column num noprint

SELECT num, '*' "THREAD#", name, value FROM v$PARAMETER WHERE NUM IN (SELECT num FROM v$parameter WHERE isdefault = 'FALSE'
MINUS
SELECT num FROM gv$parameter gvp, gv$instance gvi WHERE num IN (SELECT DISTINCT gvpa.num FROM gv$parameter gvpa, gv$parameter gvpb WHERE gvpa.num = gvpb.num AND  gvpa.value <> gvpb.value AND gvpa.isdefault = 'FALSE') AND gvi.inst_id = gvp.inst_id  AND gvp.isdefault = 'FALSE')
UNION
SELECT num, TO_CHAR(thread#) "THREAD#", name, value FROM gv$parameter gvp, gv$instance gvi WHERE num IN (SELECT DISTINCT gvpa.num FROM gv$parameter gvpa, gv$parameter gvpb WHERE gvpa.num = gvpb.num AND  gvpa.value <> gvpb.value AND gvpa.isdefault = 'FALSE') AND gvi.inst_id = gvp.inst_id  AND gvp.isdefault = 'FALSE' ORDER BY 1, 2;

spool off
set markup html off entmap on
set feedback on
set echo on
