
Archive Logs Are Created With Smaller, Uneven Size Than The Original Redo Logs. Why? [ID 388627.1]

Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.1.0.6 - Release: 8.1.7 to 11.1
Information in this document applies to any platform.

Goal
Archive logs are created with a smaller, uneven size than the original redo logs. Commands such as:
ALTER SYSTEM SWITCH LOGFILE
or
ALTER SYSTEM ARCHIVE LOG ...
are not being used to force archiving or a log switch, and the ARCHIVE_LAG_TARGET parameter is not set. What else could cause this behaviour?
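The size gap can be observed directly from the data dictionary; the following is a sketch using the standard V$LOG and V$ARCHIVED_LOG views:

```sql
-- Configured size of each online redo log group, in MB
SELECT group#, bytes / 1024 / 1024 AS redo_mb
  FROM v$log;

-- Actual size of each generated archive log, in MB
SELECT sequence#, blocks * block_size / 1024 / 1024 AS arch_mb
  FROM v$archived_log
 ORDER BY sequence#;
```

If ARCH_MB is consistently somewhat below REDO_MB, you are seeing the behaviour described in this note.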

Solution
According to:
Bug 5450861 - ARCHIVE LOGS ARE GENERATED WITH A SMALLER SIZE THAN THE REDO LOG FILES
there are two main reasons for this situation:

1. The archive logs do not have to be even in size. This was decided a very long time ago, when blank padding of the archive logs was stopped, for a very good reason: to save disk space.

2. The log switch does not occur when a redo log file is 100% full. An internal algorithm determines the moment of the log switch, and this also has a very good reason: performing the log switch at the last moment could cause performance problems (for various reasons that are out of the scope of this note).
As a result, after the log switch occurs, the archiver processes copy only the actual information from the redo log files. Since the redo logs are not 100% full at the log switch and the archive logs are not blank padded after the copy completes, the result is uneven files that are smaller than the original redo log files.
This is most conspicuous with very small redo log files (e.g. smaller than 10 MB): a 2.5 MB archive log generated from a 5 MB redo log is very noticeable.
Note that the current default redo log file size is 100 MB; archive log files between 98 and 100 MB would hardly be noticed.
The main concern with archive log files is possible corruption, which can easily be checked by attempting a test recovery. If that succeeds, the uneven archive log size should be of no concern, as it is expected behaviour.
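The test recovery mentioned above can also be approximated with RMAN's validation commands; a sketch (VALIDATE ARCHIVELOG is available from 11g, while the RESTORE ... VALIDATE form also works on earlier releases):

```sql
RMAN> VALIDATE ARCHIVELOG ALL;          -- 11g and later
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;  -- earlier releases
```

Both read the archive logs and report any corrupt blocks without actually restoring or recovering anything.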


Please note that Bug 5450861 has been closed as 'Not a Bug', as have the related bug reports Bug 9272059 and Bug 12317474. In fact, Bug 12317474 describes an additional situation that can cause the same issue:

3. With a high CPU_COUNT, a low load, and a redo log file size smaller than the redo log buffer, you may see small archived log files because log switches occur at about 1/8 of the defined log file size.
This is because CPU_COUNT determines the number of redo strands (ncpus/16). Under low load, only a single strand may be used. When the redo log file is smaller than the redo log buffer, the log file space is divided among the available strands, so if only a single strand is active, a log switch can already occur once that strand is filled.
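The strand arithmetic can be checked on a given instance; the following sketch assumes the commonly cited default of CPU_COUNT/16 public redo strands with a minimum of 2 (an internal detail that may vary by release):

```sql
-- Approximate number of public redo strands derived from CPU_COUNT
SELECT value AS cpu_count,
       GREATEST(TRUNC(TO_NUMBER(value) / 16), 2) AS approx_public_strands
  FROM v$parameter
 WHERE name = 'cpu_count';
```

On a lightly loaded 128-CPU host, for example, this arithmetic suggests 8 strands, so a single active strand could trigger a switch at roughly 1/8 of the log file size.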

References
BUG:5450861 - ARCHIVE LOGS ARE GENERATED WITH A SMALLER SIZE THAN THE REDO LOG FILES
BUG:9272059 - REDO LOG SWITCH AT 1/8 OF SIZE DUE TO CMT CPU'S
BUG:12317474 - FREQUENT REDO LOG SWITCHES GENERATING SMALL SIZED ARCHIVELOGS


In conclusion, in both RAC and non-RAC environments, the size of an archived log does not have to equal the redo log size. It is usually slightly smaller than the redo log size, but cases where it is far smaller cannot be ruled out.

The reasons behind this behaviour are:

1. Saving unnecessary disk space (archive logs are no longer blank padded).
2. An internal algorithm decides when the log switch occurs (independent of ARCHIVE_LAG_TARGET).
3. A high CPU_COUNT parameter, since the CPU count determines the number of redo strands (ncpus/16).

This is mainly determined by Oracle's internal algorithms. It generally does not cause instance performance or availability problems; the only annoyance may be the larger number of small archive logs that are generated.

Source: the ITPUB blog, http://blog.itpub.net/25574072/viewspace-762146/. If reposting, please credit the source; otherwise legal action may be taken.
