052-16&17

16. Note the following functionalities of various background processes:
1: Record the checkpoint information in data file headers. --CKPT
2: Perform recovery at instance startup. --SMON
3: Clean up unused temporary segments. --SMON
4: Free the resources used by a user process when it fails. --PMON
5: Dynamically register database services with listeners. --PMON
6: Monitor sessions for idle session timeout. --PMON
Which option has the correct functionalities listed for a background process?
A. Archiver Process (ARCn): 1, 2, 5
B. System Monitor Process (SMON): 1, 4, 5
C. Process Monitor Process (PMON): 4, 5, 6
D. Database Writer Process (DBWn): 1, 3, 4
Answer: C

17. Note the functionalities of various background processes:
1: Perform recovery at instance startup. --SMON
2: Free the resources used by a user process when it fails. --PMON
3: Clean up the database buffer cache when a process fails. --PMON
4: Dynamically register database services with listeners. --PMON
5: Monitor sessions for idle session timeout. --PMON
6: Clean up unused temporary segments. --SMON
7: Record the checkpoint information in the control file. --CKPT
Which option has the correct functionalities listed for a background process?
A. Checkpoint (CKPT): 1, 2, 5
B. System Monitor (SMON): 1, 6
C. Process Monitor (PMON): 4, 6, 7
D. Database Writer (DBWR): 1, 3, 4
Answer: B

First, let's take a closer look at the Oracle architecture.

SMON (System Monitor): mounts and opens the database, and performs instance recovery at startup. It also cleans up unused temporary segments (see items 3 and 6 in the questions above).
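As a side note, you can list the background processes actually running in an instance by querying the standard dynamic performance view v$bgprocess. A minimal sketch (rows whose process address is non-null correspond to started processes):

SQL> select name, description from v$bgprocess where paddr <> hextoraw('00');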

PMON (Process Monitor): the process monitor, which mainly watches over server processes. As mentioned earlier, under the dedicated server architecture user processes and server processes are in a one-to-one relationship; if a session terminates abnormally, PMON destroys the corresponding server process, rolls back its uncommitted transaction, and reclaims the session's private PGA memory.
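Since the exam items mention dynamic service registration, it is worth knowing that you can force an immediate re-registration of database services with the listener. A minimal sketch (in 11g and earlier this registration is done by PMON; from 12c onward the dedicated LREG process has taken over this duty):

SQL> alter system register;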

CKPT (Checkpoint Process): CKPT is responsible for signaling checkpoints. The syntax for triggering a checkpoint manually is:

SQL> alter system checkpoint;

A checkpoint forces DBWn to write out dirty buffers. After a database crash, a large number of dirty buffers will not yet have been written to the data files, so at restart SMON must perform instance recovery. Instance recovery extracts and applies redo log records, starting from the position of the last checkpoint (everything before the checkpoint has already been forced into the data files). This position is called the RBA (redo byte address), and CKPT continuously records it in the control file (so that instance recovery knows where to begin reading redo records).
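To see the checkpoint information that the exam items refer to, you can compare what is recorded in the control file with what is stamped into each data file header. A minimal sketch using the standard v$database and v$datafile_header views (checkpoint_change# is the SCN of the last checkpoint):

SQL> select checkpoint_change# from v$database;
SQL> select file#, checkpoint_change#, checkpoint_time from v$datafile_header;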

Database Writer (DBWn)

The database writer is an Oracle background process; "background" here is in contrast to the foreground (server) processes. The "n" in DBWn means an instance can have more than one database writer.

Role: in short, DBWn writes dirty buffers from the database buffer cache to the data files on disk.

The database buffer cache and the database writer are important concepts; other database products such as MySQL have equivalent mechanisms, just under different names. When studying this area, keep in mind that sessions never update disk data directly: updates, inserts, deletes, and even queries all operate on the buffer cache first, and DBWn later flushes the dirty buffers to disk.
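To get a rough feel for how many buffers are dirty at any given moment, you can query the standard v$bh view, which exposes one row per buffer in the cache. A minimal sketch (the dirty column is 'Y' for buffers not yet written to disk; note that on a large cache this query itself can be expensive):

SQL> select dirty, count(*) from v$bh group by dirty;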

When does DBWn write?

DBWn is a rather lazy process and writes as little as it can get away with. It performs writes in the following four situations (a query for observing its activity follows the list):

a. There are no free buffers left (it has no choice but to write).

b. There are too many dirty buffers.

c. A three-second timeout expires (at the latest, it writes once every three seconds).

d. A checkpoint occurs. A checkpoint is an Oracle event; when one is encountered, DBWn writes. For example, an orderly instance shutdown triggers a checkpoint, and DBWn writes all dirty buffers to disk. This is easy to understand: the data files must be left in a consistent state.
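As promised above, here is a minimal sketch for watching DBWn activity through the instance-wide statistics in v$sysstat (the exact statistic names can vary slightly between versions; run the query twice and compare the deltas):

SQL> select name, value from v$sysstat where name in ('physical writes', 'DBWR checkpoint buffers written');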

Note:

From the write triggers above we can see that DBWn's writes do not depend directly on sessions' update operations: it does not write the moment a buffer becomes dirty. Furthermore, DBWn's writes have nothing to do with the commit operation; do not assume that the effects of a commit flow to disk in real time.

DBWn writes using an extremely lazy algorithm, and the reason should be clear: frequent disk I/O puts great pressure on the system, so if DBWn wrote to disk eagerly the impact on performance would be too large. Looked at another way, if DBWn flushed to disk diligently, there would be little point in having a database buffer cache at all.

Of course, at this point a question arises. With DBWn flushing data so lazily, suppose that at some moment the buffer cache holds a large number of dirty buffers (in production this is the norm), meaning a lot of committed and uncommitted data is still in memory and not yet persisted to disk, and the power suddenly fails. Is that data lost? Of course not; this is where the redo log comes in. Next, we will discuss the memory structures and background processes associated with the redo log.

ARCn (Archiver)

The archiver process is optional; it becomes mandatory once the database is configured in archivelog mode. Archiving means preserving the redo log files permanently by copying them into archived log files (production databases are generally configured for archivelog mode). Archived log files serve the same purpose as redo log files, except that redo log files are continually overwritten, whereas archived log files keep a complete history of data changes.
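To check whether a database is running in archivelog mode, a minimal sketch:

SQL> select log_mode from v$database;
SQL> archive log list

The query returns ARCHIVELOG or NOARCHIVELOG; archive log list is a SQL*Plus command that additionally shows the archive destination and the current log sequence numbers.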

Reposted from: https://www.cnblogs.com/Babylon/p/7942106.html
