Repost -- Temporary tablespaces in RAC

This article examines how temporary tablespaces work in an Oracle RAC environment and how space is allocated in them. It describes in detail how instances share temporary tablespace resources, including the caching mechanism, the extent release (uncaching) process, and cross-instance requests.


Original source:

https://orainternals.wordpress.com/2012/02/13/temporary-tablespaces-in-rac/

Temporary tablespaces in RAC

Posted by Riyaj Shamsudeen on February 13, 2012

Temporary tablespaces are shared objects, and they are associated with a user or with the whole database (as the default temporary tablespace). So, in RAC, temporary tablespaces are shared between the instances. Many temporary tablespaces can be created in a database, but all of them are shared between the instances. Hence, temporary tablespaces must be allocated in shared storage or ASM. In this blog entry, we will explore space allocation in temporary tablespaces in RAC.
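For reference, here is a minimal sketch of creating a shared temporary tablespace on ASM and assigning it; the disk group +DATA, the tablespace name, and the user name are placeholders, not from the original post:

-- Create a temporary tablespace on shared storage (ASM) so that
-- every RAC instance can allocate from it.
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '+DATA' SIZE 300M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Assign it database-wide, or to an individual user.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
ALTER USER some_user TEMPORARY TABLESPACE temp2;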

In contrast, UNDO tablespaces are owned by an instance, and all transactions from that instance are allocated exclusively in that UNDO tablespace. Remember that other instances can read blocks from a remote undo tablespace, so undo tablespaces must also be allocated from shared storage or ASM.
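To confirm which undo tablespace each instance owns, you can query the undo_tablespace parameter across instances, for example:

select inst_id, value as undo_tablespace
from gv$parameter
where name = 'undo_tablespace'
order by inst_id;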

Space allocation in TEMP tablespace

TEMP tablespaces are divided into extents (in 11.2, the extent size is 1MB; I am not sure whether the size of an extent is controllable or not). These extent maps are cached in the local SGA, essentially soft-reserving those extents for the use of sessions connecting to that instance. Note, however, that extents in a temporary tablespace are not cached at instance startup; instead, the instance caches extents as the need arises. We will explore this with a small example:
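If you want to check the extent sizing of your own temporary tablespaces, a query along these lines should work (temporary tablespaces are locally managed with uniform extents, so initial_extent reflects the uniform extent size):

select tablespace_name, extent_management, allocation_type,
       initial_extent/1024/1024 extent_size_mb
from dba_tablespaces
where contents = 'TEMPORARY';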

This database has two instances and a TEMP tablespace. The TEMP tablespace has two temp files, 300MB each.

Listing 1-1: dba_temp_files

  1* select file_name, bytes/1024/1024 sz_in_mb from dba_temp_files
SYS@solrac1:1>/

FILE_NAME                                                      SZ_IN_MB
------------------------------------------------------------ ----------
+DATA/solrac/tempfile/temp.266.731449235                            300
+DATA/solrac/tempfile/temp.448.775136163                            300

Initially, no extents were cached and no extents were in use, as shown in the output of the gv$temp_extent_pool view in Listing 1-2.

Listing 1-2: Initial view of temp extents

select inst_id, file_id, extents_cached, extents_used from gv$temp_extent_pool order by 1,2;

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1              0            0
         2          2              0            0

We are ready to start a test case.

Listing 1-3: Script in execution

select inst_id, file_id, extents_cached, extents_used from gv$temp_extent_pool order by 1,2;
  INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1             22           22
         2          2             23           23
...
/
   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1            108          108
         2          2            111          111

I started a small SQL script that joins multiple tables with a hash join so as to induce disk-based sorting. After starting the SQL script execution in instance 2, you can see that extents are cached and used in instance 2, as shown in Listing 1-3. Initially, 45 extents were in use; a few seconds later, temp tablespace usage grew to 219 extents.
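The original script is not reproduced here; a hypothetical stand-in that typically forces a disk-based hash join (and hence TEMP usage) when the workarea does not fit in PGA memory would be something like:

-- Hypothetical workload: a large hash join that spills to TEMP.
select /*+ USE_HASH(a b) */ count(*)
from dba_objects a
join dba_objects b on a.object_name = b.object_name;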

Listing 1-4: script completion

  INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1            163            0
         2          2            166            0

After the completion of the script, as shown in Listing 1-4, the extents_used column is set to 0, but extents_cached is still at the maximum usage level (329 extents). This means that extents stay cached (soft reserved) in an instance and are not released (until another instance asks for them, as we will see later).

You should also note that extents are spread equally between the two files in that temporary tablespace. If you have more files in the temporary tablespace, the extents will be allocated uniformly across all those temp files.
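To see the spread per file, you can aggregate gv$temp_extent_pool by file, for example:

select file_id,
       sum(extents_cached) extents_cached,
       sum(extents_used)   extents_used
from gv$temp_extent_pool
group by file_id
order by file_id;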

Space reallocation

Even if the cached extents are free, they are not immediately available for use by other instance(s). An instance must request the owning instance to uncache the extents, and only then are those extents available for use in the requesting instance. We will demonstrate this concept with the same test case, except that this time we will execute it in instance 1.

Listing 1-5: script in instance #1 execution

  INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1             42           42
         1          2             42           42
         2          1            163            0
         2          2            166            0

At the start of the SQL execution, instance 1 started to reserve extents by caching them. My session was using those extents, as visible from gv$temp_extent_pool, and the number of extents used by instance #1 was slowly growing. See Listing 1-5.

Listing 1-6: instance #1 stole the extents from instance #2

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1            195           71
         1          2            133          116
         2          1             63            0 <-- note here
         2          2            166            0

It gets interesting. Notice that in Listing 1-5, instance 2 still had 329 extents cached (163 + 166). Since my SQL script needs about 329MB of space in the temp tablespace, instance 1 needs to steal space from instance 2.

In Listing 1-6, instance 1 needed more extents, so instance 2 uncached 100 extents: the extents_cached column went down from 163 to 63 (third row in the output above). Essentially, in this example, instance 1 requested instance 2 to uncache extents, and instance 2 obliged, uncaching 100 of them. Prior to 11g, un-caching was done at one extent per request. From 11g onwards, 100 extents are released per request, and all 100 extents are acquired by the requesting instance. Instance 1 acquired those 100 extents, cached them, and the session continued to use those temp extents.

Listing 1-7: script completion and node #1 has more extents cached.

  INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1            195            0
         1          2            133            0
         2          1             63            0
         2          2            166            0

After the completion of the script execution, instance 1 did not release the extents. Cached extents are not released (they remain soft reserved) until another instance asks for those extents to be un-cached.

I also enabled SQL trace in my session in instance 1 while executing the script. The SQL trace file spills out the details about the un-caching of these extents.

Listing 1-8: SQL Trace
...
#1: nam='enq: TS - contention' ela= 4172867 name|mode=1414725636 tablespace ID=3 dba=2 obj#=0 tim=6322835898
#2: nam='enq: SS - contention' ela= 608 name|mode=1397948417 tablespace #=3 dba=2 obj#=0 tim=6322837101
#3: nam='enq: SS - contention' ela= 414 name|mode=1397948422 tablespace #=3 dba=2 obj#=0 tim=6322837710
#4: nam='DFS lock handle' ela= 389 type|mode=1128857605 id1=14 id2=1 obj#=0 tim=6322838264
#5: nam='DFS lock handle' ela= 395 type|mode=1128857605 id1=14 id2=3 obj#=0 tim=6322838788
#6: nam='DFS lock handle' ela= 260414 type|mode=1128857605 id1=14 id2=2 obj#=0 tim=6323099335
...

Line #1 above shows that a tablespace-level lock (TS enqueue) is taken on the TEMP tablespace (ID=3 is the ts# column in sys.ts$). Then SS locks were acquired on that tablespace, first with mode=1 and then with mode=6 (lines #2 and #3). In lines #4 through #6, a Cross Invocation Call (CIC) was used to ask the remote SMON process to un-reserve the cached extents, using CI-type locks and the DFS lock handle mechanism with lock types CI-14-1, CI-14-2, and CI-14-3.
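To map the tablespace ID from the trace line back to a tablespace name, you can query v$tablespace (ts# 3 here is from my trace; yours will differ):

select ts#, name
from v$tablespace
where ts# = 3;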

Listing 1-9: Enqueue type

-- Decode an enqueue wait's p1 (name|mode) value into lock type and mode
select chr(bitand(&&p1,-16777216)/16777215) ||
       chr(bitand(&&p1,16711680)/65535) type,
       mod(&&p1, 16) md
from dual;
Enter value for p1: 1397948422

TY         MD
-- ----------
SS          6

Enter value for p1: 1128857605
TY         MD
-- ----------
CI          5

From Listing 1-8, approximately 4.5 seconds were spent moving the cached extents from one instance to the other. Prior to 11g, this test case would have run much longer, since extents were un-cached one extent per request. Hundreds of such requests would trigger a tsunami of SS and CI enqueue requests, leading to massive application performance issues. In 11g, Oracle Development resolved this issue by un-caching 100 extents per request.
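To get a feel for how much of this is happening system-wide, you can check the cumulative waits for these events on each instance, for example:

select inst_id, event, total_waits,
       round(time_waited_micro/1e6, 2) seconds_waited
from gv$system_event
where event in ('enq: TS - contention',
                'enq: SS - contention',
                'DFS lock handle')
order by inst_id, event;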

Important points to note

  1. As you can see, extents are allocated uniformly from all temporary files. There are also changes to the file header block during this operation. This is one of the reasons to create many temp files in RAC. The recommendation is to create as many temp files as there are instances. If you have 24 nodes in your RAC cluster, yes, that implies you should add 24 temp files to the TEMP tablespace.
  2. As we saw in the locking output from our test case, having more temp tablespaces might help alleviate SS enqueue contention, since SS locks are at the tablespace level. Essentially, more temporary tablespaces means more SS enqueues. But you will move the contention from SS locks to 'DFS lock handle' waits, as the Cross Invocation Call is one per instance for the extent un-caching operation.
  3. Temporary tablespace groups are of no use here, since the contention will be at the Cross Invocation Call. In fact, there is potential for temporary tablespace groups to cause more issues, since the free space in one temp tablespace cannot be reassigned to another temp tablespace dynamically, even if they are in the same tablespace group. In theory, it is possible to have more SS and CI locking contention with temp tablespace groups.
  4. Probably a good approach is to assign different temporary tablespaces to OLTP users and DSS users and affinitize the workload to specific instances (see the sketch after this list).
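As a sketch of points 1 and 4 together (all names and sizes below are hypothetical, not from the original post):

-- Separate TEMP tablespaces per workload, with one temp file
-- per instance (a two-node cluster is assumed here).
CREATE TEMPORARY TABLESPACE temp_oltp TEMPFILE '+DATA' SIZE 10G;
ALTER TABLESPACE temp_oltp ADD TEMPFILE '+DATA' SIZE 10G;

CREATE TEMPORARY TABLESPACE temp_dss TEMPFILE '+DATA' SIZE 100G;
ALTER TABLESPACE temp_dss ADD TEMPFILE '+DATA' SIZE 100G;

ALTER USER oltp_user TEMPORARY TABLESPACE temp_oltp;
ALTER USER dss_user  TEMPORARY TABLESPACE temp_dss;

-- Affinitize each workload to an instance via services (11.2 srvctl syntax):
--   srvctl add service -d MYDB -s OLTP_SVC -r MYDB1 -a MYDB2
--   srvctl add service -d MYDB -s DSS_SVC  -r MYDB2 -a MYDB1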

Update 1: Remember that you need to understand your application workload before following my advice.

Update 2 (2015 May 10): A reader asked me what to do if an instance has cached all available extents, so that processes running in the other instances encounter TEMP space errors. In the example below, about 450GB were cached by instance 1, and processes running in instance 2 were encountering errors.

select inst_id, trunc(bytes_cached/1024/1024/1024,2) Gbytes_cached
from gv$temp_extent_pool;
   INST_ID GBYTES_CACHED
---------- -------------
         1        450.56
         2         42.38

How do you uncache the extents from instance 1? Open two sessions to the database. In the first session, monitor the caching activity using the SQL statement above. In the second session, try resizing the temp file. The resize operation triggers uncaching activity, and you will notice that both instances start uncaching temporary extents. The instances will try to uncache all extents, even if your command resizes the file only to 300G. After plenty of extents have been uncached, you can cancel the resize operation or resize the temp file back to its original size.

alter database tempfile '+DATA/c/tempfile/temp.304.828993879' resize 300G;
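Putting the two-session workflow together (the temp file name is from the example above; the size to restore afterwards is whatever your file's original size was):

-- Session 1: watch the uncaching progress per instance.
select inst_id,
       trunc(sum(bytes_cached)/1024/1024/1024, 2) gbytes_cached
from gv$temp_extent_pool
group by inst_id
order by inst_id;

-- Session 2: trigger uncaching with a resize; once enough extents
-- are uncached, cancel it (Ctrl-C) or resize back to the original size.
alter database tempfile '+DATA/c/tempfile/temp.304.828993879' resize 300G;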
