Export/Import DataPump Parameter ACCESS_METHOD [ID 552424.1]

This article describes in detail how Oracle Data Pump moves data, including the different modes used for exporting and importing data (Direct Path, External Tables, etc.) and how to enforce a specific method for loading or unloading data. Known issues and enhancement requests are also discussed.

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]


Modified: 06-APR-2009    Type: HOWTO    Status: PUBLISHED

In this Document
Goal
Solution
1. Introduction.
2. Export Data Pump: unloading data in "Direct Path" mode.
3. Export Data Pump: unloading data in "External Tables" mode.
4. Import Data Pump: loading data in "Direct Path" mode.
5. Import Data Pump: loading data in "External Tables" mode.
6. How to enforce a specific load/unload method?
7. Known issues.
@ 8. For Support: Enhancement Requests.
References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.1.0.6
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.1
Information in this document applies to any platform.

Goal

Starting with Oracle10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.

Solution

1. Introduction.

Data Pump can use four mechanisms to move data in and out of a database:

  • Data file copying;
  • Direct path;
  • External tables;
  • Network link import.

The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.

1.1. Direct Path mode.
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.

1.2. External Tables mode.
If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database.
Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.

1.3. Data File Copying mode.
This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.

1.4. Network Link Import mode.
This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.

The "Data File Copying" and "Network Link Import" methods to move data in and out of databases are outside the scope of this article, and therefore not discussed any further.

For details about the access methods of the classic export client (exp), see:
Note:155477.1"Parameter DIRECT: Conventional Path Export Versus Direct Path Export"

2. Export Data Pump: unloading data in "Direct Path" mode.

Export Data Pump will use the "Direct Path" mode to unload data in the following situations:

EXPDP will use DIRECT_PATH mode if:

2.1. The structure of a table allows a Direct Path unload (a dictionary-query sketch for these checks follows at the end of this section), i.e.:
- The table does not have fine-grained access control enabled for SELECT.
- The table is not a queue table.
- The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table does not contain encrypted columns.
- The table does not contain a column of an evolved type that needs upgrading.
- If the table has a column of datatype LONG or LONG RAW, then this column is the last column.

2.2. None of the parameters QUERY, SAMPLE, or REMAP_DATA was used for the specified table in the Export Data Pump job.

2.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).

Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).
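
As an illustration of the structural checks listed in 2.1, the data dictionary can be queried before an export. The following is a minimal sketch (not part of the original note) that covers only the column-related and fine-grained access control restrictions; SCOTT and MY_TAB are placeholder names:

-- Column datatypes that can prevent a Direct Path unload
-- (BFILE, or a LONG/LONG RAW column that is not the last column of the table).
SELECT column_name, data_type, column_id
FROM   dba_tab_columns
WHERE  owner = 'SCOTT' AND table_name = 'MY_TAB'
AND    data_type IN ('BFILE', 'LONG', 'LONG RAW');

-- Encrypted columns also rule out a Direct Path unload.
SELECT column_name, encryption_alg
FROM   dba_encrypted_columns
WHERE  owner = 'SCOTT' AND table_name = 'MY_TAB';

-- Fine-grained access control (VPD) policies enforced on SELECT.
SELECT policy_name, sel
FROM   dba_policies
WHERE  object_owner = 'SCOTT' AND object_name = 'MY_TAB';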

3. Export Data Pump: unloading data in "External Tables" mode.

Export Data Pump will use the "External Tables" mode to unload data in the following situations:

EXPDP will use EXTERNAL_TABLE mode if:

3.1. Data cannot be unloaded in Direct Path mode, because of the structure of the table, i.e.:
- Fine-grained access control for SELECT is enabled for the table.
- The table is a queue table.
- The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table contains encrypted columns.
- The table contains a column of an evolved type that needs upgrading.
- The table contains a column of type LONG or LONG RAW that is not last.

3.2. Data could also have been unloaded in "Direct Path" mode, but the parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified table in the Export Data Pump job.

3.3. Data could also have been unloaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the unload even more (see the size-check sketch at the end of this section).

Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
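
For the size criterion in 3.3, the segment sizes in the dictionary give a rough indication of which tables or partitions exceed the 250 Mb threshold and are therefore candidates for a parallel External Tables unload. A simple sketch, not taken from the note (SCOTT is a placeholder schema):

-- Segments larger than roughly 250 Mb; these may be unloaded via parallel
-- External Tables, provided PARALLEL > 1 was requested for the job.
SELECT segment_name, partition_name, segment_type,
       ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_type IN ('TABLE', 'TABLE PARTITION')
AND    bytes > 250 * 1024 * 1024
ORDER BY bytes DESC;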

4. Import Data Pump: loading data in "Direct Path" mode.

Import Data Pump will use the "Direct Path" mode to load data in the following situations:

IMPDP will use DIRECT_PATH mode if:

4.1. The structure of a table allows a Direct Path load, i.e.:
- A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
- A domain index does not exist for a LOB column.
- The table is not in a cluster.
- The table does not have BFILE columns or columns of opaque types.
- The table does not have VARRAY columns with an embedded opaque type.
- The table does not have encrypted columns.
- Supplemental logging is not enabled and the table does not have a LOB column.
- The table into which data is being imported is a pre-existing table and (see the dictionary-query sketch at the end of this section):
– There is no active trigger, and
– The table is partitioned and has an index, and
– Fine-grained access control for INSERT mode is not enabled, and
– No enabled constraint other than a table check constraint exists, and
– No unique index exists.

4.2. Neither the QUERY nor the REMAP_DATA parameter was used for the specified table in the Import Data Pump job.

4.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).
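
To get an idea whether a pre-existing target table satisfies the conditions in 4.1, the trigger, constraint, and index views can be checked before the import. This is a minimal sketch that is not part of the original note (SCOTT and MY_TAB are placeholders):

-- Active triggers on the target table block a Direct Path load.
SELECT trigger_name, status
FROM   dba_triggers
WHERE  table_owner = 'SCOTT' AND table_name = 'MY_TAB'
AND    status = 'ENABLED';

-- Enabled constraints other than table check constraints.
SELECT constraint_name, constraint_type
FROM   dba_constraints
WHERE  owner = 'SCOTT' AND table_name = 'MY_TAB'
AND    status = 'ENABLED'
AND    constraint_type <> 'C';

-- Unique indexes on the target table.
SELECT index_name
FROM   dba_indexes
WHERE  table_owner = 'SCOTT' AND table_name = 'MY_TAB'
AND    uniqueness = 'UNIQUE';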


5. Import Data Pump: loading data in "External Tables" mode.

Import Data Pump will use the "External Tables" mode to load data in the following situations:

IMPDP will use EXTERNAL_TABLE mode if:

5.1. Data cannot be loaded in Direct Path mode, because at least one of the following conditions exists:
- A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
- A domain index exists for a LOB column.
- A table is in a cluster.
- A table has BFILE columns or columns of opaque types.
- A table has VARRAY columns with an embedded opaque type.
- The table has encrypted columns.
- Supplemental logging is enabled and the table has at least one LOB column.
- The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
– There is an active trigger.
– The table is partitioned and does not have any indexes.
– Fine-grained access control for INSERT mode is enabled for the table.
– An enabled constraint exists (other than table check constraints).
– A unique index exists.

5.2. Data could also have been loaded in "Direct Path" mode, but the parameter QUERY or REMAP_DATA was used for the specified table in the Import Data Pump job.

5.3. Data could also have been loaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the load even more.

Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
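
Because a LOB column keeps the table parallelism at 1 (see the bug reference above), it can be useful to know beforehand which tables in the job contain LOB columns. A simple sketch, not from the note (SCOTT is a placeholder schema):

-- Tables whose LOB columns limit the Data Pump table parallelism to 1.
SELECT table_name, column_name, segment_name
FROM   dba_lobs
WHERE  owner = 'SCOTT'
ORDER BY table_name, column_name;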

6. How to enforce a specific load/unload method?

In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Example:

% expdp system/manager ... ACCESS_METHOD=DIRECT_PATH
% expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE

or:

% impdp system/manager ... ACCESS_METHOD=DIRECT_PATH
% impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE

Important points to know when the parameter ACCESS_METHOD is specified for a job:

  • The parameter ACCESS_METHOD is an undocumented parameter and should only be used when requested by Oracle Support.
  • If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.
  • If Import Data Pump cannot choose a method due to conflicting restrictions, an error is reported:
    ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."EMP" using client specified AUTOMATIC method
  • The parameter can only be specified when the Data Pump job is initially started (i.e. the parameter cannot be specified when the job is restarted).
  • If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.
  • Enforcing a specific method may result in slower performance of the overall Data Pump job, or in errors such as:

...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."MY_TAB" using client specified DIRECT_PATH method
...

  • To determine which access method is used, a Worker trace file can be created, e.g.:

% expdp system/manager DIRECTORY=my_dir \
  DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log \
  TABLES=scott.my_tab TRACE=400300

The Worker trace file shows the method with which the data was loaded (or unloaded for Import Data Pump):

...
KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB"
KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1
...

For details, see also:
Note:286496.1" Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"

7. Known issues.

7.1. Bug 4722517 - Materialized view log not updated after import into existing table
- Defect: Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE"
- Symptoms: a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log
- Releases: 10.1.0.2.0 and higher
- Fixed in: not applicable, closed as not-a-bug
- Patched files: not applicable
- Workaround: if possible, import into a temporary holding table, then copy the data with "insert as select" into the master table (see the sketch after this item)
- Cause: a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump
- Trace: not applicable, changes are not propagated
- Remarks: see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs"
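
The workaround above can be sketched as follows: the data is first imported into a holding table and then copied into the master table with a conventional (non-APPEND) insert, so the materialized view log is maintained. This is an illustration only; EMP_STAGE is a hypothetical holding table:

-- Step 1: import the dump file into the holding table instead of the master
-- table, e.g. with the impdp REMAP_TABLE parameter (11g) or via a staging schema.
-- Step 2: copy the rows with a conventional insert; the MV log on EMP is updated.
INSERT INTO scott.emp
SELECT * FROM scott.emp_stage;
COMMIT;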

7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column
- Defect: Bug:5599947 "DATAPUMP EXPORT VERY SLOW"
- Symptoms: Export Data Pump has low performance when exporting a table with a LOB column
- Releases: 11.1.0.6 and below
- Fixed in: not applicable, closed as not feasible to fix
- Patched files: not applicable
- Workaround: if possible, reorganize the large table with the LOB column and make it partitioned (see the sketch after this item)
- Cause: if a table has a LOB column and the unload or load takes place in "External Tables" mode, then parallel I/O execution processes (PX processes) cannot be used
- Trace: not applicable
- Remarks: see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
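
One way to apply the workaround above is to rebuild the table as a partitioned table, so that each partition becomes a smaller, separately processed unit of work. A rough sketch, assuming a numeric ID column suitable for hash partitioning; BIG_LOB_TAB and BIG_LOB_TAB_PART are hypothetical names:

-- Create a hash-partitioned copy of the table (the LOB segments follow the
-- table partitions); after verification, drop the original and rename the copy.
CREATE TABLE scott.big_lob_tab_part
PARTITION BY HASH (id) PARTITIONS 8
AS SELECT * FROM scott.big_lob_tab;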

7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column
- Defect: Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW"
- Symptoms: Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. If DB_BLOCK_CHECKING is enabled, then an ORA-600 [6917] error can be signalled. If not, then the corrupt block can cause subsequent problems, such as ORA-1498 (block check failure) on an analyze of the table.
- Releases: 11.1.0.6 and below
- Fixed in: 10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030
- Patched files: kdbl.o
- Workaround: if possible, use the classic export and import clients to transfer this table
- Cause: internal issue with the column count when loading a table with a LONG/LONG RAW column in Direct Path mode
- Trace: not applicable
- Remarks: see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump"

@ 8. For Support: Enhancement Requests.

@ Open Enhancement Requests:

References

BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE
BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT
BUG:5599947 - DATAPUMP EXPORT VERY SLOW
BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW
BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN
NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export
NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump
NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs
NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump
NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp)
NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump
NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified
http://www.oracle.com/technology/pub/notes/technote_pathvsext.html


Related


Products


  • Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition
  • Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Personal Edition
  • Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Standard Edition
  • Enterprise Management > Enterprise Manager Consoles, Packs, and Plugins > Managing Databases using Enterprise Manager > Enterprise Manager for RDBMS

Keywords


IMPORTING DATA; CONVENTIONAL PATH; DIRECT PATH; LOB; PARALLELISM; IMPDP; EXTERNAL TABLES; DATAPUMP

Errors


ORA-600 [6917]; ORA-31696; ORA-1498
