A Case of ORA-31696 During an impdp Data Import

This article describes how to resolve the ORA-31696 error encountered during an Oracle Data Pump import, which typically occurs when importing tables that contain LONG columns. Several solutions are covered, including using specific parameters, adjusting the table structure, or upgrading the database version.


Today, while helping a colleague import data from a dump file into a test database, the import kept failing with ORA-31696:

 

[racdb2@oracle]$ impdp pebank/pebank directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

 

Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 16 October, 2012 14:47:23

 

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

Master table "PEBANK"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "PEBANK"."SYS_IMPORT_FULL_01":  pebank/******** directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "PEBANK"."MCJNL" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLDATA" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLQUERYLOG" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLQUERYLOG" using client specified AUTOMATIC method

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLDATA" using client specified AUTOMATIC method

ORA-31693: Table data object "PEBANK"."MCJNL":"MCJNL_2011_X" failed to load/unload and is being skipped due to error:

ORA-00001: unique constraint (PEBANK.PK_MCJNL1) violated

. . imported "PEBANK"."MCJNL":"MCJNL_2009_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_03"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_04"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_05"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_06"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_07"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_08"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_09"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_03"                0 KB       0 rows

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "PEBANK"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 14:51:44

 

Two of the tables failed to import with ORA-31696: the table data could not be exported/imported using the client-specified AUTOMATIC method.

A quick search showed that this is a known Oracle bug, Bug 4239903 "IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE": both tables contain a LONG column, and I had used the table_exists_action option.

This bug affects versions 10.1.0.2 through 10.2.0.4: when impdp imports a table that has a LONG column, and the table already exists in the target database with table_exists_action=append, ORA-31696 is raised.
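To confirm which tables are affected, a quick data-dictionary check for LONG columns can help (run as the importing schema; this query is my addition, not part of the original note):

```sql
-- List tables in the current schema that contain a LONG or LONG RAW column;
-- these are the candidates for the ORA-31696 restriction described here.
SELECT table_name, column_name, data_type
  FROM user_tab_columns
 WHERE data_type IN ('LONG', 'LONG RAW')
 ORDER BY table_name;
```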

According to the official documentation, in versions 10.1.0.2 through 10.2.0.4 there are several options when importing a table with a LONG column into a database where the table already exists: use table_exists_action=replace; import metadata and data in a single run (importing with content=metadata_only first and then content=data_only does not work, as both must be loaded together); drop or disable the constraints on the table before the import; fall back to the original exp/imp utilities instead of expdp/impdp; or upgrade the database to 10.2.0.5, where this bug is fixed:


DataPump Import (IMPDP) Fails For Table With Column Datatype LONG With Error ORA-31696 [ID 305819.1]



Modified: 2012-3-9  Type: PROBLEM  Status: PUBLISHED  Priority: 3


In this Document
  Symptoms
  Cause
  Solution
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2
Information in this document applies to any platform.

Symptoms

DataPump import fails with error ORA-31696 while loading data into a pre-existing table, if there is a LONG column in that table. This is demonstrated by the following example:

connect / as sysdba

create user test identified by test default tablespace users temporary tablespace temp;
grant connect, resource to test;

create or replace directory tmp as '/tmp';
grant read, write on directory tmp to test;

connect test/test

-- create table with LONG column
create table a_tab
(
   id    number,
   text_v varchar2(10),
   text_l long
);
alter table a_tab add constraint a_tab_pk primary key (id);

-- populate the table
begin
  for i in 1..10 loop
    insert into a_tab values (i, 'Text '||lpad(to_char(i), 5, '0'), 'Text LONG '||lpad(to_char(i), 990, '0'));
  end loop;
  commit;
end;
/

set long 1000

select * from a_tab;

ID        TEXT_V
---------- ----------
TEXT_L
--------------------------------------------------------------------------------
        1 Text 00001
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
....
0000000000000000000000000000000000000001

        2 Text 00002
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000002
....

10 rows selected.


Export the table with:

#> expdp test/test directory=tmp dumpfile=a_tab.dmp content=data_only tables=a_tab logfile=expdp_a_tab.log


Then:

truncate table a_tab;


and import the data with:

#> impdp test/test directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log


This fails with error:

Import: Release 10.2.0.1.0 - 64bit Production on Friday, 09 March, 2012 9:58:40

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"TEST"."A_TAB" using client specified AUTOMATIC method
Job "TEST"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 09:58:43

Cause

The following restrictions exist regarding data load into pre-existing tables:

- cannot use external table mode if there is a LONG column
- cannot use direct path load mode if an enabled constraint other than a table check constraint is present on the pre-existing table

Due to these restrictions, the procedure KUPD$DATA_INT.SELECT_MODE returns 'load not possible', and DataPump import fails with the error message.

Solution

Please choose one of the following options:

1. Import both metadata and data at once, if the table has a LONG column and an enabled constraint.

Or:

2. First disable (or drop) the constraints on the existing table and then start the import.
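A minimal sketch of this option, reusing the A_TAB example table and its A_TAB_PK constraint from the demonstration above:

```sql
-- Disable the enabled constraint that blocks the data load
ALTER TABLE a_tab DISABLE CONSTRAINT a_tab_pk;

-- ... run the impdp job with table_exists_action=append here ...

-- Re-enable and validate the constraint once the load has finished
ALTER TABLE a_tab ENABLE VALIDATE CONSTRAINT a_tab_pk;
```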

Or:

3. Use the original export/import (exp/imp) to transfer the table from source to target.

Or:

4. Beginning with version 10.2.0.5, importing data into a pre-existing table with a LONG column is possible. The same test above returns during import:

Import: Release 10.2.0.5.0 - 64bit Production on Friday, 09 March, 2012 9:57:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.46 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:57:12

 

Import: Release 11.1.0.7.0 - 64bit Production on Friday, 09 March, 2012 9:49:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.75 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:49:34

 

Import: Release 11.2.0.3.0 - Production on Fri Mar 9 09:45:23 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.74 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:45:31

References

BUG:4239903 - IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE

 

Further reading:

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]



Modified: 2011-8-26  Type: HOWTO  Status: PUBLISHED  Priority: 3


In this Document
  Goal
  Solution
     1. Introduction.
     2. Export Data Pump: unloading data in "Direct Path" mode.
     3. Export Data Pump: unloading data in "External Tables" mode.
     4. Import Data Pump: loading data in "Direct Path" mode.
     5. Import Data Pump: loading data in "External Tables" mode.
     6. How to enforce a specific load/unload method?
     7. Known issues.
     @ 8. For Support: Enhancement Requests.
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 7-Feb-2011***

Goal

Starting with Oracle 10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.

Solution

1. Introduction.

Data Pump can use four mechanisms to move data in and out of a database:

  • Data file copying;
  • Direct path;
  • External tables;
  • Network link import.

The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.

1.1. Direct Path mode.
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.

1.2. External Tables mode.
If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database.
Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.

1.3. Data File Copying mode.
This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.

1.4. Network Link Import mode.
This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.

The "Data File Copying" and "Network LinkImport" methods to move data in and out of databases are outside the scopeof this article, and therefore not discussed any further.

For details about the access methods of the classic export client (exp), see:
Note:155477.1 "Parameter DIRECT: Conventional Path Export Versus Direct Path Export"

2. Export Data Pump: unloading data in "Direct Path" mode.

Export Data Pump will use the "Direct Path" mode to unload data in the following situations:

EXPDP will use DIRECT_PATH mode if:

2.1. The structure of a table allows a Direct Path unload, i.e.:
     - The table does not have fine-grained access control enabled for SELECT.
     - The table is not a queue table.
     - The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
     - The table does not contain encrypted columns.
     - The table does not contain a column of an evolved type that needs upgrading.
     - If the table has a column of datatype LONG or LONG RAW, then this column is the last column.

2.2. The parameters QUERY, SAMPLE, or REMAP_DATA were not used for the specified table in the Export Data Pump job.

2.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).

Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).

3. Export Data Pump: unloading data in "External Tables" mode.

Export Data Pump will use the "External Tables" mode to unload data in the following situations:

EXPDP will use EXTERNAL_TABLE mode if:

3.1. Data cannot be unloaded in Direct Path mode, because of the structure of the table, i.e.:
     - Fine-grained access control for SELECT is enabled for the table.
     - The table is a queue table.
     - The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
     - The table contains encrypted columns.
     - The table contains a column of an evolved type that needs upgrading.
     - The table contains a column of type LONG or LONG RAW that is not last.

3.2. Data could also have been unloaded in "Direct Path" mode, but the parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified table in the Export Data Pump job.

3.3. Data could also have been unloaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the unload even more.

Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

4. Import Data Pump: loading data in "Direct Path" mode.

Import Data Pump will use the "Direct Path" mode to load data in the following situations:

IMPDP will use DIRECT_PATH if:

4.1. The structure of a table allows a Direct Path load, i.e.:
     - A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
     - A domain index does not exist for a LOB column.
     - The table is not in a cluster.
     - The table does not have BFILE columns or columns of opaque types.
     - The table does not have VARRAY columns with an embedded opaque type.
     - The table does not have encrypted columns.
     - Supplemental logging is not enabled, or supplemental logging is enabled and the table does not have a LOB column.
     - The table into which data is being imported is a pre-existing table and:
        – There is not an active trigger, and:
        – The table is partitioned and has an index, and:
        – Fine-grained access control for INSERT mode is not enabled, and:
        – A constraint other than table check does not exist, and:
        – A unique index does not exist.

4.2. The parameters QUERY and REMAP_DATA were not used for the specified table in the Import Data Pump job.

4.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).


5. Import Data Pump: loading data in "External Tables" mode.

Import Data Pump will use the "External Tables" mode to load data in the following situations:

IMPDP will use EXTERNAL_TABLE if:

5.1. Data cannot be loaded in Direct Path mode, because at least one of the following conditions exists:
     - A global index on a multipartition table exists during a single-partition load. This includes object tables that are partitioned.
     - A domain index exists for a LOB column.
     - A table is in a cluster.
     - A table has BFILE columns or columns of opaque types.
     - A table has VARRAY columns with an embedded opaque type.
     - The table has encrypted columns.
     - Supplemental logging is enabled and the table has at least one LOB column.
     - The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
        – There is an active trigger
        – The table is partitioned and does not have any indexes
        – Fine-grained access control for INSERT mode is enabled for the table.
        – An enabled constraint exists (other than table check constraints)
        – A unique index exists
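To see which of the pre-existing-table conditions above apply to a given target table, a dictionary query along these lines can help (A_TAB is the example table from the first note; this query is illustrative, not part of the original document):

```sql
-- Conditions that force External Tables mode on import into an existing table:
-- active triggers, enabled non-check constraints, and unique indexes.
SELECT trigger_name, status
  FROM user_triggers
 WHERE table_name = 'A_TAB';

SELECT constraint_name, constraint_type, status
  FROM user_constraints
 WHERE table_name = 'A_TAB'
   AND constraint_type <> 'C';   -- ignore table check constraints

SELECT index_name, uniqueness
  FROM user_indexes
 WHERE table_name = 'A_TAB'
   AND uniqueness = 'UNIQUE';
```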

5.2. Data could also have been loaded in "Direct Path" mode, but the parameters QUERY or REMAP_DATA were used for the specified table in the Import Data Pump job.

5.3. Data could also have been loaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the load even more.

Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

6. How to enforce a specific load/unload method?

In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Example:

%expdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

or:

%impdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

Important Need-To-Know's when the parameter ACCESS_METHOD is specified for a job:

  • The parameter ACCESS_METHOD is an undocumented parameter and should only be used when requested by Oracle Support.
  • If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.
  • If Import Data Pump cannot choose due to conflicting restrictions, an error will be reported:
    ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."EMP" using client specified AUTOMATIC method
  • The parameter can only be specified when the Data Pump job is initially started (i.e. the parameter cannot be specified when the job is restarted).
  • If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.
  • Enforcing a specific method may result in slower performance of the overall Data Pump job, or errors such as:

...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."MY_TAB" using client specified DIRECT_PATH method
...

  • To determine which access method is used, a Worker trace file can be created, e.g.:

%expdp system/manager DIRECTORY=my_dir \ 
DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log \ 
TABLES=scott.my_tab TRACE=400300

The Worker trace file shows the method with which the data was loaded (or unloaded for Export Data Pump):

... 
KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB" 
KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1
...

For details, see also:
Note:286496.1 "Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"

7. Known issues.

7.1. Bug 4722517 - Materialized view log not updated after import into existing table
Defect:  Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE"
Symptoms:  a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log
Releases:  10.1.0.2.0 and higher
Fixed in:  not applicable, closed as not-a-bug
Patched files:  not applicable
Workaround:  if possible import into a temporary holding table, then copy the data with "insert as select" into the master table
Cause:  a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump
Trace:  not applicable, changes are not propagated
Remarks:  see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs"

7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column
Defect:  Bug:5599947 "DATAPUMP EXPORT VERY SLOW"
Symptoms:  Export Data Pump has low performance when exporting a table with a LOB column
Releases:  11.1.0.6 and below
Fixed in:  not applicable, closed as not feasible to fix
Patched files:  not applicable
Workaround:  if possible re-organize the large table with the LOB column and make it partitioned
Cause:  if a table has a LOB column, and the unload or load takes place in "External Tables" mode, then parallel I/O execution processes (PX processes) cannot be used
Trace:  not applicable
Remarks:  see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column
Defect:  Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW"
Symptoms:  Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. If DB_BLOCK_CHECKING is enabled then an ORA-600 [6917] error can be signalled. If not, then the corrupt block can cause subsequent problems, like ORA-1498 (block check failure) on an analyze of the table.
Releases:  11.1.0.6 and below
Fixed in:  10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030
Patched files:  kdbl.o
Workaround:  if possible use the classic export and import clients to transfer this table
Cause:  internal issue with column count when loading a table with a LONG/LONG RAW column in Direct Path mode
Trace:  not applicable
Remarks:  see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump"

@ 8. For Support: Enhancement Requests.

@ Open Enhancement Requests: 

References

BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE
BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT
BUG:5599947 - DATAPUMP EXPORT VERY SLOW
BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW
BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN
NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export
NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump
NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs
NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump
NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import Data Pump (impdp)
NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump
NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified
http://www.oracle.com/technology/pub/notes/technote_pathvsext.html

 

 
