You can use Data Pump to produce an export dump file, transport the dump file to the target database if necessary, and then import the export dump file. Alternatively, you can use Data Pump to copy the database over the network.
The tablespaces in the database being transported can be either dictionary managed or locally managed. They are not required to have the same block size as the target database's standard block size.
Note: This method for transporting a database requires that you place the user-defined tablespaces in the database in read-only mode until you complete the export. If this is undesirable, then you can use the transportable tablespaces from backup feature.
1. Limitations on Full Transportable Export/Import
Be aware of the following limitations on full transportable export/import:
- The general limitations described in "About Transporting Data" apply to full transportable export/import.
- Full transportable export/import can export and import user-defined database objects in administrative tablespaces using conventional Data Pump export/import, such as direct path or external table.
- Full transportable export/import cannot transport a database object that is defined in both an administrative tablespace (such as SYSTEM and SYSAUX) and a user-defined tablespace. For example, a partitioned table might be stored in both a user-defined tablespace and an administrative tablespace. If you have such database objects in your database, then you can redefine them before transporting them so that they are stored entirely in either an administrative tablespace or a user-defined tablespace. If the database objects cannot be redefined, then you can use conventional Data Pump export/import.
- When transporting a database over the network using full transportable export/import, auditing cannot be enabled for tables stored in an administrative tablespace (such as SYSTEM and SYSAUX) when the audit trail information itself is stored in a user-defined tablespace.
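As a sketch of the redefinition approach mentioned above, a partition stored in an administrative tablespace can be moved into a user-defined one. The table, partition, and index names here are illustrative, not from the example database:

```sql
-- Hypothetical names: move partition q1_sales of table sales_history
-- out of an administrative tablespace into the user-defined sales tablespace.
ALTER TABLE sales_history MOVE PARTITION q1_sales TABLESPACE sales;

-- Moving a partition marks its local index partitions UNUSABLE, so rebuild them.
ALTER INDEX sales_history_idx REBUILD PARTITION q1_sales TABLESPACE sales;
```

After the move, the table is stored entirely in user-defined tablespaces and no longer falls under this limitation.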
2. Transporting a Database Using an Export Dump File
These tasks for transporting a database are illustrated more fully in the example that follows, where it is assumed the following data files and tablespaces exist:
| Tablespace | Type           | Data File                                 |
|------------|----------------|-------------------------------------------|
| sales      | User-defined   | /u01/app/oracle/oradata/mydb/sales01.dbf  |
| customers  | User-defined   | /u01/app/oracle/oradata/mydb/cust01.dbf   |
| employees  | User-defined   | /u01/app/oracle/oradata/mydb/emp01.dbf    |
| SYSTEM     | Administrative | /u01/app/oracle/oradata/mydb/system01.dbf |
| SYSAUX     | Administrative | /u01/app/oracle/oradata/mydb/sysaux01.dbf |
This example makes the following additional assumptions:
- The target database is a new database that is being populated with the data from the source database. The name of the source database is mydb.
- Both the source database and the target database are Oracle Database 12c databases.
Complete the following tasks to transport the database using an export dump file:
Task 1 Generate the Export Dump File
Generate the export dump file by completing the following steps:
- Make all of the user-defined tablespaces in the database read-only.
ALTER TABLESPACE sales READ ONLY;
ALTER TABLESPACE customers READ ONLY;
ALTER TABLESPACE employees READ ONLY;
- Invoke the Data Pump export utility as a user with DATAPUMP_EXP_FULL_DATABASE role and specify the full transportable export/import options.
$ expdp user_name full=y dumpfile=expdat.dmp directory=data_pump_dir
transportable=always logfile=export.log
You must specify TRANSPORTABLE=ALWAYS, which enables the transportable option for the export.
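The user-defined tablespaces that must be made read-only in the first step can be identified with a query such as the following. This is a sketch; adjust the exclusion list to match the administrative tablespaces in your environment:

```sql
-- Permanent tablespaces other than the administrative ones are
-- user-defined and must be read-only before the export.
SELECT TABLESPACE_NAME, STATUS
FROM   DBA_TABLESPACES
WHERE  TABLESPACE_NAME NOT IN ('SYSTEM', 'SYSAUX')
AND    CONTENTS = 'PERMANENT';
```

The CONTENTS filter excludes undo and temporary tablespaces, which are never transported.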
Note: In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to users SYS and SYSTEM. However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
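Following the note above, a directory object can be created in the PDB along these lines. The path and grantee here are placeholders for illustration:

```sql
-- Run while connected to the PDB; the operating-system path is an example only.
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/admin/mypdb/dpdump';

-- Grant access to the user that will run the Data Pump import (placeholder name).
GRANT READ, WRITE ON DIRECTORY dp_dir TO dp_admin;
```

You would then specify directory=dp_dir on the Data Pump command line instead of data_pump_dir.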
Note: To perform a full transportable export on an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database, use the VERSION parameter, as shown in the following example:
expdp user_name full=y dumpfile=expdat.dmp directory=data_pump_dir
transportable=always version=12 logfile=export.log
Note: Full transportable import is supported only for Oracle Database 12c databases.
Task 2 Transport the Export Dump File
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing. At the target database, run the following query to determine the location of DATA_PUMP_DIR:
SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';
Task 3 Transport the Data Files for the User-Defined Tablespaces
Transport the data files of the user-defined tablespaces in the database to a place that is accessible to the target database.
In this example, transfer the following data files from the source database to the target database:
sales01.dbf
cust01.dbf
emp01.dbf
If both platforms have the same endianness, then no conversion is necessary. Otherwise, you must convert the data files of each tablespace being transported, either at the source database or at the target database.
If you are transporting the database to a different platform, you can execute the following query on each platform. If the query returns a row, then the platform supports cross-platform tablespace transport.
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
If the endian formats differ, transfer the data files with the GET_FILE or PUT_FILE procedures of the DBMS_FILE_TRANSFER package, or convert them with RMAN CONVERT before transporting them.
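As a sketch of the DBMS_FILE_TRANSFER approach, PUT_FILE can push a data file from the source to the target over a database link. The directory objects (SRC_DIR, DEST_DIR) and the database link (target_link) named here are assumptions that you must create first:

```sql
-- Push sales01.dbf to the target database over the target_link database link.
-- SRC_DIR is a directory object on the source; DEST_DIR exists on the target.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'sales01.dbf',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'sales01.dbf',
    destination_database         => 'target_link');
END;
/
```

Repeat the call for each data file being transported.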
Task 4 (Optional) Restore Tablespaces to Read/Write Mode
Make the transported tablespaces read/write again at the source database, as follows:
ALTER TABLESPACE sales READ WRITE;
ALTER TABLESPACE customers READ WRITE;
ALTER TABLESPACE employees READ WRITE;
Task 5 At the Target Database, Import the Database
Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable export/import options.
impdp user_name full=Y dumpfile=expdat.dmp directory=data_pump_dir
transport_datafiles=
'/u01/app/oracle/oradata/mydb/sales01.dbf',
'/u01/app/oracle/oradata/mydb/cust01.dbf',
'/u01/app/oracle/oradata/mydb/emp01.dbf'
logfile=import.log
Note: During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
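After the import completes, you can confirm at the target database that the transported tablespaces are available and read/write, using the tablespace names from this example:

```sql
-- STATUS should be ONLINE for each successfully transported tablespace.
SELECT TABLESPACE_NAME, STATUS
FROM   DBA_TABLESPACES
WHERE  TABLESPACE_NAME IN ('SALES', 'CUSTOMERS', 'EMPLOYEES');
```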
3. Transporting a Database Over the Network
To transport a database over the network, you perform an import using the NETWORK_LINK parameter. The import is performed over a database link, so no dump file is involved.
These tasks for transporting a database are illustrated more fully in the example that follows, where it is assumed the following data files and tablespaces exist:
| Tablespace | Type           | Data File                                 |
|------------|----------------|-------------------------------------------|
| sales      | User-defined   | /u01/app/oracle/oradata/mydb/sales01.dbf  |
| customers  | User-defined   | /u01/app/oracle/oradata/mydb/cust01.dbf   |
| employees  | User-defined   | /u01/app/oracle/oradata/mydb/emp01.dbf    |
| SYSTEM     | Administrative | /u01/app/oracle/oradata/mydb/system01.dbf |
| SYSAUX     | Administrative | /u01/app/oracle/oradata/mydb/sysaux01.dbf |
This example makes the following additional assumptions:
- The target database is a new database that is being populated with the data from the source database. The name of the source database is sourcedb.
- The source database and target database are running on the same platform with the same endianness. To check the endianness of a platform, run the following query:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
- The sales tablespace is encrypted. The other tablespaces are not encrypted.
- The source database is an Oracle Database 11g Release 2 (11.2.0.3) database and the target database is an Oracle Database 12c database.
Note: This example illustrates the tasks required to transport an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database to a new Oracle Database 12c PDB inside of a CDB. See Managing a Multitenant Environment. These tasks also illustrate how to transport one non-CDB to another non-CDB.
Complete the following tasks to transport the database over the network:
Task 1 Create a Database Link from the Target Database to the Source Database
Create a database link from the target database to the source database:
CREATE PUBLIC DATABASE LINK sourcedb USING 'sourcedb';
Note: The import operation must be performed by a user on the target database with DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with DATAPUMP_EXP_FULL_DATABASE role.
The user on the source database cannot be a user with SYSDBA administrative privilege. If the database link is a connected user database link, then the user on the target database cannot be a user with SYSDBA administrative privilege.
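Given the role requirements above, you may prefer a fixed-user database link instead of a connected-user link. A sketch follows; the user name and password are placeholders, and the named user must hold the DATAPUMP_EXP_FULL_DATABASE role on the source database:

```sql
-- Fixed-user link: every session on the link connects as dpump_exp_user.
CREATE PUBLIC DATABASE LINK sourcedb
  CONNECT TO dpump_exp_user IDENTIFIED BY my_password
  USING 'sourcedb';
```

With a fixed-user link, the restriction on SYSDBA connected users on the target database does not apply, because the link does not inherit the local user's credentials.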
Task 2 Make the User-Defined Tablespaces Read-Only
Make all of the user-defined tablespaces in the database read-only.
ALTER TABLESPACE sales READ ONLY;
ALTER TABLESPACE customers READ ONLY;
ALTER TABLESPACE employees READ ONLY;
Task 3 Transport the Data Files for the User-Defined Tablespaces
Transport the data files to the location of the existing data files of the target database.
In this example, transfer the following data files from the source database to the target database:
sales01.dbf
cust01.dbf
emp01.dbf
Task 4 At the Target Database, Import the Database
Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable export/import options.
impdp user_name full=Y network_link=sourcedb transportable=always
transport_datafiles=
'/u01/app/oracle/oradata/mydb/sales01.dbf',
'/u01/app/oracle/oradata/mydb/cust01.dbf',
'/u01/app/oracle/oradata/mydb/emp01.dbf'
encryption_pwd_prompt=YES version=12 logfile=import.log
Note: During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
Task 5 (Optional) Restore User-Defined Tablespaces to Read/Write Mode
Make the user-defined tablespaces read/write again at the source database, as follows:
ALTER TABLESPACE sales READ WRITE;
ALTER TABLESPACE customers READ WRITE;
ALTER TABLESPACE employees READ WRITE;
You can postpone this task to first ensure that the import process succeeds.