Transporting Tables, Partitions, or Subpartitions

A transportable tables operation moves metadata for the specified tables, partitions, or subpartitions to the target database.

As with a transportable tablespaces import, a transportable tables import automatically creates the corresponding tablespaces in the target database. However, the tablespaces contain only the imported tables; the space occupied by other objects is freed.

To move the data, you copy the data files for these tablespaces to the target database. The Data Pump import automatically frees the blocks in the data files occupied by tables, partitions, or subpartitions that were not part of the transportable tables operation. It also frees the blocks occupied by the dependent objects of the tables that were not part of the transportable tables operation.

You can transport the tables, partitions, and subpartitions in the following ways:

  1. Using an export dump file
  2. Over the network

Note: If only a subset of a table's partitions are exported in a transportable tables operation, then on import each partition becomes a non-partitioned table.

Limitations on Transportable Tables

Be aware of the following limitations for transportable tables:

  1. The general limitations described in "About Transport Data" apply to transportable tables.
  2. You cannot transport a table to a target database that contains a table of the same name in the same schema. However, you can use the REMAP_TABLE import parameter to import the data into a different table.
  3. You cannot transport tables with TIMESTAMP WITH TIMEZONE (TSTZ) data across platforms with different time zone file versions.
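For example, if the target schema already contains a table of the same name, REMAP_TABLE can bring the data in under a different name. A minimal sketch (the dump file, data file path, and table names here are illustrative, not part of the example that follows):

```
impdp user_name dumpfile=trans_tbl.dmp directory=data_pump_dir
   transport_datafiles='/u01/app/oracle/oradata/targetdb/trans_tbl.dbf'
   remap_table=hr.employees:employees_copy
```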

  • Transporting Tables, Partitions, or Subpartitions Using an Export Dump File

The following SQL statements create the sales_prt table and its partitions in the sh schema, and the tablespace and data file for the table. The statements also insert data into the partitions using data from the sh sample schema.

CREATE TABLESPACE sales_prt_tbs DATAFILE 'sales_prt.dbf' SIZE 20M ONLINE;

CREATE TABLE sh.sales_prt
    (prod_id        NUMBER(6),
     cust_id        NUMBER,
     time_id        DATE,
     channel_id     CHAR(1),
     promo_id       NUMBER(6),
     quantity_sold  NUMBER(3),
     amount_sold    NUMBER(10,2))
   PARTITION BY RANGE (time_id)
      (PARTITION SALES_Q1_2000 VALUES LESS THAN
         (TO_DATE('01-APR-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
       PARTITION SALES_Q2_2000 VALUES LESS THAN
         (TO_DATE('01-JUL-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
       PARTITION SALES_Q3_2000 VALUES LESS THAN
         (TO_DATE('01-OCT-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
       PARTITION SALES_Q4_2000 VALUES LESS THAN
         (TO_DATE('01-JAN-2001','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')))
   TABLESPACE sales_prt_tbs;

 

INSERT INTO sh.sales_prt PARTITION(sales_q1_2000)
  SELECT * FROM sh.sales PARTITION(sales_q1_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q2_2000)
  SELECT * FROM sh.sales PARTITION(sales_q2_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q3_2000)
  SELECT * FROM sh.sales PARTITION(sales_q3_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q4_2000)
  SELECT * FROM sh.sales PARTITION(sales_q4_2000);
COMMIT;

This example makes the following additional assumptions:

  1. The name of the source database is sourcedb.
  2. The source database and target database are running on the same platform with the same endianness. To check the endianness of a platform, run the following query:

SELECT d.PLATFORM_NAME, tp.ENDIAN_FORMAT
   FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
   WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

  3. Only the sales_q1_2000 and sales_q2_2000 partitions are transported to the target database. The other two partitions are not transported.

Complete the following tasks to transport the partitions using an export dump file:

Task 1   Generate the Export Dump File

Generate the export dump file by completing the following steps:

  1. Make all of the tablespaces that contain the tables being transported read-only.

ALTER TABLESPACE sales_prt_tbs READ ONLY;

  2. Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role, and specify the transportable tables options.

expdp user_name dumpfile=sales_prt.dmp directory=data_pump_dir
      tables=sh.sales_prt:sales_q1_2000,sh.sales_prt:sales_q2_2000
      transportable=always logfile=exp.log

Note: In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to users SYS and SYSTEM. However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
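A directory object for a PDB might be created as follows (the directory name, file system path, and grantee are illustrative; any path writable by the database server works):

```sql
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/admin/pdb1/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO dp_user;
```

The importing user then passes `directory=dp_dir` on the impdp command line instead of `directory=data_pump_dir`.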

Task 2   Transport the Export Dump File

Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object on the target database, or to any other directory of your choosing. The new location must be accessible to the target database.

At the target database, run the following query to determine the location of DATA_PUMP_DIR:

SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';

Task 3   Transport the Data Files for the Tables

Transport the data files of the tablespaces containing the tables being transported to a place that is accessible to the target database.
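Any mechanism that copies the files byte-for-byte works here, such as an operating system copy. Alternatively, the copy can be done from within the database with DBMS_FILE_TRANSFER, assuming directory objects exist for both locations (the directory names SRC_DIR and DST_DIR below are illustrative):

```sql
-- Copy the data file between two directory-object locations.
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'sales_prt.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'sales_prt.dbf');
END;
/
```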

Task 4   (Optional) Restore Tablespaces to Read/Write Mode

Make the tablespaces that contain the tables being transported read/write again at the source database, as follows:

ALTER TABLESPACE sales_prt_tbs READ WRITE;

Task 5   At the Target Database, Import the Partitions

At the target database, invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role and specify the transportable tables options.

impdp user_name dumpfile=sales_prt.dmp directory=data_pump_dir
   transport_datafiles='/u01/app/oracle/oradata/targetdb/sales_prt.dbf'
   tables=sh.sales_prt:sales_q1_2000,sh.sales_prt:sales_q2_2000
   logfile=imp.log

Note: During the import, tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
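As an optional check after the import, a query such as the following (names match this example) lists the tables occupying the transported tablespace. Because only a subset of the partitions was transported, each imported partition appears as a separate non-partitioned table:

```sql
SELECT TABLE_NAME, TABLESPACE_NAME, PARTITIONED
   FROM DBA_TABLES
   WHERE TABLESPACE_NAME = 'SALES_PRT_TBS';
```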

Transporting Tables, Partitions, or Subpartitions Over the Network

To transport tables over the network, you perform an import using the NETWORK_LINK parameter. The import is performed over a database link, and no dump file is involved.

The tasks for transporting tables over the network are illustrated in the example that follows, which assumes that these tables exist in the source database:

Table            Tablespace   Data File
hr.emp_ttbs      emp_tsp      /u01/app/oracle/oradata/sourcedb/emp.dbf
oe.orders_ttbs   orders_tsp   /u01/app/oracle/oradata/sourcedb/orders.dbf

This example transports these tables to the target database. To complete the example, these tables must exist on the source database.

The following SQL statements create the tables in the hr and oe schemas, and the tablespaces and data files for the tables.

CREATE TABLESPACE emp_tsp DATAFILE 'emp.dbf' SIZE 1M ONLINE;

CREATE TABLE hr.emp_ttbs(
   employee_id    NUMBER(6),
   first_name     VARCHAR2(20),
   last_name      VARCHAR2(25),
   email          VARCHAR2(25),
   phone_number   VARCHAR2(20),
   hire_date      DATE,
   job_id         VARCHAR2(10),
   salary         NUMBER(8,2),
   commission_pct NUMBER(2,2),
   manager_id     NUMBER(6),
   department_id  NUMBER(4))
 TABLESPACE emp_tsp;

INSERT INTO hr.emp_ttbs SELECT * FROM hr.employees;

CREATE TABLESPACE orders_tsp DATAFILE 'orders.dbf' SIZE 1M ONLINE;

CREATE TABLE oe.orders_ttbs(
   order_id      NUMBER(12),
   order_date    TIMESTAMP WITH LOCAL TIME ZONE,
   order_mode    VARCHAR2(8),
   customer_id   NUMBER(6),
   order_status  NUMBER(2),
   order_total   NUMBER(8,2),
   sales_rep_id  NUMBER(6),
   promotion_id  NUMBER(6))
 TABLESPACE orders_tsp;

INSERT INTO oe.orders_ttbs SELECT * FROM oe.orders;
COMMIT;

This example makes the following additional assumptions:

  1. The name of the source database is sourcedb.
  2. The source database and target database are running on the same platform with the same endianness. To check the endianness of a platform, run the following query:

SELECT d.PLATFORM_NAME, tp.ENDIAN_FORMAT
   FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
   WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

Complete the following tasks to transport the tables over the network:

Task 1   Create a Database Link from the Target Database to the Source Database

Start SQL*Plus and connect to the target database as the administrator who will transport the data with Data Pump import. This user must have DATAPUMP_IMP_FULL_DATABASE role to transport the data. Create the database link:

CREATE PUBLIC DATABASE LINK sourcedb USING 'sourcedb';

During the import operation, the database link must connect to a user on the source database with DATAPUMP_EXP_FULL_DATABASE role. The user on the source database cannot be a user with SYSDBA administrative privilege.
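If the importing user's own credentials should not be used over the link, the link can instead name a specific source-database user that has the DATAPUMP_EXP_FULL_DATABASE role. A sketch (the user name and password are placeholders):

```sql
CREATE PUBLIC DATABASE LINK sourcedb
   CONNECT TO dp_exp_user IDENTIFIED BY password
   USING 'sourcedb';
```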

Task 2   Make the Tablespaces Containing the Tables Read-Only

Make all of the tablespaces that contain data to be transported read-only.

ALTER TABLESPACE emp_tsp READ ONLY;

ALTER TABLESPACE orders_tsp READ ONLY;

Task 3   Transport the Data Files for the Tables

Transport the data files of the tablespaces containing the tables being transported to a place that is accessible to the target database.

Task 4   At the Target Database, Import the Database

Invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role and specify the transportable tables options.

impdp user_name network_link=sourcedb transportable=always
   transport_datafiles=
      '/u01/app/oracle/oradata/targetdb/emp.dbf',
      '/u01/app/oracle/oradata/targetdb/orders.dbf'
   tables=hr.emp_ttbs,oe.orders_ttbs
   logfile=import.log

Note: During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
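An optional way to confirm this is to query the tablespace status on the target after the import (view and tablespace names match this example); READ ONLY tablespaces are reported with that status, read/write tablespaces as ONLINE:

```sql
SELECT TABLESPACE_NAME, STATUS
   FROM DBA_TABLESPACES
   WHERE TABLESPACE_NAME IN ('EMP_TSP', 'ORDERS_TSP');
```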

Task 5   (Optional) Restore Tablespaces to Read/Write Mode

Make the tablespaces that contain the tables being transported read/write again at the source database, as follows:

ALTER TABLESPACE emp_tsp READ WRITE;

ALTER TABLESPACE orders_tsp READ WRITE;
