How to transport tablespaces between two machines that use ASM disk groups

Posted on 2015-9-30 17:52:16

How to Create Transportable Tablespaces Where the Source and Destination are ASM-Based (Doc ID 394798.1)












In this Document


Goal

Solution

References




Applies to:
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.3 [Release 10.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 28-May-2010***



Goal

The purpose of this note is to describe a method to create transportable tablespaces (TTS) where the source and destination databases are ASM-based. The method uses standard utilities such as Data Pump and the database package DBMS_FILE_TRANSFER, which is used to transfer the Data Pump metadata dump file and the datafiles to the remote database server.

This note illustrates a working example of creating and plugging in transportable tablespaces. In the example, two servers, host1 and host2, are in different locations, each on an independent Hitachi storage array. Server host1 is the source server and houses the database db1. Server host2 is the target server, subscribes to the transportable tablespaces, and houses the database db2.

Solution

If the source and target platforms are different and have different endianness, refer to Note 371556.1.
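To check whether the two platforms share the same endian format before starting, you can query V$TRANSPORTABLE_PLATFORM on each database; the query below is one quick way to write that check:

SQL> select d.platform_name, tp.endian_format
  2  from v$transportable_platform tp, v$database d
  3  where tp.platform_name = d.platform_name;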




TTS Implementation illustration

Preliminary steps to set up TTS

1. Create (or reuse) two tablespaces on the source database. Two tablespaces are not strictly necessary; they are used here only to exercise object dependency between them. Note that OMF is used in this illustration, so set the init.ora parameter DB_CREATE_FILE_DEST to the appropriate disk group name.

SQL> show parameter db_create_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      +DATA

SQL> create tablespace tts_1;

SQL> create tablespace tts_2;

2. Create a table in TTS_1 and an index in TTS_2 to ensure the tablespaces have object dependencies:

SQL> connect scott/tiger
Connected.
SQL> create table emp_copy tablespace tts_1 as select * from emp;

SQL> select count(*) from emp_copy;
10 rows returned

SQL> create index emp_copy_i on emp_copy (empno)
2 tablespace tts_2;


3. Check that you have a self-contained transportable set. Oracle provides a PL/SQL package that assists with this check.

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('tts_1,tts_2', TRUE);

Then query the TRANSPORT_SET_VIOLATIONS view to see whether any dependency violations exist.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

No rows returned


4. Create a new service name entry pointing to the destination database to which the tablespaces will be transported. For example, add the following lines to tnsnames.ora:

DB2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = host2.us.oracle.com)
      (INSTANCE_NAME = db2)
    )
  )
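
Before moving on, you can verify that the new entry resolves from host1 (an optional check):

[ora10g@host1]$ tnsping db2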


5. As SYSTEM, create a database link between the two databases. This is needed because we will be using the DBMS_FILE_TRANSFER package to move metadata between the two databases.

SQL> create database link db2 connect to system identified by manager1 using 'db2';
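
A quick way to confirm the link works (optional sanity check) is to select through it:

SQL> select * from dual@db2;

D
-
X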

6. Create a directory object in the source database db1 to hold the dump file. Since we are using ASM, this directory points into an ASM disk group:

(make sure the directory path ends with a '/')

SQL> create directory tts_dump as '+DATA';

7. Create another directory object in the source database, pointing to an operating system path, to hold the log file:

SQL> create directory tts_dump_log as '/export/home/tts_log';


8. Create a directory object in the source database that points to the datafiles:

SQL> create directory tts_datafile as '+DATA/db1/datafile';

9. Grant read/write access on these directories to the user that will perform the export (only needed if using a non-privileged user):

SQL> grant read, write on directory tts_dump to system;

SQL> grant read, write on directory tts_dump_log to system;

SQL> grant read, write on directory tts_datafile to system;


10. Repeat the last four steps (6-9) on the target database db2 as well; a sketch of the target-side statements follows.
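
For reference, the target-side statements would look like the sketch below. The paths shown here ('+DATA', '/export/home/tts_log' and '+DATA/db2/datafile' on host2) are assumptions; adjust them to your own environment.

SQL> create directory tts_dump as '+DATA';
SQL> create directory tts_dump_log as '/export/home/tts_log';
SQL> create directory tts_datafile as '+DATA/db2/datafile';
SQL> grant read, write on directory tts_dump to system;
SQL> grant read, write on directory tts_dump_log to system;
SQL> grant read, write on directory tts_datafile to system;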

11. Make all tablespaces in the transportable set read-only.

SQL> ALTER TABLESPACE tts_1 READ ONLY;

SQL> ALTER TABLESPACE tts_2 READ ONLY;

12. Check the status of the tablespaces on the source database:

SQL> select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
UNDOTBS1                       ONLINE
SYSAUX                         ONLINE
TEMP                           ONLINE
USERS                          ONLINE
UNDOTBS2                       ONLINE
TTS_1                          READ ONLY
TTS_2                          READ ONLY

Export metadata

13. Export the metadata for the two tablespaces:

[ora10g@host1]$ expdp system/manager1 directory=tts_dump dumpfile=tts1_db1.dmp logfile=tts_dump_log:tts.log
transport_tablespaces=tts_1,tts_2 transport_full_check=y

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02": system/******** directory=tts_datafile dumpfile=tts1.dmp logfile=tts_dump_log:tts.log transport
_tablespaces=tts_1,tts_2 transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02" successfully loaded/unloaded
***********************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_02 is:
+DATA/tts1.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02" successfully completed at 14:00:34

Movement of data and enabling TTS

14. Use DBMS_FILE_TRANSFER to send the dump file across to the target database:

[ora10g@host1]$ sqlplus system/manager1
SQL> begin
2 dbms_file_transfer.put_file
3 (source_directory_object => 'tts_dump',
4 source_file_name => 'tts1_db1.dmp',
5 destination_directory_object => 'tts_dump',
6 destination_file_name => 'tts1_db1.dmp',
7 destination_database => 'db2');
8 end;
9 /

15. Check the file names on the source database for the two tablespaces being transported:

SQL> SELECT file_name FROM dba_data_files
2 WHERE tablespace_name LIKE 'TTS%';

FILE_NAME
--------------------------------------------------
+DATA/db1/datafile/tts_1.294.590721319
+DATA/db1/datafile/tts_2.295.586721335

16. Transfer the two datafiles to the target database using DBMS_FILE_TRANSFER.

TTS1 datafile

SQL> begin
2 dbms_file_transfer.put_file
3 (source_directory_object => 'tts_datafile',
4 source_file_name => 'tts_1.294.590721319',
5 destination_directory_object => 'tts_datafile',
6 destination_file_name => 'tts1_db1.dbf',
7 destination_database => 'db2');
8 end;
9 /

TTS2 datafile
SQL> begin
2 dbms_file_transfer.put_file
3 (source_directory_object => 'tts_datafile',
4 source_file_name => 'tts_2.295.586721335',
5 destination_directory_object => 'tts_datafile',
6 destination_file_name => 'tts2_db1.dbf',
7 destination_database => 'db2');
8 end;
9 /


17. On host2 (the target server), import the tablespace metadata using Data Pump.

imp.par has the following contents:

directory=tts_dump
dumpfile=tts1_db1.dmp
logfile=tts_dump_log:tts1.log
TRANSPORT_DATAFILES='+DATA/db2/datafile/tts1_db1.dbf','+DATA/db2/datafile/tts2_db1.dbf'
keep_master=y


Note: for the TRANSPORT_DATAFILES parameter, you can use either the alias names (the file names given to dbms_file_transfer) or the system-generated names created by DBMS_FILE_TRANSFER (these start with 'File_Transfer.xxxx.xxxxxx'). To find the system-generated names, use the asmcmd command-line tool: 'cd +DATA/db2/datafile', followed by 'ls -l'.
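
As a sketch of that check (the actual file numbers will differ in your environment, and asmcmd must be run with the environment pointing at the ASM instance), the session on host2 would look something like:

[ora10g@host2]$ asmcmd
ASMCMD> cd +DATA/db2/datafile
ASMCMD> ls -l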


[ora10g@host2]$ impdp system/oracle parfile=imp.par

Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03": system/******** parfile=impdp.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03" successfully completed at 15:05:00

18. Switch the tablespaces back to read-write mode.

SQL> ALTER TABLESPACE tts_1 READ WRITE;

SQL> ALTER TABLESPACE tts_2 READ WRITE;
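
To confirm the change, you can re-run the status query from step 12; once a tablespace is read-write again, its STATUS shows ONLINE:

SQL> select tablespace_name, status from dba_tablespaces
  2  where tablespace_name like 'TTS%';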


19. Verify that the datafiles are successfully plugged in.

SQL> select name from v$datafile;

NAME
--------------------------------------------------
+DATA/db2/datafile/system.271.599658207
+DATA/db2/datafile/undotbs1.268.599658207
+DATA/db2/datafile/sysaux.270.599658207
+DATA/db2/datafile/users.267.599658209
+DATA/db2/datafile/example.262.599658459
+DATA/db2/datafile/tts2_db1.dbf
+DATA/db2/datafile/tts1_db1.dbf


20. Validate that the data arrived by querying the transported tables.

SQL> connect scott/tiger
Connected.
SQL> select count(*) from emp_copy;
10 rows returned



-----



NOTE: A similar procedure applies for filesystem to ASM and vice versa.
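
For the filesystem case, only the directory objects change; a minimal sketch, assuming a hypothetical staging path such as /u01/stage, would be:

SQL> create directory tts_dump as '/u01/stage';
SQL> create directory tts_datafile as '/u01/stage/datafile';

The expdp/impdp commands and the dbms_file_transfer.put_file calls stay the same; DBMS_FILE_TRANSFER can copy files between filesystem and ASM locations in either direction.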


