
Migrate a cross-account TDE-enabled Amazon RDS for Oracle DB instance with reduced downtime using AWS DMS

Migrating a Transparent Data Encryption (TDE)-enabled Amazon Relational Database Service (Amazon RDS) for Oracle database instance from one AWS account to another is a common use case in scenarios when you acquire, sell, or merge operations, or if you’re reorganizing your AWS accounts and organizational structure.

TDE is a permanent and persistent option in RDS for Oracle option groups, and a DB snapshot that uses an option group with permanent or persistent options can’t be shared with another AWS account. For more information, see Sharing a DB snapshot. In this post, we discuss the steps and best practices to migrate a TDE-enabled RDS for Oracle DB instance from one AWS account to another with reduced downtime using Oracle Data Pump and AWS Database Migration Service (AWS DMS). For Amazon RDS for Oracle, AWS DMS doesn’t support TDE on the column level. Therefore, we also demonstrate the extra steps needed to migrate tables with TDE on the column level.

This migration approach involves three phases:

Initial data load – We use Oracle Data Pump to perform the initial data load in the TDE-enabled RDS for Oracle DB instance.
Ongoing replication – AWS DMS is a fully managed service that supports change data capture (CDC) for RDS for Oracle DB instances. You can use AWS DMS for ongoing replication and minimize the outage window.
Migrate tables containing columns encrypted with TDE during cutover – AWS DMS doesn’t support the replication of tables with columns that use TDE for Amazon RDS for Oracle. You need to refresh those tables again during cutover.

Solution overview

In this post, we use the Oracle Data Pump API for the initial data load and an AWS DMS CDC-only task for ongoing replication. The following diagram illustrates this architecture.

To implement this solution, we complete the following steps:

Prepare the source RDS for Oracle DB instance for migration.
Prepare the target RDS for Oracle DB instance for migration.
Capture the System Change Number (SCN).
Export the source database using Oracle Data Pump.
Transfer the Oracle Data Pump export dump file set to the target DB instance.
Load data in the target database using Oracle Data Pump.
Validate the target database.
Enable Amazon RDS backup retention and archive logging on the target DB instance.
Configure ongoing replication using AWS DMS.
Reload tables with TDE on the column level and perform cutover.

For this demonstration, the source AWS account is 6144xxxxxxxx and the target account is 2634xxxxxxxx. Both AWS accounts are part of the same AWS Organization.

The source RDS for Oracle DB instance is named rds-oracle-source-01 and the target is rds-oracle-target-01. Both DB instances run non-multitenant Oracle Database 19c. We use AWS DMS for ongoing replication. In this post, we use DMS_SAMPLE as a sample schema for migration. The DMS_SAMPLE schema contains two tables with TDE-encrypted columns.
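You can identify which tables contain TDE-encrypted columns, and therefore must be reloaded during cutover, by querying the DBA_ENCRYPTED_COLUMNS view. The following query is a sketch assuming the DMS_SAMPLE schema used in this post:

-- List tables in the migration schema that have TDE-encrypted columns;
-- AWS DMS can't replicate these for Amazon RDS for Oracle, so they are
-- reloaded during cutover.
SQL(source)> SELECT owner, table_name, column_name, encryption_alg
FROM DBA_ENCRYPTED_COLUMNS
WHERE owner = 'DMS_SAMPLE'
ORDER BY table_name, column_name;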

Prerequisites

Make sure you meet the following prerequisites:

Your source and target AWS accounts must have connectivity using either VPC peering or AWS Transit Gateway. For more information, see Create a VPC peering connection.
The VPC security group associated with the source and target RDS for Oracle DB instance should allow inbound connections from the AWS DMS replication instance. The security group associated with the replication instance should also allow all outbound connections. For more information, see Setting up a network for a replication instance.
Automatic backup is enabled on the source RDS for Oracle DB instance. For more information about enabling automatic backups, see Enabling automated backups.
To capture ongoing changes, AWS DMS requires that you enable minimal supplemental logging on your Oracle source database. In addition, you need to enable supplemental logging on each replicated table in the database.
You must have a bastion host with the SQL*Plus client installed and connectivity to the source and target RDS for Oracle instances.

Limitations

This solution has the following limitations:

The AWS DMS Binary Reader method supports TDE only for self-managed Oracle databases.
When replicating from Amazon RDS for Oracle, TDE is supported only at the tablespace level and only with Oracle LogMiner.
AWS DMS supports CDC for RDS for Oracle database tables with primary keys. If a table doesn’t have a primary key enabled, add supplemental logging on all columns of the table to ensure that AWS DMS has enough data to update the target table.
During CDC, AWS DMS supports large object (LOB) data types only in tables that include a primary key.
If your tables use sequences, the sequences are not advanced on the target even though AWS DMS copies data from the source during ongoing replication. During cutover, you need to update the NEXTVAL of the sequences in the target database after stopping replication from the source database.
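One way to handle the sequence limitation at cutover is to generate ALTER SEQUENCE statements that restart each target sequence just past the source's current value. The following query is a sketch assuming the DMS_SAMPLE schema; RESTART START WITH requires Oracle Database 18c or later, and LAST_NUMBER may lag the true current value for cached sequences, so review and adjust the generated statements before running them on the target.

-- Run on the source after stopping replication; run the generated DDL
-- on the target. Add the cache size to the start value if sequences
-- are cached.
SQL(source)> SELECT 'ALTER SEQUENCE ' || sequence_owner || '.' || sequence_name
|| ' RESTART START WITH ' || (last_number + increment_by) || ';' AS seq_ddl
FROM DBA_SEQUENCES
WHERE sequence_owner = 'DMS_SAMPLE';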

For more information about the limitations of using an Oracle database as a source or target with AWS DMS, see Limitations on using an Oracle database as a source for AWS DMS and Limitations on Oracle as a target for AWS Database Migration Service.

Prepare the source RDS for Oracle DB instance for migration

To prepare your source DB instance for migration, complete the following steps:

Create the DMS_USER user account in the source RDS for Oracle database. For more information on the privileges for the DMS_USER user account, see User account privileges required on an AWS-managed Oracle source for AWS DMS.
The source RDS for Oracle DB instance must have sufficient storage to keep the export dump files and archived logs created during database export, as well as during dump file transfer and loading into the destination database instance. We recommend increasing the storage according to the estimated export dump files size and archive log generation. Amazon RDS auto scaling can’t completely prevent storage-full situations for large data loads. This is because further storage modifications can’t be made for either 6 hours or until storage optimization has completed on the instance, whichever is longer. For more information on storage autoscaling limitations, see Limitations.

You can also estimate the size of the Oracle Data Pump export dump file set using the DBMS_DATAPUMP procedure within a SQL*Plus session:

SQL(source)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
job_name => null
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'estimate_dump_size.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.SET_PARAMETER(
handle => v_hdnl,
name => 'ESTIMATE_ONLY',
value => 1
);
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
PL/SQL procedure successfully completed.
SQL(source)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','estimate_dump_size.log'));
TEXT
--------------------------------------------------------------------------------
FLASHBACK automatically enabled to preserve database integrity.
Starting "ADMIN"."SYS_EXPORT_SCHEMA_03":
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "DMS_SAMPLE"."SPORTING_EVENT_TICKET" 1.812 GB
. estimated "DMS_SAMPLE"."PERSON" 304 MB
. estimated "DMS_SAMPLE"."SEAT" 112 MB
. estimated "DMS_SAMPLE"."MLB_DATA" 640 KB
. estimated "DMS_SAMPLE"."PLAYER" 320 KB
. estimated "DMS_SAMPLE"."NFL_DATA" 256 KB
. estimated "DMS_SAMPLE"."NAME_DATA" 192 KB
. estimated "DMS_SAMPLE"."SPORTING_EVENT" 128 KB
. estimated "DMS_SAMPLE"."NFL_STADIUM_DATA" 64 KB
. estimated "DMS_SAMPLE"."SEAT_TYPE" 64 KB
. estimated "DMS_SAMPLE"."SPORT_DIVISION" 64 KB
. estimated "DMS_SAMPLE"."SPORT_LEAGUE" 64 KB
. estimated "DMS_SAMPLE"."SPORT_LOCATION" 64 KB
. estimated "DMS_SAMPLE"."SPORT_TEAM" 64 KB
. estimated "DMS_SAMPLE"."SPORT_TYPE" 64 KB
. estimated "DMS_SAMPLE"."TICKET_PURCHASE_HIST" 0 KB
Total estimation using BLOCKS method: 2.220 GB
Job "ADMIN"."SYS_EXPORT_SCHEMA_03" successfully completed at Thu Aug 24 21:11:32
2023 elapsed 0 00:00:05

To capture ongoing changes, AWS DMS requires that you enable minimal supplemental logging on your source DB instance:

SQL(source)> exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
PL/SQL procedure successfully completed.

SQL(source)> SELECT supplemental_log_data_min FROM V$DATABASE;
SUPPLEME
--------
YES

Enable supplemental logging on each replicated table in the source database:

If a primary key exists, add supplemental logging on the primary key columns on the database:

SQL(source)> exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'PRIMARY KEY');
PL/SQL procedure successfully completed.

If a table doesn't have a primary key, or if that key is disabled or invalid, enable supplemental logging on all columns of the table. Use the following SQL statement to generate the ALTER TABLE statements that enable supplemental logging on all columns of such tables. We recommend reviewing the generated statements before running them in the source database.

SQL(source)> SELECT 'alter table '
||at.owner
||'.'
||at.table_name
||' add supplemental log data (ALL) columns;' "SQL_STATEMENTS"
FROM ALL_TABLES at
WHERE NOT EXISTS (SELECT 1
FROM ALL_CONSTRAINTS ac
WHERE ac.owner = at.owner
AND ac.table_name = at.table_name
AND ac.constraint_type = 'P')
AND at.owner IN ('DMS_SAMPLE');
SQL_STATEMENTS
--------------------------------------------------------------------------------------
alter table DMS_SAMPLE.MLB_DATA add supplemental log data (ALL) columns;
alter table DMS_SAMPLE.NFL_DATA add supplemental log data (ALL) columns;
alter table DMS_SAMPLE.NFL_STADIUM_DATA add supplemental log data (ALL) columns;

SQL(source)> alter table DMS_SAMPLE.MLB_DATA add supplemental log data (ALL) columns;
Table altered.
SQL(source)> alter table DMS_SAMPLE.NFL_DATA add supplemental log data (ALL) columns;
Table altered.
SQL(source)> alter table DMS_SAMPLE.NFL_STADIUM_DATA add supplemental log data (ALL) columns;
Table altered.

Make sure that ARCHIVELOG mode is enabled on the source DB instance to run the AWS DMS CDC task:

SQL(source)> SELECT log_mode FROM V$DATABASE;
LOG_MODE
------------
ARCHIVELOG

The RDS for Oracle DB instance generates archived redo logs when the backup retention period of the DB instance is greater than zero. Because the archived redo logs are retained on your DB instance, ensure that the DB instance has enough allocated storage for the retained logs. We recommend running the following SQL statement to analyze archive log generation trends for storage estimation. Use the Amazon CloudWatch metric FreeStorageSpace to determine the storage available to the RDS for Oracle DB instance. For more information on instance-level metrics for Amazon RDS, see Amazon CloudWatch metrics for Amazon RDS. If the DB instance doesn't have sufficient storage, increase the storage. For more information, see Increasing DB instance storage capacity.

SQL(source)> ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
Session altered.
SQL(source)> SELECT trunc(completion_time, 'HH') Hour,
thread#,
round(SUM(blocks * block_size) / 1024 / 1024 / 1024) GB
FROM V$ARCHIVED_LOG
GROUP BY trunc(completion_time, 'HH'), thread#
ORDER BY 1;
HOUR THREAD# GB
-------------------- ---------- ----------
24-AUG-2023 18:00:00 1 1
24-AUG-2023 19:00:00 1 2
24-AUG-2023 20:00:00 1 1
24-AUG-2023 21:00:00 1 0

When the archived log retention period expires, Amazon RDS for Oracle removes the archived redo logs from your DB instance. For more information on retaining archived logs, see Retaining archived redo logs.

In this post, we estimate that the export of the source database, the import into the target database, and the start of the AWS DMS CDC task can be completed within 4 hours. Set the archivelog retention hours configuration to 4 using the following SQL statement:

SQL(source)> begin
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => '4');
end;
/
PL/SQL procedure successfully completed.

SQL(source)> commit;
Commit complete.

-- verify archive details
SQL(source)> set serveroutput on;
SQL(source)> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.
NAME:archivelog retention hours
VALUE:4
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before archive/redo log files are automatically deleted.

Prepare the target RDS for Oracle DB instance for migration

To prepare the target DB instance, complete the following steps:

Create the target DB instance with a larger instance class to get more compute and speed up the initial data load.
Keep Multi-AZ disabled on the target DB instance during the initial data load.
Create the DMS_USER user account in the target RDS for Oracle database. For more information on the privileges for the DMS_USER user account, see User account privileges required for using Oracle as a target.
The target RDS for Oracle DB instance should have sufficient storage to accommodate the export dump files and database tablespaces.
Create the required tablespaces in the target database. Run the following script in the source database to get tablespace details. Adjust the sizes of the Oracle-supplied tablespaces to match the source database and create any additional tablespaces the application requires.

SQL(source)> SET LONG 20000 LONGCHUNKSIZE 20000 PAGESIZE 0 LINESIZE 1000 FEEDBACK OFF VERIFY OFF TRIMSPOOL ON

SQL(source)> BEGIN
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
END;
/
SQL(source)> SELECT DBMS_METADATA.get_ddl ('TABLESPACE', tablespace_name) FROM dba_tablespaces;

CREATE BIGFILE TABLESPACE "SYSTEM" DATAFILE
SIZE 629145600
AUTOEXTEND ON NEXT 629145600 MAXSIZE 33554431M
LOGGING FORCE LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
NOCOMPRESS SEGMENT SPACE MANAGEMENT MANUAL;
........
<output truncated for blog post>

Create database roles in the target database. Use the following script in the source database to generate scripts. Verify the generated SQL statements and run them in the target database.

SQL(source)> SELECT 'create role '
|| role
|| ';' AS role_ddl
FROM DBA_ROLES
WHERE oracle_maintained = 'N'
AND role != 'RDS_MASTER_ROLE';
ROLE_DDL
--------------------------------------------------------------------------------------
create role APP_ROLE;
create role ADMIN_ROLE;

Create database profiles in the target database. Use the following script in the source database to generate scripts. Verify the generated SQL statements and run them in the target database.

SQL(source)> SELECT dbms_metadata.get_ddl('PROFILE', profile) AS profile_ddl
FROM (SELECT DISTINCT profile
FROM DBA_PROFILES
WHERE profile NOT IN ('DEFAULT', 'ORA_STIG_PROFILE', 'RDSADMIN'));
PROFILE_DDL
--------------------------------------------------------------------------------------
CREATE PROFILE "ADMIN_USERS"
LIMIT
COMPOSITE_LIMIT DEFAULT
SESSIONS_PER_USER UNLIMITED
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL 3000
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL DEFAULT
IDLE_TIME DEFAULT
CONNECT_TIME 15
PRIVATE_SGA DEFAULT
FAILED_LOGIN_ATTEMPTS DEFAULT
PASSWORD_LIFE_TIME DEFAULT
PASSWORD_REUSE_TIME DEFAULT
PASSWORD_REUSE_MAX DEFAULT
PASSWORD_VERIFY_FUNCTION DEFAULT
PASSWORD_LOCK_TIME DEFAULT
PASSWORD_GRACE_TIME DEFAULT
INACTIVE_ACCOUNT_TIME DEFAULT
PASSWORD_ROLLOVER_TIME DEFAULT ;

Create public synonyms in the target database. Use the following script in the source database to generate scripts. Verify the generated SQL statements and run them in the target database.

SQL(source)> select 'CREATE '|| decode(owner,'PUBLIC','PUBLIC ',null) ||
'SYNONYM ' || decode(owner,'PUBLIC',null, owner || '.') ||
lower(synonym_name) || ' FOR ' || lower(table_owner) ||
'.' || lower(table_name) ||
decode(db_link,null,null,'@'||db_link) || ';' AS synonym_ddl
from sys.dba_synonyms
where owner='PUBLIC'
and table_owner in ('DMS_SAMPLE')
order by owner;

SYNONYM_DDL
--------------------------------------------------------------------------------------
CREATE PUBLIC SYNONYM person FOR dms_sample.person;
CREATE PUBLIC SYNONYM player FOR dms_sample.player;
CREATE PUBLIC SYNONYM seat FOR dms_sample.seat;
CREATE PUBLIC SYNONYM seat_type FOR dms_sample.seat_type;
CREATE PUBLIC SYNONYM sporting_event FOR dms_sample.sporting_event;
CREATE PUBLIC SYNONYM sporting_event_ticket FOR dms_sample.sporting_event_ticket;
CREATE PUBLIC SYNONYM sport_division FOR dms_sample.sport_division;
CREATE PUBLIC SYNONYM sport_league FOR dms_sample.sport_league;
CREATE PUBLIC SYNONYM sport_location FOR dms_sample.sport_location;
CREATE PUBLIC SYNONYM sport_team FOR dms_sample.sport_team;
CREATE PUBLIC SYNONYM sport_type FOR dms_sample.sport_type;
CREATE PUBLIC SYNONYM ticket_management FOR dms_sample.ticketmanagement;
CREATE PUBLIC SYNONYM ticket_purchase_hist FOR dms_sample.ticket_purchase_hist;

Verify that the source and target DB instances have the same parameter configuration. Review the RDS DB parameter groups and run the following query in both the source and target databases:

SQL(source)> SELECT property_name, property_value FROM DATABASE_PROPERTIES;
PROPERTY_NAME PROPERTY_VALUE
------------------------------ -------------------------------------------------------
MAX_PDB_SNAPSHOTS 8
DICTIONARY_ENDIAN_TYPE LITTLE
GLOBAL_DB_NAME ORCL
MAX_STRING_SIZE STANDARD
NO_USERID_VERIFIER_SALT CA03E64837648BBA87D82E71B635A17B
NLS_RDBMS_VERSION 19.0.0.0.0
NLS_NCHAR_CONV_EXCP FALSE
NLS_LENGTH_SEMANTICS BYTE
…… <output truncated>

Temporarily disable automated backups of the target RDS for Oracle DB instance to speed up the Oracle Data Pump Import jobs. You should configure the backup retention period to 0 to disable automated backups. For more information on disabling automated backups, see Disabling automated backups. You should also set archivelog retention to 0 on the target DB instance to speed up the Oracle Data Pump import process:

-- set archive log retention
SQL(target)> begin
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => '0');
end;
/
PL/SQL procedure successfully completed.

SQL(target)> commit;
Commit complete.

-- verify archive details
SQL(target)> set serveroutput on;

SQL(target)> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:archivelog retention hours
VALUE:0
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before archive/redo log files are automatically deleted.
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.
PL/SQL procedure successfully completed.

Modify the job_queue_processes parameter to 0 in the RDS DB parameter group of the target RDS for Oracle DB instance to prevent any database jobs from running during the migration process. For more details on DB parameter group modification, see Modifying parameters in a DB parameter group.
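After the parameter group change propagates to the instance, you can confirm the setting from a SQL*Plus session. This is a quick sanity check using the standard V$PARAMETER view:

-- Confirm that scheduled database jobs are prevented from running
-- during the migration (expect VALUE = 0).
SQL(target)> SELECT name, value FROM V$PARAMETER WHERE name = 'job_queue_processes';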

Capture the SCN

Capture the SCN from the source database to start the Oracle Data Pump export. We recommend stopping the application to capture the SCN and then starting the application again. Complete the following steps:

Shut down your application gracefully.
Wait until all the database sessions from application users are ended.
Run the following command to help identify users or application sessions still connected to the database:

SQL(source)> SELECT username,
count(*)
FROM V$SESSION
WHERE TYPE != 'BACKGROUND'
AND username IS NOT NULL
GROUP BY username;

USERNAME COUNT(*)
-------------------- ----------
ADMIN 1
RDSADMIN 1

2 rows selected.

Perform a few redo log switches in the database and force a checkpoint:

SQL(source)> EXEC rdsadmin.rdsadmin_util.switch_logfile;
PL/SQL procedure successfully completed.

SQL(source)> EXEC rdsadmin.rdsadmin_util.switch_logfile;
PL/SQL procedure successfully completed.

SQL(source)> EXEC rdsadmin.rdsadmin_util.switch_logfile;
PL/SQL procedure successfully completed.

SQL(source)> EXEC rdsadmin.rdsadmin_util.checkpoint;
PL/SQL procedure successfully completed.

To perform a consistent export, make sure that you use the flashback_scn parameter during Oracle Data Pump export. Run the following commands on the source database to get the current SCN. At this stage, the source database should not have any open transactions.

-- get current System Change Number (SCN)
SQL(source)> select current_scn from v$database;
CURRENT_SCN
-----------
3357030

We use this SCN in a later step to start the AWS DMS CDC task.

Start the application now.

Export the source database using Oracle Data Pump

Start the export of the source database using the Oracle Data Pump PL/SQL API DBMS_DATAPUMP. For more information on the Oracle Data Pump API, see Using the Oracle Data Pump API.

Get an object count from the source database before starting the export:

SQL(source)> SELECT owner,
object_type,
count(*)
FROM DBA_OBJECTS
WHERE owner IN ( 'DMS_SAMPLE' )
GROUP BY owner,
object_type
ORDER BY 1,
2;
OWNER OBJECT_TYPE COUNT(*)
--------------- ----------------------- ----------
DMS_SAMPLE INDEX 22
DMS_SAMPLE PACKAGE 1
DMS_SAMPLE PACKAGE BODY 1
DMS_SAMPLE PROCEDURE 6
DMS_SAMPLE SEQUENCE 5
DMS_SAMPLE TABLE 16
DMS_SAMPLE TRIGGER 3
DMS_SAMPLE VIEW 2

8 rows selected.

Export the DMS_SAMPLE schema using DBMS_DATAPUMP:

SQL(source)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
job_name => 'export_dms_sample_job'
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_dms_sample_%U.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_dms_sample.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.SET_PARAMETER(v_hdnl,'FLASHBACK_SCN',3357030);
DBMS_DATAPUMP.SET_PARALLEL(v_hdnl,8);
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
PL/SQL procedure successfully completed.
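Because DBMS_DATAPUMP.START_JOB returns immediately and the job continues in the background, you can monitor the job state while it runs. The following query is a sketch using the standard DBA_DATAPUMP_JOBS view and the job name used in this post:

-- Check the state of the running export job (EXECUTING, COMPLETING,
-- or NOT RUNNING once finished).
SQL(source)> SELECT owner_name, job_name, operation, state, degree
FROM DBA_DATAPUMP_JOBS
WHERE job_name = 'export_dms_sample_job';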

You can view the contents of the export log by using the rdsadmin.rds_file_util.read_text_file procedure:

SQL(source)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','export_dms_sample.log'));

TEXT
--------------------------------------------------------------------------------
FLASHBACK automatically enabled to preserve database integrity.
Starting "ADMIN"."export_dms_sample_job":
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
.....
<output truncated>

When the export is complete, list the export dump files in DATA_PUMP_DIR along with their sizes. This helps in a later step to confirm that all the export dump files were transferred correctly to the target RDS for Oracle DB instance.

SQL(source)> SELECT type, filename, round(sum(filesize)/1024/1024/1024,2) "File Size in GiB" FROM
TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
where filename like 'export_dms_sample%'
group by type, filename;
TYPE FILENAME File Size in GiB
---------- ------------------------------ ----------------
file export_dms_sample_09.dmp .31
file export_dms_sample_03.dmp 0
file export_dms_sample.log 0
file export_dms_sample_07.dmp .33
file export_dms_sample_05.dmp .25
file export_dms_sample_01.dmp 0
file export_dms_sample_04.dmp 0
file export_dms_sample_08.dmp .09
file export_dms_sample_02.dmp 0
file export_dms_sample_10.dmp .3
file export_dms_sample_06.dmp .31
file export_dms_sample_11.dmp .31

12 rows selected.

Transfer the Oracle Data Pump export dump file set to the target DB instance

Complete the steps in this section only after verifying that the export is complete on the source DB instance.

Create a database link in the source database pointing to the target database. The following command creates a database link to_target_rds that connects to the target database:

SQL(source)> CREATE DATABASE LINK to_target_rds
CONNECT TO ADMIN IDENTIFIED BY TEMP_1234
USING '(DESCRIPTION =(ADDRESS_LIST =(ADDRESS =(PROTOCOL = TCPS)
(HOST = rds-oracle-target-01.xxxxx.us-east-1.rds.amazonaws.com)(PORT = 1526)))
(CONNECT_DATA =(SID = ORCL))
(SECURITY =(SSL_SERVER_CERT_DN ="C=US,ST=Washington,L=Seattle,
O=Amazon.com,OU=RDS,CN=rds-oracle-target-01.xxxxx.us-east-1.rds.amazonaws.com")))';

Database link created.
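Before transferring the dump files, you can verify that the database link works end to end. This is a quick sanity check; a successful result confirms network connectivity and the credentials used by the link:

-- A returned date confirms the link can reach the target database.
SQL(source)> SELECT sysdate FROM dual@to_target_rds;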

Transfer the dump files from the source to target database:

SQL(source)> begin
for fname in (select filename from (select * from
table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime) where
filename like 'export_dms_sample%dmp')
loop
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_DIR',
source_file_name => fname.filename,
destination_directory_object => 'DATA_PUMP_DIR',
destination_file_name => fname.filename,
destination_database => 'to_target_rds'
);
end loop;
end;
/
PL/SQL procedure successfully completed.

List the export dump files in DATA_PUMP_DIR on the target RDS for Oracle DB instance along with their sizes, and verify that the sizes match those obtained from the source database in the previous section.

SQL(target)> SELECT TYPE,
filename,
round(SUM(filesize) / 1024 / 1024 / 1024, 2) "File Size in GiB"
FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
WHERE filename LIKE 'export_dms_sample%'
GROUP BY TYPE, filename;

TYPE FILENAME File Size in GiB
---------- ------------------------------ ----------------
file export_dms_sample_09.dmp .31
file export_dms_sample_03.dmp 0
file export_dms_sample.log 0
file export_dms_sample_07.dmp .33
file export_dms_sample_05.dmp .25
file export_dms_sample_01.dmp 0
file export_dms_sample_04.dmp 0
file export_dms_sample_08.dmp .09
file export_dms_sample_02.dmp 0
file export_dms_sample_10.dmp .3
file export_dms_sample_06.dmp .31
file export_dms_sample_11.dmp .31

12 rows selected.

Load data in the target database using Oracle Data Pump

Perform a data load into the target database using the Oracle Data Pump API DBMS_DATAPUMP. Use the following code snippets to load data in the target database.

Import the data by calling the DBMS_DATAPUMP procedures:

SQL(target)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => 'import_dms_sample_job'
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_dms_sample_%U.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'import_dms_sample.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.SET_PARALLEL(v_hdnl,8);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/

PL/SQL procedure successfully completed.
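While the import runs, you can track the job state and an approximation of its progress. The following queries are a sketch using the standard DBA_DATAPUMP_JOBS and V$SESSION_LONGOPS views; the OPNAME filter assumes the job name used in this post, and TOTALWORK can be 0 for some operations, so the division is guarded with NULLIF:

-- Overall job state.
SQL(target)> SELECT job_name, state FROM DBA_DATAPUMP_JOBS
WHERE job_name = 'import_dms_sample_job';

-- Approximate progress of in-flight long-running operations.
SQL(target)> SELECT sid, opname, sofar, totalwork,
ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done
FROM V$SESSION_LONGOPS
WHERE opname LIKE '%import_dms_sample_job%'
AND sofar <> totalwork;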

Data Pump jobs are started asynchronously. You can view the contents of the import log by using the rdsadmin.rds_file_util.read_text_file procedure:

SQL(target)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_dms_sample.log'));

TEXT
-------------------------------------------------------------------------------------------------------------------------------
Master table "ADMIN"."import_dms_sample_job" successfully loaded/unloaded
Starting "ADMIN"."import_dms_sample_job":
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DMS_SAMPLE"."SEAT" 89.96 MB 3565082 rows
. . imported "DMS_SAMPLE"."MLB_DATA" 504.3 KB 2230 rows
. . imported "DMS_SAMPLE"."PLAYER" 216.3 KB 5157 rows
. . imported "DMS_SAMPLE"."NFL_DATA" 175.3 KB 2928 rows
. . imported "DMS_SAMPLE"."NAME_DATA" 105.3 KB 5373 rows
. . imported "DMS_SAMPLE"."SPORTING_EVENT" 50.03 KB 1142 rows
<truncated ...>

Validate the target database

Complete the following steps to validate the target database:

Verify the objects count in the target database and compare with the objects count captured earlier from the source database:

SQL(target)> SELECT owner,
object_type,
count(*)
FROM DBA_OBJECTS
WHERE owner IN ( 'DMS_SAMPLE' )
GROUP BY owner,
object_type
ORDER BY 1,2;
OWNER OBJECT_TYPE COUNT(*)
--------------- ----------------------- ----------
DMS_SAMPLE INDEX 22
DMS_SAMPLE PACKAGE 1
DMS_SAMPLE PACKAGE BODY 1
DMS_SAMPLE PROCEDURE 6
DMS_SAMPLE SEQUENCE 5
DMS_SAMPLE TABLE 16
DMS_SAMPLE TRIGGER 3
DMS_SAMPLE VIEW 2

8 rows selected.

List invalid objects of the DMS_SAMPLE schema:

SQL(target)> select owner,
object_type,
status,
count(*)
FROM dba_objects
WHERE status = 'INVALID'
and OWNER in ('DMS_SAMPLE')
group by owner, object_type, status
order by 1,2;
no rows selected

Recompile the DMS_SAMPLE schema using the following SQL statement:

SQL(target)> exec SYS.UTL_RECOMP.RECOMP_PARALLEL(4,'DMS_SAMPLE');
PL/SQL procedure successfully completed.

Generate SQL statements to recompile the private synonyms, public synonyms, packages, and package body. We recommend reviewing the generated SQL statements before running them in the target database.

-- recompile synonyms
SQL(target)> select 'alter synonym '||owner||'.'||object_name||' compile; '
from dba_objects where status='INVALID' and object_type='SYNONYM'
and owner in ('DMS_SAMPLE') order by 1;
no rows selected

-- recompile public synonyms
SQL(target)> select 'alter public synonym '||object_name||' compile; '
from dba_objects where status='INVALID' and object_type='SYNONYM'
and owner='PUBLIC' order by 1;
no rows selected

-- recompile package
SQL(target)> select 'alter package '||owner||'.'||object_name||' compile;'
from dba_objects where status='INVALID' and object_type='PACKAGE'
and owner in ('DMS_SAMPLE') order by 1;
no rows selected

-- recompile package body
SQL(target)> select 'alter package '||owner||'.'||object_name||' compile body;'
from dba_objects
where
status='INVALID'
and object_type='PACKAGE BODY'
and owner in ('DMS_SAMPLE') order by 1;
no rows selected

You can verify the current errors on all the objects of the schema using the following SQL statement:

-- verify errors
SQL(target)> select owner, name, type, text from dba_errors where owner in ('DMS_SAMPLE');
no rows selected
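Object counts alone don't prove that the data arrived intact. As an additional check, you can generate per-table row-count queries and compare the results between the source and target databases. This is a sketch assuming the DMS_SAMPLE schema; for very large tables, consider sampling or AWS DMS data validation instead of full counts:

-- Generate one COUNT(*) statement per table; run the generated
-- statements on both databases and compare the results.
SQL(target)> SELECT 'SELECT ''' || table_name || ''' AS table_name, COUNT(*) FROM DMS_SAMPLE.' || table_name || ';' AS count_sql
FROM DBA_TABLES
WHERE owner = 'DMS_SAMPLE'
ORDER BY table_name;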

Enable Amazon RDS backup retention, archive logging, and Multi-AZ on the target DB instance

Enable Amazon RDS automated backups and archive logging on the target DB instance after the data load is complete.

Enable automated backups of your target RDS for Oracle DB instance by setting the backup retention period to a value greater than 0. For more information, see Enabling automated backups.
Enable archive logging on the target DB instance:

SQL(target)> begin
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => '4');
end;
/

PL/SQL procedure successfully completed.

SQL(target)> set serveroutput on

SQL(target)> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:archivelog retention hours
VALUE:4
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before archive/redo log files are automatically deleted.
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.

PL/SQL procedure successfully completed.

We recommend defining the archivelog retention hours parameter based on your application workload.

You can convert a Single-AZ RDS for Oracle DB instance to a Multi-AZ instance. For more information on best practices for converting to Multi-AZ instances, see Best practices for converting a Single-AZ Amazon RDS instance to a Multi-AZ instance.

Configure ongoing replication using AWS DMS

Prepare the source and target RDS for Oracle databases for ongoing replication using AWS DMS.

Configure VPC security groups for the AWS DMS replication instance in the target AWS account

Complete the following steps to configure your VPC security groups:

Create a VPC security group called aws-dms-sg without any inbound rules in the target AWS account.
Update the inbound rules of the security group that is associated with the target RDS for Oracle DB instance:

For Type, choose Oracle-RDS.
For Port, use the DB port.
For Source, use the security group ID of the aws-dms-sg security group.
For Description, enter Rule to connect AWS DMS Replication Instance and target DB Instance.

Update the inbound rules of the security group that is associated with the source RDS DB instance in the source AWS account:

For Type, choose Oracle-RDS.
For Port, use the DB port.
For Source, use the security group ID of the aws-dms-sg security group that you created in the target AWS account with the target AWS account number; for example, 2634xxxxxxxx/aws-dms-sg.
For Description, enter Rule to connect AWS DMS Replication Instance and source DB Instance.

Disable all triggers and referential integrity constraints

Because AWS DMS doesn't support TDE on the column level for Amazon RDS for Oracle, you must disable the triggers and referential integrity constraints of the schema that you intend to replicate in the target database to avoid data inconsistencies during the CDC phase. The following sample scripts generate the SQL statements that disable and enable the triggers and referential integrity constraints of the DMS_SAMPLE user. Test them and adapt them to your requirements before running them.

Run the following statement in the target database to generate the SQL statements to enable triggers. Don’t run the generated SQL statements now. Save this output to enable triggers again during cutover.

SQL(target)> select 'alter trigger '||owner||'.'||trigger_name||' enable;' AS enable_trigger_ddl
from dba_triggers
where owner in ('DMS_SAMPLE')
and status='ENABLED';
ENABLE_TRIGGER_DDL
————————————————————————————–
alter trigger DMS_SAMPLE.SPORTING_EVENT_ID_TRG enable;
alter trigger DMS_SAMPLE.PLAYER_ID_TRG enable;
alter trigger DMS_SAMPLE.SPORT_TEAM_ID_TRG enable;

Run the following statement in the target database to generate the SQL statements to enable referential integrity constraints. Don’t run the generated SQL statements now. Save this output to use later during cutover.

SQL(target)> select 'alter table '||owner||'.'||table_name||' enable constraint '||
constraint_name||';' AS enable_ref_constraints from dba_constraints
where constraint_type='R'
and owner in ('DMS_SAMPLE')
and status='ENABLED';

ENABLE_REF_CONSTRAINTS
————————————————————————————–
alter table DMS_SAMPLE.SPORT_TEAM enable constraint ST_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORT_TEAM enable constraint HOME_FIELD_FK;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET enable constraint SET_PERSON_ID;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET enable constraint SET_SPORTING_EVENT_FK;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET enable constraint SET_SEAT_FK;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST enable constraint TPH_TICKETHOLDER_ID;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST enable constraint TPH_SPORT_EVENT_TIC_ID;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST enable constraint TPH_TRANSFER_FROM_ID;
alter table DMS_SAMPLE.SPORT_LEAGUE enable constraint SL_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORTING_EVENT enable constraint SE_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORTING_EVENT enable constraint SE_HOME_TEAM_ID_FK;
alter table DMS_SAMPLE.SPORTING_EVENT enable constraint SE_LOCATION_ID_FK;
alter table DMS_SAMPLE.SPORTING_EVENT enable constraint SE_AWAY_TEAM_ID_FK;
alter table DMS_SAMPLE.SEAT enable constraint S_SPORT_LOCATION_FK;
alter table DMS_SAMPLE.SEAT enable constraint SEAT_TYPE_FK;
alter table DMS_SAMPLE.PLAYER enable constraint SPORT_TEAM_FK;
alter table DMS_SAMPLE.SPORT_DIVISION enable constraint SD_SPORT_LEAGUE_FK;
alter table DMS_SAMPLE.SPORT_DIVISION enable constraint SD_SPORT_TYPE_FK;

18 rows selected.

Run the following statement in the target database to generate the SQL statements to disable triggers. Run the generated SQL statements in the target database.

SQL(target)> select 'alter trigger '||owner||'.'||trigger_name||' disable;' AS disable_trigger_ddl
from dba_triggers
where owner in ('DMS_SAMPLE')
and status='ENABLED';

DISABLE_TRIGGER_DDL
————————————————————————————–
alter trigger DMS_SAMPLE.SPORTING_EVENT_ID_TRG disable;
alter trigger DMS_SAMPLE.PLAYER_ID_TRG disable;
alter trigger DMS_SAMPLE.SPORT_TEAM_ID_TRG disable;

Run the following statement in the target database to generate the SQL statements to disable referential integrity constraints. Run the generated SQL statements in the target database.

SQL(target)> select 'alter table '||owner||'.'||table_name||' disable constraint '||
constraint_name||';' AS disable_ref_constraints from dba_constraints
where constraint_type='R'
and owner in ('DMS_SAMPLE')
and status='ENABLED';

DISABLE_REF_CONSTRAINTS
————————————————————————————–
alter table DMS_SAMPLE.SPORT_TEAM disable constraint ST_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORT_TEAM disable constraint HOME_FIELD_FK;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET disable constraint SET_PERSON_ID;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET disable constraint SET_SPORTING_EVENT_FK;
alter table DMS_SAMPLE.SPORTING_EVENT_TICKET disable constraint SET_SEAT_FK;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST disable constraint TPH_TICKETHOLDER_ID;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST disable constraint TPH_SPORT_EVENT_TIC_ID;
alter table DMS_SAMPLE.TICKET_PURCHASE_HIST disable constraint TPH_TRANSFER_FROM_ID;
alter table DMS_SAMPLE.SPORT_LEAGUE disable constraint SL_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORTING_EVENT disable constraint SE_SPORT_TYPE_FK;
alter table DMS_SAMPLE.SPORTING_EVENT disable constraint SE_HOME_TEAM_ID_FK;
alter table DMS_SAMPLE.SPORTING_EVENT disable constraint SE_LOCATION_ID_FK;
alter table DMS_SAMPLE.SPORTING_EVENT disable constraint SE_AWAY_TEAM_ID_FK;
alter table DMS_SAMPLE.SEAT disable constraint S_SPORT_LOCATION_FK;
alter table DMS_SAMPLE.SEAT disable constraint SEAT_TYPE_FK;
alter table DMS_SAMPLE.PLAYER disable constraint SPORT_TEAM_FK;
alter table DMS_SAMPLE.SPORT_DIVISION disable constraint SD_SPORT_LEAGUE_FK;
alter table DMS_SAMPLE.SPORT_DIVISION disable constraint SD_SPORT_TYPE_FK;

18 rows selected.

Create an AWS DMS replication instance

Create an AWS DMS replication instance called dms-repl-instance-oracle-01 in the target AWS account using the VPC security group aws-dms-sg created earlier. For more information on replication instance sizing, see Choosing the right AWS DMS replication instance for your migration.

Create AWS DMS source and target endpoints

Complete the following steps:

Create an AWS DMS source endpoint called source-ep in the target AWS account for the source database.
Create an AWS DMS target endpoint called target-ep for the target Oracle database.

Use the source and target endpoints to test connectivity with the source and target database.

Create an AWS DMS migration task

Create an AWS DMS database migration task in the target AWS account to specify the schema for ongoing replication. The task details depend on your migration project. For more information, see Working with AWS DMS tasks. Note the following key steps while creating the AWS DMS database migration task in this post:

For Task identifier, enter migration-task-cdc-01.
For Replication instance, choose dms-repl-instance-oracle-01.
For Source database endpoint, choose source-ep.
For Target database endpoint, choose target-ep.
For Migration type, choose Replicate data changes only (CDC).
For CDC start mode for source transactions, select Enable custom CDC start mode.
For Custom CDC start point for source transactions, choose Specify a log sequence number and enter the system change number (SCN) captured from the source database. In this post, we specify 3357030. We described how to capture the SCN earlier in this post.
For Custom CDC stop mode for source transactions, choose Disable custom CDC stop mode.
For Target table preparation mode, leave as default.
For LOB column settings, choose Full LOB mode and for LOB chunk size (kb), leave as default. For more information on LOB modes, see Setting LOB support for source databases in an AWS DMS task.
For Validation, choose Turn on.
For Task logs, choose Turn on CloudWatch logs.

AWS DMS task logging uses CloudWatch to log information during the migration process. With task logging settings, you can change which component activities are logged and the severity level of information that is written to the log. We use the default settings in this post.

In the Advanced task settings section, under Control table settings, specify DMS_USER for Create control table in target using schema.

The AWS DMS control tables such as awsdms_apply_exceptions and awsdms_validation_failures_v1 will reside in the schema DMS_USER of the target database. If you don’t enter any information for this option, then AWS DMS uses the target schema for control tables.

For Table mappings, specify the source database schema under Selection rules. In this post, we migrate all tables from the DMS_SAMPLE schema.

When you use Oracle as a target, AWS DMS migrates the data to the schema owned by the target endpoint’s user.

For example, suppose that you’re migrating a schema named DMS_SAMPLE to an Oracle target endpoint, and the target endpoint user name is DMS_USER. AWS DMS connects to the Oracle target as DMS_USER and populates the DMS_USER schema with database objects from the source DMS_SAMPLE.

To override this behavior, provide a transformation rule. For example, to migrate the source DMS_SAMPLE schema objects to the DMS_SAMPLE schema at the target endpoint, use the following transformation.
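The schema-rename transformation can be sketched as follows. This is a minimal, illustrative example (rule IDs and names are arbitrary); it builds the AWS DMS table-mapping JSON in Python purely for readability, and you would supply the printed JSON as the task's table mappings.

```python
import json

# Sketch of table mappings that replicate every table in DMS_SAMPLE and keep
# the schema named DMS_SAMPLE on the target instead of the endpoint user's
# schema. Rule IDs and rule names are illustrative.
table_mappings = {
    "rules": [
        {   # selection rule: include all tables in the DMS_SAMPLE schema
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dms-sample",
            "object-locator": {"schema-name": "DMS_SAMPLE", "table-name": "%"},
            "rule-action": "include",
        },
        {   # transformation rule: map the source schema to DMS_SAMPLE on the target
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "keep-schema-name",
            "rule-target": "schema",
            "object-locator": {"schema-name": "DMS_SAMPLE"},
            "rule-action": "rename",
            "value": "DMS_SAMPLE",
        },
    ]
}

print(json.dumps(table_mappings, indent=2))
```

Without the transformation rule, AWS DMS would create the objects in the DMS_USER schema, as described above.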

A premigration assessment warns you of potential migration issues before you start your migration task, and generally has minimal impact on your databases. We run the premigration assessment before starting the task. For more information on premigration assessments for a task, see Enabling and working with premigration assessments for a task.

For Migration task startup configuration, choose Manually Later.
Choose Create task.

AWS DMS creates a premigration assessment report. We recommend reviewing the findings and taking corrective action as needed. You can also run another premigration assessment after fixing the earlier issues. Expect some findings for the tables with TDE on the column level.

Run the task.

Monitor replication

You can monitor your AWS DMS task using CloudWatch metrics. AWS DMS provides statistics for replication tasks, including incoming and committed changes, and latency between the replication host and both the source and target databases. For a complete list of the available metrics, see Replication task metrics. You can determine the total latency, or replica lag, for a task by combining the CDCLatencySource and CDCLatencyTarget metric values.
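The latency calculation above can be sketched as follows. This is a hypothetical helper, not part of AWS DMS; fetching the CDCLatencySource and CDCLatencyTarget samples from CloudWatch (for example, with boto3) is left out, and the input lists are made-up sample values.

```python
def total_replica_lag(cdc_latency_source, cdc_latency_target):
    """Pointwise sum of CDCLatencySource and CDCLatencyTarget samples (seconds).

    Both lists must cover the same timestamps, in the same order.
    """
    if len(cdc_latency_source) != len(cdc_latency_target):
        raise ValueError("metric series must cover the same timestamps")
    return [s + t for s, t in zip(cdc_latency_source, cdc_latency_target)]

# Example: five one-minute samples during replication (illustrative values)
source_lat = [2.0, 3.5, 1.0, 0.5, 0.0]
target_lat = [1.0, 2.5, 2.0, 0.5, 0.0]
lag = total_replica_lag(source_lat, target_lat)
print(max(lag))  # worst total replica lag observed in the window
```

At cutover time you want this combined lag to reach 0, as noted in the cutover steps later in this post.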

Validate the data

You use AWS DMS data validation to make sure all the preexisting data in the tables is migrated accurately. It also compares the incremental changes for a CDC-enabled task as they occur. During data validation, AWS DMS compares each row in the source with its corresponding row at the target, verifies the rows contain the same data, and reports any mismatches. See AWS DMS data validation for more details.

Perform cutover

You can plan the cutover of the application to the target RDS for Oracle database when the replication lag is minimal. We recommend planning your cutover during a low-traffic window. The following are some of the key cutover steps:

Stop your application.
Make sure that CDC latency is 0. You can use the CDCLatencySource and CDCLatencyTarget metrics to monitor the replication task.
Make sure that there are no data validation errors in AWS DMS.
Stop the AWS DMS database migration task. For more information, see Stop Replication Task.

AWS DMS doesn't support TDE on the column level for Amazon RDS for Oracle, so you must reload those unsupported tables during cutover. In this post, the PLAYER and PERSON tables have TDE on the column level.

Export the tables with encrypted columns from the source database using DBMS_DATAPUMP:

SQL(source)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'TABLE',
job_name => 'export_enc_tables'
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_enc_tables_%U.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_enc_tables.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'NAME_EXPR','IN (''PERSON'',''PLAYER'')');
DBMS_DATAPUMP.SET_PARAMETER(v_hdnl,'FLASHBACK_TIME','SYSTIMESTAMP');
DBMS_DATAPUMP.SET_PARALLEL(v_hdnl,8);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
PL/SQL procedure successfully completed.

You can view the contents of the export log by using the rdsadmin.rds_file_util.read_text_file procedure:

SQL(source)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','export_enc_tables.log'));

TEXT
————————————————————————————–
Starting "ADMIN"."export_enc_tables":
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE
….
<output truncated>

Transfer the export dump file set to the target database as demonstrated in earlier steps. Import the encrypted tables into the target database using DBMS_DATAPUMP with the TABLE_EXISTS_ACTION parameter set to TRUNCATE:

SQL(target)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'TABLE',
job_name => 'import_enc_tables'
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export_enc_tables_%U.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'import_enc_tables.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'NAME_EXPR','IN (''PERSON'',''PLAYER'')');
DBMS_DATAPUMP.SET_PARAMETER(v_hdnl,'TABLE_EXISTS_ACTION','TRUNCATE');
DBMS_DATAPUMP.SET_PARALLEL(v_hdnl,8);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
PL/SQL procedure successfully completed.
SQL(target)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_enc_tables.log'));

TEXT
————————————————————————————–
Master table "ADMIN"."import_enc_tables" successfully loaded/unloaded
Starting "ADMIN"."import_enc_tables":
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "DMS_SAMPLE"."PERSON" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "DMS_SAMPLE"."PLAYER" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "DMS_SAMPLE"."PLAYER"    216.3 KB    5157 rows
. . imported "DMS_SAMPLE"."PERSON"    255.8 MB    7055276 rows
…..
<output truncated>

Forward all sequences in the target database to match the source values. AWS DMS doesn't replicate incremental sequence numbers during CDC from the source database, so you need to capture the latest sequence values from the source and apply them on the target database to avoid sequence value inconsistencies. Run the following query in the source database to generate the DDL that resets the sequences, then run the generated SQL statements in the target database.

SQL(source)> SELECT 'ALTER SEQUENCE '||sequence_owner||'.'||sequence_name||' RESTART START WITH '
||(last_number+increment_by)||';' as sequence_forward
FROM dba_sequences
WHERE sequence_owner IN ('DMS_SAMPLE');

SEQUENCE_FORWARD
————————————————————————————–
ALTER SEQUENCE DMS_SAMPLE.PLAYER_SEQ RESTART START WITH 51611;
ALTER SEQUENCE DMS_SAMPLE.SPORTING_EVENT_SEQ RESTART START WITH 11611;
ALTER SEQUENCE DMS_SAMPLE.SPORTING_EVENT_TICKET_SEQ RESTART START WITH 499104011;
ALTER SEQUENCE DMS_SAMPLE.SPORT_LOCATION_SEQ RESTART START WITH 82;
ALTER SEQUENCE DMS_SAMPLE.SPORT_TEAM_SEQ RESTART START WITH 811;
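The restart value the query computes is simply last_number + increment_by for each sequence. A minimal sketch of the same DDL generation, assuming rows fetched from DBA_SEQUENCES (the sample rows below are derived from this post's output, not live data):

```python
def sequence_forward_ddl(owner, name, last_number, increment_by):
    """Build the ALTER SEQUENCE statement that forwards a target sequence
    past the last value used on the source."""
    restart = last_number + increment_by
    return f"ALTER SEQUENCE {owner}.{name} RESTART START WITH {restart};"

# Sample (owner, sequence_name, last_number, increment_by) rows
rows = [
    ("DMS_SAMPLE", "PLAYER_SEQ", 51601, 10),
    ("DMS_SAMPLE", "SPORT_TEAM_SEQ", 801, 10),
]
for row in rows:
    print(sequence_forward_ddl(*row))
```

Note that LAST_NUMBER in DBA_SEQUENCES reflects cached values, so forwarding by one increment past it is a safe upper bound rather than the exact next value.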

You disabled triggers and referential integrity constraints in the target database before starting the AWS DMS CDC-only task. Re-enable them now, using the SQL statements you generated and saved earlier. We recommend verifying those statements before running them in the target database.
Modify job_queue_processes in the parameter group associated with the target DB instance to match the value configured in the source DB instance.
Update the application configuration or DNS CNAME records with the target RDS for Oracle DB instance endpoint.
After the application has successfully connected to the target DB instances, you may decide to temporarily stop or delete the Amazon RDS resources in the source AWS account. To stop the RDS DB instance temporarily, refer to Stopping an Amazon RDS DB instance temporarily and to delete the RDS DB instance, refer to Deleting a DB instance.
Start your application.

Clean up

As part of this migration, you deployed AWS DMS-related resources in your AWS account to replicate data. These resources will incur costs as long as they are in use. Be sure to remove any resources you no longer need.

Conclusion

In this post, we discussed the steps involved in migrating a TDE-enabled RDS for Oracle DB instance from one AWS account to another. We also used AWS DMS for ongoing replication to reduce the downtime. With the complexity involved in database migrations, we highly recommend testing the migration steps in non-production environments prior to making changes in production.

We welcome your feedback. If you have any questions or suggestions, leave them in the comments section.

About the Authors

Javeed Mohammed is a Database Specialist Solutions Architect with Amazon Web Services. He works with the Amazon RDS team, focusing on commercial database engines like Oracle. He enjoys working with customers to help design, deploy, and optimize relational database workloads on AWS Cloud.

Alok Srivastava is a Senior Database Consultant at AWS. He works with AWS customers and partners in their journey to the AWS Cloud with a focus on database migration and modernization programs.
