
Perform a two-step database migration from an on-premises Oracle database to Amazon RDS for Oracle using RMAN

Database migration from on premises to the cloud requires planning and prework in order to be successful. Key factors such as the size of the on-premises database, network bandwidth, available downtime, and backup method should be considered while planning your database migration project. In this post, we discuss how to perform a homogeneous migration from an on-premises Oracle database to Amazon Relational Database Service (Amazon RDS) for Oracle.
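To illustrate how database size and network bandwidth drive the planning, the following shell sketch produces a back-of-the-envelope transfer-time estimate. The size and bandwidth figures are illustrative assumptions, not measurements; substitute your own values.

```shell
# Rough transfer-time estimate for moving backup files to AWS.
# SIZE_TB and BANDWIDTH_MBPS are hypothetical planning inputs.
SIZE_TB=5             # size of the on-premises backup set
BANDWIDTH_MBPS=1000   # sustained throughput toward AWS

# Convert TB to megabits (decimal units), divide by throughput, convert to hours.
HOURS=$(awk -v tb="$SIZE_TB" -v mbps="$BANDWIDTH_MBPS" \
  'BEGIN { printf "%.1f", (tb * 8 * 1000 * 1000) / mbps / 3600 }')
echo "Estimated transfer time: ${HOURS} hours"
```

An estimate like this helps decide how many incremental backup cycles you need before the lag is small enough for the final cutover.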

To move an Oracle database to Amazon RDS for Oracle, Oracle Data Pump is a useful tool due to its integration with Amazon Simple Storage Service (Amazon S3). Other methods such as Oracle SQL*Loader can be quite effective when moving a large number of records. One of the most popular backup strategies for on-premises Oracle databases has been Oracle Recovery Manager (RMAN), and database administrators have successfully implemented RMAN backup and restore as part of their large-scale database migrations.

Amazon RDS does not allow you to access the operating system of the host. Therefore, RMAN restore to Amazon RDS is not a viable solution as of this writing. For our solution, we restore the RMAN backup on an Amazon Elastic Compute Cloud (Amazon EC2) instance, then we use Data Pump to export data from the EC2 server to Amazon S3 and restore that in the RDS for Oracle database.

Storage best practice

Moving a large dump file (for example, 5 TB) requires the same amount of free space in the Data Pump directory. As a best practice, you can import the data without indexes, delete the dump files from the Data Pump directory, and rebuild the indexes afterward. That way, the space occupied by a large dump file is freed and can be reused during index creation.
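The import-without-indexes pattern can be sketched as a Data Pump parameter file; the file, schema, and log names below are hypothetical placeholders:

```
# Hypothetical impdp parameter file (import_noindex.par):
# import the data first without indexes, then delete the dump file
# and rebuild the indexes in a second pass.
directory=DATA_PUMP_DIR
dumpfile=schema1.dmp
schemas=schema1
exclude=INDEX
logfile=import_noindex.log
```

You would run this with `impdp userid=... parfile=import_noindex.par`, then generate the index DDL (for example, with DBMS_METADATA.GET_DDL against the source) and run it on the target after the dump files are removed.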

Solution overview

In this post, we use a two-step approach to migrate the source database to Amazon RDS for Oracle. First, we use RMAN to restore the RMAN backup on an EC2 instance, then we use Data Pump to export data to Amazon S3 and restore that in the RDS for Oracle database. You can use RMAN instead of performing an Oracle expdp/impdp if you have a large database or large number of records in tables that might take much longer using Oracle Data Pump.

For this post, we assume we don’t have Oracle GoldenGate or Oracle Data Guard licenses. Therefore, to minimize the downtime of the migration, we follow the solution outlined in this post. To limit the size of backup files, you can run RMAN in compressed mode, depending on your license considerations. You can also apply encryption strategies before moving files from on premises to the AWS Cloud, based on your licensing agreements.
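For example, you can switch RMAN to compressed backup sets with the following configuration; note that BASIC compression is included with the database license, while the LOW, MEDIUM, and HIGH algorithms require the Advanced Compression Option:

```
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';
```

With this in place, the BACKUP commands used later in this post produce compressed backup pieces, reducing the volume of data moved to Amazon S3.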

The following diagram illustrates our solution architecture.

The workflow includes the following steps:

Perform a full RMAN backup to Amazon S3 using AWS Storage Gateway.
Perform an incremental backup using Storage Gateway.
Perform an incremental backup using Storage Gateway again to reduce the lag time.
The lag time in the context of an incremental backup for an Oracle database refers to the delay between the last recorded data change in the production database and the point at which that change is recorded in the backup.
Continue to perform incremental backups using Storage Gateway until the lag is very minimal (according to your requirements).
Stop any transactions on the database to get a final incremental copy that will be stored in the S3 bucket directly.
In the following steps, you incur downtime as the restore process happens on the EC2 instance.
Restore the database on the EC2 instance from the S3 bucket, which has the full backup as well all the incremental backups, using RMAN commands.
When the restore is complete, use Data Pump to move the data to the RDS instance.
Validate all the data on the RDS instance with the on-premises database server.
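The incremental backups in the workflow above can be sketched as an RMAN run block. The Storage Gateway mount path is a hypothetical placeholder, and the level 1 backups assume the initial full backup in step 1 was taken as INCREMENTAL LEVEL 0:

```
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/mnt/sgw-backup/incr_%u_%s_%p';
  BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL c1;
}
```

Repeating this block on a schedule keeps the backup set in Amazon S3 close to the production database, shrinking the final incremental copy taken during the downtime window.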

You should test these steps thoroughly so you can forecast an accurate downtime window to the business users. As a general rule, the restore can take roughly twice as long as the RMAN backup, although adjusting the number of backup channels and the shape of the EC2 instance might reduce the duration of the restore.

To cut down the downtime further, you can also use AWS Database Migration Service (AWS DMS), which can logically replicate data after the RMAN backup is restored on the EC2 instance. You can then decide to account for the downtime when the data is moved to Amazon RDS for Oracle using Data Pump. For more information about reducing downtime, see Plan an Oracle migration to Amazon RDS for Oracle using Oracle Data Pump and AWS DMS.

Also, the directory structure on the on-premises server must match that of the EC2 server; otherwise, you need to relocate the database files during the restore.
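If the paths do differ, RMAN can restore the data files to new locations. The following is a hedged sketch (the target path is a hypothetical placeholder; %b expands to the file name without its directory):

```
RMAN> RUN {
  SET NEWNAME FOR DATABASE TO '/opt/app/oracle/oradata/ORARMAN/%b';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
  RECOVER DATABASE;
}
```

SWITCH DATAFILE ALL updates the control file to point at the restored copies, so the database opens against the new paths.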

This solution guides you through creating an Oracle RMAN backup using Storage Gateway, and then restoring that backup on an EC2 server. After the backup is restored on the EC2 server, you use Data Pump to restore the database from Amazon EC2 to Amazon RDS for Oracle.

Prerequisites

For instructions on setting up Storage Gateway, refer to File Interface to AWS Storage Gateway.

Perform an RMAN backup to Amazon S3 using Storage Gateway

Complete the following steps to perform an RMAN backup:

Log in to the Oracle database server:

$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri May 10 14:28:40 2024
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORARMAN (DBID=3063910461)

Run the following commands to set configuration parameters:

RMAN> show all ;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORARMAN are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/app/oracle/product/19c/dbs/snapcf_orarman.f'; # default

Run the following commands to perform the backup:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/opt/oracle/backup/rman/full_%u_%s_%p';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/opt/oracle/backup/rman/full_%u_%s_%p';
new RMAN configuration parameters are successfully stored
RMAN> BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;
Starting backup at 10-MAY-24
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=22 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=5 RECID=1 STAMP=1050849118
channel ORA_DISK_1: starting piece 1 at 10-MAY-24
channel ORA_DISK_1: finished piece 1 at 10-MAY-24
piece handle=/opt/oracle/backup/rman/full_01va5bqu_1_1 tag=TAG20200911T143158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 10-MAY-24
Starting backup at 10-MAY-24
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/opt/app/oracle/oradata/ORARMAN/system01.dbf
input datafile file number=00003 name=/opt/app/oracle/oradata/ORARMAN/sysaux01.dbf
input datafile file number=00004 name=/opt/app/oracle/oradata/ORARMAN/undotbs01.dbf
input datafile file number=00007 name=/opt/app/oracle/oradata/ORARMAN/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-MAY-24
channel ORA_DISK_1: finished piece 1 at 10-MAY-24
piece handle=/opt/oracle/backup/rman/full_02va5br1_2_1 tag=TAG20200911T143201 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 10-MAY-24
Starting backup at 10-MAY-24
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=2 STAMP=1050849136
channel ORA_DISK_1: starting piece 1 at 10-MAY-24
channel ORA_DISK_1: finished piece 1 at 10-MAY-24
piece handle=/opt/oracle/backup/rman/full_03va5brg_3_1 tag=TAG20200911T143216 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-MAY-24
Starting Control File and SPFILE Autobackup at 10-MAY-24
piece handle=/opt/app/oracle/product/19c/dbs/c-3063910461-20200911-00 comment=NONE
Finished Control File and SPFILE Autobackup at 10-MAY-24

Validate the backup by making sure all the backup files are in the S3 bucket.

Restore the RMAN backup to an EC2 instance using Storage Gateway

Complete the following steps to restore the RMAN backup:

Connect to RMAN:

ip-XXX-XX-XX-XX.ec2.internal:/opt/oracle>rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Tue May 14 20:45:01 2024
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)

Perform a startup mount:

RMAN> startup mount;
Oracle instance started
database mounted
Total System Global Area 3758094912 bytes
Fixed Size 8903232 bytes
Variable Size 721420288 bytes
Database Buffers 3019898880 bytes
Redo Buffers 7872512 bytes

Perform a preview of the restore:

RMAN> restore database preview;
Starting restore at 14-MAY-24
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
20 Full 1.98G DISK 00:03:04 14-MAY-24
BP Key: 20 Status: AVAILABLE Compressed: NO Tag: TAG20210214T164006
Piece Name: /u01/backup/rmanstby/full_0lvn66j6_21_1
List of Datafiles in backup set 20
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 8621508 14-MAY-24 NO /opt/app/oracle/oradata/ORARMAN/system01.dbf
3 Full 8621508 14-MAY-24 NO /opt/app/oracle/oradata/ORARMAN/sysaux01.dbf
4 Full 8621508 14-MAY-24 NO /opt/app/oracle/oradata/ORARMAN/undotbs01.dbf
7 Full 8621508 14-MAY-24 NO /opt/app/oracle/oradata/ORARMAN/users01.dbf
using channel ORA_DISK_1
List of Archived Log Copies for database with db_unique_name ORARMAN
=====================================================================
Key Thrd Seq S Low Time
------- ---- ------- - ---------
111 1 115 A 14-MAY-24
Name: /opt/app/oracle/product/19c/dbs/arch1_115_1050842687.dbf
recovery will be done up to SCN 8621508
Media recovery start SCN is 8621508
Recovery must be done beyond SCN 8621508 to clear datafile fuzziness
Finished restore at 14-MAY-24

Restore the database. The restore command syntax will vary depending on the situation. In this example, control files from the on-premises database have been copied to the EC2 instance before starting the restore:

RMAN> restore database;
Starting restore at 14-MAY-24
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /opt/app/oracle/oradata/ORARMAN/system01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /opt/app/oracle/oradata/ORARMAN/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /opt/app/oracle/oradata/ORARMAN/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00007 to /opt/app/oracle/oradata/ORARMAN/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/backup/rmanstby/full_0lvn66j6_21_1
channel ORA_DISK_1: piece handle=/u01/backup/rmanstby/full_0lvn66j6_21_1 tag=TAG20210214T164006
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 14-MAY-24

Recover individual data files or tablespaces depending on your case. You will see what to recover when you issue the alter database open command. The following code shows some examples:

RMAN> recover tablespace system;
Starting recover at 14-MAY-24
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 14-MAY-24
RMAN> recover datafile '/opt/app/oracle/oradata/ORARMAN/undotbs01.dbf';
Starting recover at 14-MAY-24
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 14-MAY-24

Open the database:

RMAN> alter database open;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 05/14/2024 17:04:36
ORA-01113: file 7 needs media recovery
ORA-01110: data file 7: '/opt/app/oracle/oradata/ORARMAN/users01.dbf'
RMAN> recover datafile '/opt/app/oracle/oradata/ORARMAN/users01.dbf';
Starting recover at 14-MAY-24
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 14-MAY-24
RMAN> alter database open;
Statement processed

Export from Amazon EC2 to Amazon RDS for Oracle using Data Pump

After the database is restored on the EC2 instance, complete the following steps to export your data to an S3 bucket. You can also do a direct import into Amazon RDS using a database link from the EC2 server.

Perform an export using Data Pump. For this post, on our source EC2 database, we have schema1, which contains the table emp_master:

$ expdp dumpfile=schema1.dmp userid=schema1/yourpassword schemas=schema1

This will produce a dump file in the directory:

Dump file set for SCHEMA1.SYS_EXPORT_SCHEMA_01 is:
/opt/app/oracle/admin/orarman/dpdump/schema1.dmp
Job "SCHEMA1"."SYS_EXPORT_SCHEMA_01" successfully completed at Tue May 14 00:43:25 2024 elapsed 0 00:00:33

Install the AWS Command Line Interface (AWS CLI) on the EC2 database server and move the dump file to the S3 bucket. Use this AWS CLI method if Storage Gateway isn’t deployed on the EC2 server; alternatively, you can use Storage Gateway to export directly to Amazon S3 and save time.

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version
aws-cli/2.15.61 Python/3.11.8 Linux/5.10.217-183.860.amzn2int.x86_64 exe/x86_64.amzn.2
$ aws configure
$ aws s3 cp schema1.dmp s3://rmanbucket1807/export/
upload: ./schema1.dmp to s3://rmanbucket1807/export/schema1.dmp

Enable the S3 integration feature in Amazon RDS.
Move the dump file created earlier from the source EC2 instance database to the RDS Data Pump directory using the following commands:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
    p_bucket_name    => 'rmanbucket1807',
    p_s3_prefix      => 'export/schema1',
    p_directory_name => 'DATA_PUMP_DIR')
  AS TASK_ID FROM DUAL;

The preceding command will generate the task ID (in our case, 1613433867971-645).

Check the status of the task using the following command with your task ID:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1613433867971-645.log'));

Here, 1613433867971-645 is the task ID generated earlier.

A successful output should look like the following code:

[INFO] This task is about to list the Amazon S3 objects for AWS Region us-east-1, bucket name rmanbucket1807, and prefix export/schema1.
[INFO] The task successfully listed the Amazon S3 objects for AWS Region us-east-1, bucket name rmanbucket1807, and prefix export/schema1.
[INFO] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name rmanbucket1807 and key export/schema1.dmp.
[INFO] The task successfully downloaded the Amazon S3 object or objects from bucket name rmanbucket1807 with key export/schema1.dmp to the location /rdsdbdata/datapump.
[INFO] The task finished successfully.

Verify that the file exists in the Data Pump directory using the following command:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) ORDER BY mtime;

The output will look like the following code:

schema1.dmp file 16384 14-MAY-24
datapump/ directory 4096 14-MAY-24

You can import the dump file by using either the command line or the SQL Developer import jobs option. You can also use the DBMS_DATAPUMP PL/SQL package to perform the import. You can run the following command on a server with the SQL*Net port (for example, 1521) open between that server and the RDS database. The SID should be listed in the tnsnames.ora file of the server where the command is run.

impdp dumpfile=schema1.dmp userid=schema1/yourpassword@SID schemas=schema1 remap_schema=schema1:schema1
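Alternatively, the DBMS_DATAPUMP route can be run while connected to the RDS instance. The following is a minimal sketch using the dump file and schema from this walkthrough; treat the exact call sequence as illustrative:

```sql
DECLARE
  v_handle NUMBER;
  v_state  VARCHAR2(30);
BEGIN
  -- Open a schema-mode import job against the dump file staged in DATA_PUMP_DIR
  v_handle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_handle,
    filename  => 'schema1.dmp',
    directory => 'DATA_PUMP_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Restrict the import to the schema exported earlier
  DBMS_DATAPUMP.METADATA_FILTER(v_handle, 'SCHEMA_EXPR', 'IN (''SCHEMA1'')');
  DBMS_DATAPUMP.START_JOB(v_handle);
  -- Block until the job completes, then report its final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(v_handle, v_state);
  DBMS_OUTPUT.PUT_LINE('Import job finished with state: ' || v_state);
END;
/
```

Because this runs inside the database, it doesn’t require a SQL*Net connection from an external server.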

Query the target schema to check that the rows were successfully imported.
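For example, you can compare per-table row counts between the source and target (the schema name is from this walkthrough):

```sql
SELECT table_name, num_rows
FROM   all_tables
WHERE  owner = 'SCHEMA1'
ORDER  BY table_name;
```

Note that num_rows reflects optimizer statistics, so gather statistics first or use SELECT COUNT(*) per table when you need exact counts.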

Clean up

To avoid incurring future charges, delete the resources you created:

Delete the S3 bucket you created for the Oracle RMAN backup.
Delete your storage gateway.
Delete the EC2 server you used to host your Oracle database.
Delete the RDS for Oracle DB instance that you created to test this solution.

Conclusion

In this post, we showed you how to migrate an on-premises Oracle database to an RDS for Oracle database by using RMAN backup and following a two-step approach. Other options for migrating on-premises databases, such as Oracle Data Guard and Oracle GoldenGate, would require additional licensing considerations. The solution we presented is useful in cases where the size of the source database is large enough that the Oracle Data Pump option may not be ideal and Data Guard or GoldenGate are not available. Now you can migrate your on-premises Oracle databases to Amazon RDS for Oracle using RMAN backups, following the information in this post.

About the Authors

Varun Mahajan is a Principal Solutions Architect at Amazon Web Services. He works with enterprise customers, helping them align their business goals with the art of the possible using cloud-based technologies. He enjoys working with data and solving problems using the AWS database and analytics portfolio.

Raghavarao Sodabathina is a Principal Solutions Architect at AWS, focusing on data analytics, AI/ML, and cloud security. He engages with customers to create innovative solutions that address customer business problems and accelerate the adoption of AWS services. In his spare time, Raghavarao enjoys spending time with his family, reading books, and watching movies.

AWS Database Blog
