
Migrate an Oracle database to Amazon RDS for Oracle using Oracle GoldenGate Microservices Architecture

Many AWS customers use AWS managed services to handle the heavy lifting in their daily operations and modernize their applications. Among these services, Amazon Relational Database Service (Amazon RDS) is ideal for relational database deployments. Amazon RDS can greatly reduce the administrative burden of running and maintaining a relational database.

In this post, we show you how to migrate an Oracle database to an Amazon RDS for Oracle DB instance. The solution uses Oracle Data Pump for initial data load and Oracle GoldenGate Microservices Architecture installed on an Amazon Elastic Compute Cloud (Amazon EC2) instance as the Oracle GoldenGate hub for change data capture.

Solution overview

This post uses Oracle GoldenGate Microservices Architecture installed on an EC2 instance as the Oracle GoldenGate hub. Oracle recommends using a file-based initial load process for data replication in Oracle GoldenGate Microservices Architecture. For more information, see About Instantiating with Initial Load Extract. A variety of initial load techniques are available, such as Oracle Data Pump for high-speed movement of data and metadata, AWS Database Migration Service (AWS DMS) for a managed experience, and Oracle transportable tablespaces. Unlike the other mechanisms, transportable tablespaces require an outage on the source database for the final incremental backup and metadata export; the others don't require a source outage for the initial data load. In this post, you use Oracle Data Pump to load the initial data into the target RDS for Oracle DB instance and Oracle GoldenGate for change data capture (CDC).

The following diagram illustrates this architecture.

To implement this solution, you need to complete the following steps:

Install Oracle GoldenGate Microservices Architecture on Amazon EC2.
Prepare the target RDS for Oracle database instance.
Prepare the source Oracle database.
Set up the Oracle GoldenGate hub on Amazon EC2.
Complete the initial load using Oracle Data Pump.
Start change data capture.
Perform application cutover to Amazon RDS for Oracle.

Migration rollback considerations

Before you dive into the migration steps, it is important to consider the options you have in case you want to roll back the migration.

If the target is RDS for Oracle non-CDB architecture, you can initiate reverse replication before starting the application on Amazon RDS for Oracle by configuring an Oracle GoldenGate Extract to capture changes from RDS for Oracle and an Oracle GoldenGate Replicat to apply them to your on-premises database.
If the target is 21c RDS for Oracle single-tenant or multi-tenant, you can use Oracle GoldenGate for reverse replication using the per-PDB Extract feature.
If the target is 19c RDS for Oracle single-tenant or multi-tenant, you cannot use Oracle GoldenGate for reverse replication, because the per-PDB Extract feature is only supported for Oracle Database 21c onward. In this case, you can use AWS Database Migration Service (AWS DMS) to set up reverse replication, or other methods such as Oracle Data Pump, to roll back the migration.

The following table summarizes these options:

Source: On-premises non-CDB Oracle database
Target: RDS for Oracle non-CDB architecture
Rollback plan: Set up replication from RDS for Oracle to the on-premises Oracle database using Oracle GoldenGate.

Source: On-premises multitenant Oracle database
Target: RDS for Oracle CDB architecture (single-tenant or multi-tenant)
Rollback plan: For RDS for Oracle 19c, set up replication from RDS for Oracle to the on-premises Oracle database using AWS DMS. For RDS for Oracle 21c, set up replication from RDS for Oracle to the on-premises Oracle database using the Oracle GoldenGate per-PDB Extract.

In this post, the source on-premises Oracle database uses the Oracle multitenant architecture, and the target RDS for Oracle DB instance uses the multi-tenant configuration of the CDB architecture. Both databases run the Oracle Database 19c Enterprise Edition 19.4.0.0.0 engine version. For ongoing replication, you use Oracle GoldenGate 21c Microservices Architecture. The source Oracle database has the container database (CDB) ORCL and pluggable database (PDB) PDB1. The target RDS for Oracle DB instance has PDB1 as its pluggable database. In this post, we also use dms_sample as a sample schema to migrate from the on-premises pluggable database PDB1 to the RDS for Oracle pluggable database PDB1.

Additionally, if the source database doesn't run in the same AWS Region as your target RDS for Oracle instance, you can place another Oracle GoldenGate instance near the source and create a distribution path from it to the Oracle GoldenGate hub instance on Amazon EC2. This helps mitigate Extract performance issues caused by network latency.
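For reference, a distribution path can be added from the admin client of the source-side deployment. The following is a minimal sketch only; the deployment name, host names, ports, and trail names are hypothetical, and the exact URI format depends on your Oracle GoldenGate version, so verify it against the ADD DISTPATH documentation:

-- sketch: distribution path from a source-side deployment to the EC2 hub
-- (host names, ports, and trail names below are placeholders)
OGG (xx src_deployment)> ADD DISTPATH dp_pdb1 SOURCE trail://localhost:7823/services/v2/sources?trail=pdb1/lt TARGET wss://hub-host:7823/services/v2/targets?trail=pdb1/rt
OGG (xx src_deployment)> START DISTPATH dp_pdb1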

Prerequisites

Make sure you have the following prerequisites:

An active AWS account.
Connectivity between your corporate data center and your AWS account using either AWS Direct Connect or AWS Site-to-Site VPN. For more information, see Network-to-Amazon VPC connectivity options.
An EC2 Linux instance in the target AWS account with sufficient storage for the Oracle GoldenGate hub. For more information, see Disk Requirements.
An Amazon Virtual Private Cloud (Amazon VPC) security group associated with the target RDS for Oracle DB instance that allows inbound connections from the EC2 instance used as the Oracle GoldenGate Microservices Architecture hub.

We recommend reviewing the licensing requirements for Oracle GoldenGate. For more information, refer to Oracle GoldenGate Licensing Information. For more on RDS for Oracle licensing options, refer to RDS for Oracle licensing options.

Limitations

This solution has the following limitations:

Due to its managed service nature, you can't use RMAN to restore a database backup to an RDS for Oracle DB instance.
Oracle GoldenGate does not support Oracle sequences by default. You must install some database procedures to support Oracle sequences. For more information, refer to Support for Oracle Sequences.
Oracle GoldenGate does not enable sequence replication for schemas that contain system-generated sequences, such as those created for identity columns.
Refer to Understanding What’s Supported to review the source database and get a list of unsupported data types and objects.

Setting up Oracle GoldenGate Microservices Architecture on Amazon EC2

Oracle GoldenGate 21c (and higher releases) introduces unified build support, so a single software installation supports capturing and applying replicated data between different major Oracle Database versions (like 11g Release 2 to 21c). If your source and target database software versions differ, Oracle GoldenGate 21c doesn’t need to be installed with different Oracle binaries. This is possible because an Oracle GoldenGate installation includes the required Oracle Database client libraries without requiring a separate database ORACLE_HOME installation. Refer to Installing Oracle GoldenGate Microservices Architecture to install Oracle GoldenGate Microservices Architecture on the EC2 instance.

In this post, you use the following configurations to install Oracle GoldenGate Microservices Architecture on your EC2 instance. For more details about Oracle GoldenGate Microservices Architecture and its components, see Oracle GoldenGate Microservices Architecture.

OGG_SOFTWARE_HOME = /data/app/oracle/ogg_home
DEPLOYMENT_NAME = ogg21c
SERVICEMANAGER_DEPLOYMENT_HOME = /data/app/oracle/deployments/ServiceManager
OGG_DEPLOYMENT_HOME = /data/app/oracle/deployments/ogg21c
ADMINISTRATOR_USER = oggadmin
ENV_TNS_ADMIN = /data/app/oracle/ogg_home
SECURITY_ENABLED = TRUE
SERVER_WALLET = /data/wallet_ogg/$(hostname)
CLIENT_WALLET = /data/wallet_ogg/dist_client
REGISTER_SERVICEMANAGER_AS_A_SERVICE = TRUE
PORT_SERVICEMANAGER = 7820
PORT_ADMINSRVR = 7821
PORT_DISTSRVR = 7822
PORT_RCVRSRVR = 7823
PORT_PMSRVR = 7824
UDP_PORT_PMSRVR = 7825
OGG_SCHEMA = ggadmin
METRICS_SERVER_ENABLED = TRUE
PMSRVR_DATASTORE_TYPE = BDB
PMSRVR_DATASTORE_HOME = /data/app/oracle/deployments/ogg21c/pmsrvr
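These values correspond to settings in the oggca.sh response file used by the Oracle GoldenGate Configuration Assistant. The following is a sketch of a silent-mode deployment creation under that assumption; the response file path and the CONFIGURATION_OPTION value are assumptions, so verify them against the oggca.rsp template shipped with your release:

# sketch: create the deployment in silent mode from a response file
# (file path is a placeholder; only a few of the parameters above are shown)
[]$ cat /tmp/oggca.rsp
CONFIGURATION_OPTION=ADD_NEW_DEPLOYMENT
DEPLOYMENT_NAME=ogg21c
ADMINISTRATOR_USER=oggadmin
SERVICEMANAGER_DEPLOYMENT_HOME=/data/app/oracle/deployments/ServiceManager
OGG_DEPLOYMENT_HOME=/data/app/oracle/deployments/ogg21c
PORT_SERVICEMANAGER=7820
...
[]$ /data/app/oracle/ogg_home/bin/oggca.sh -silent -responseFile /tmp/oggca.rsp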

We also recommend configuring an NGINX reverse proxy to provide access to the microservices and enable enhanced security and manageability. The Oracle GoldenGate reverse proxy feature provides a single point of contact for the GoldenGate microservices associated with a GoldenGate deployment. Without a reverse proxy, the microservices are accessed using a URL consisting of a host name or IP address and separate port numbers, one for each of the services. With a reverse proxy, port numbers are not required to connect to the microservices, because they are replaced with the deployment name, and a virtual IP is used for the host name. For the sake of simplicity, we have not configured an NGINX proxy in this post.
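If you do configure NGINX, Oracle GoldenGate ships a ReverseProxySettings utility that generates the proxy configuration for a deployment. The following is a sketch, assuming NGINX is already installed and the Service Manager listens on port 7820 as configured earlier; verify the utility's location and flags for your release:

# sketch: generate an NGINX configuration for the ogg21c deployment
# (output file name and installation paths are placeholders)
[]$ cd /data/app/oracle/ogg_home/lib/utl/reverseproxy
[]$ ./ReverseProxySettings -u oggadmin -t nginx -o ogg_proxy.conf http://localhost:7820
[]$ sudo cp ogg_proxy.conf /etc/nginx/conf.d/
[]$ sudo nginx -s reload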

Prepare the target RDS for Oracle DB instance

In this step, you set up the target RDS for Oracle DB instance for migration.

Create and modify the RDS for Oracle DB instance for migration

Complete the following steps to create and configure your RDS for Oracle DB instance:

Create the RDS for Oracle DB instance and pluggable database PDB1. We recommend using a custom DB parameter group while creating the DB instance with engine version 19c. For more information, see Creating and connecting to an Oracle DB instance.
We recommend creating the target DB instance with a larger instance class to provide additional compute capacity and speed up the initial data load.
Keep Multi-AZ disabled for the target database instance during the initial data load.
Allocate enough initial storage to the target RDS for Oracle DB instance for backup dump files and database tablespaces. You can also enable storage auto scaling for the DB instance. For more information, refer to Working with storage for Amazon RDS DB instances.
Create the required tablespaces in the target DB instance.
Modify the size of the database tablespaces and the TEMP tablespace to match the source database:

-- alter tablespace size
SQL(target-pdb)> ALTER TABLESPACE example-tablespace RESIZE nG;

-- alter TEMP tablespace size
SQL(target-pdb)> EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','nG');

Create database roles in the target DB instance. Run the following query in the source database to generate the DDL. Review the generated SQL statements before running them in the target database.

SQL(target-pdb)> SELECT 'create role '|| ROLE ||';' FROM dba_roles where ORACLE_MAINTAINED='N';

Create database profiles in the target DB instance. Run the following query in the source database to generate the DDL. Review the generated SQL statements before running them in the target database.

SQL(target-pdb)> SELECT dbms_metadata.get_ddl('PROFILE', profile) as profile_ddl FROM
(SELECT distinct profile FROM dba_profiles);

Create public synonyms in the target database. Run the following query in the source database to generate the DDL. Review the generated SQL statements before running them in the target database.

SQL(target-pdb)> select 'CREATE '|| decode(owner,'PUBLIC','PUBLIC ',null) ||'SYNONYM ' ||
decode(owner,'PUBLIC',null, owner || '.') ||lower(synonym_name) || ' FOR ' ||
lower(table_owner) ||'.' || lower(table_name) ||
decode(db_link,null,null,'@'||db_link) || ';' "public_synonym" from sys.dba_synonyms
where owner='PUBLIC'
and table_owner in ('DMS_SAMPLE')
order by owner;

Make sure the redo logs in the target are the same size as in the source database. For more information, see Resizing online redo logs.
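On RDS for Oracle, you resize redo logs by adding appropriately sized log groups and dropping the old ones through the rdsadmin package. This is a sketch, assuming 1 GB redo logs on the source; drop only groups that are INACTIVE:

-- sketch: add a 1 GB redo log group, switch, and drop an old inactive group
SQL(target-pdb)> EXEC rdsadmin.rdsadmin_util.add_logfile(bytes => 1073741824);
SQL(target-pdb)> EXEC rdsadmin.rdsadmin_util.switch_logfile;
SQL(target-pdb)> SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;
SQL(target-pdb)> EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 1);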
Verify that the source and target DB instances have the same parameter configuration. Review the RDS DB parameter groups and check the database properties using the following SQL:

SQL(target-pdb)> select property_name,property_value from database_properties;

We recommend that you temporarily disable automated backups of the target RDS for Oracle DB instance to speed up the Oracle Data Pump import. There will be a brief outage during this activity. Set the backup retention period to 0 to disable automated backups. For more information, see Disabling automated backups.
Modify job_queue_processes to 0 in the DB parameter group of the target DB instance to stop it from running database jobs during the migration process. For more information, see Modifying parameters in a DB parameter group. Both of these changes are sketched with the AWS CLI after this list.
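The following sketch makes both of the preceding changes with the AWS CLI; the instance identifier and parameter group name are placeholders:

# sketch: disable automated backups and pause job queue processing
[]$ aws rds modify-db-instance --db-instance-identifier orcl-xxxxx \
    --backup-retention-period 0 --apply-immediately
[]$ aws rds modify-db-parameter-group --db-parameter-group-name custom-ora19-pg \
    --parameters "ParameterName=job_queue_processes,ParameterValue=0,ApplyMethod=immediate"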

Set the ENABLE_GOLDENGATE_REPLICATION parameter

The ENABLE_GOLDENGATE_REPLICATION parameter must be set to true in order to support logical replication. Modify ENABLE_GOLDENGATE_REPLICATION to true in the DB parameter group of the target RDS for Oracle DB instance using the following code. For more information, see Modifying parameters in a DB parameter group.

SQL(target-pdb)> show parameter ENABLE_GOLDENGATE_REPLICATION
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_goldengate_replication        boolean     TRUE

Create an Oracle GoldenGate user account in RDS for Oracle

Oracle GoldenGate runs as a database user and requires the appropriate database privileges. If the GoldenGate administrator user has the database administrator (DBA) role, additional object privileges are not needed. However, security constraints might prevent you from granting the DBA role to the GoldenGate administrator user. The DBA role is not necessarily required for Oracle GoldenGate. For more information on user privileges, refer to Grant User Privileges for Oracle Database 21c and Lower and Grant account privileges on the target database. For simplicity, we have granted DBA privileges to the Oracle GoldenGate administrator user in this post.

Connect to the respective PDB and use the following statements to create an Oracle GoldenGate user account named ggadmin on the target DB instance:

SQL(target-pdb)> CREATE TABLESPACE GG_DATA DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M;
Tablespace created.

SQL(target-pdb)> CREATE USER ggadmin IDENTIFIED by xxxx DEFAULT TABLESPACE GG_DATA
TEMPORARY TABLESPACE temp CONTAINER=CURRENT;
User created.

SQL(target-pdb)> GRANT CREATE SESSION, ALTER ANY TABLE TO ggadmin CONTAINER=CURRENT;
Grant succeeded.

SQL(target-pdb)> GRANT DBA, RESOURCE TO ggadmin CONTAINER=CURRENT;
Grant succeeded.

SQL(target-pdb)> EXEC rdsadmin.rdsadmin_dbms_goldengate_auth.grant_admin_privilege
(grantee =>'GGADMIN',privilege_type =>'apply',grant_select_privileges => true,
do_grants =>TRUE);
PL/SQL procedure successfully completed.

Configure the streams pool

For Integrated Replicat, Oracle recommends allocating 1 GB of STREAMS_POOL_SIZE per Replicat process, plus an additional 25%. Modify the STREAMS_POOL_SIZE parameter in the DB parameter group of the target RDS for Oracle DB instance using the following code. For more information, see Modifying parameters in a DB parameter group.

SQL(target-pdb)> show parameter streams_pool_size
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 1280M
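On Amazon RDS, STREAMS_POOL_SIZE is changed through the DB parameter group rather than with ALTER SYSTEM. The following is a sketch with the AWS CLI; the parameter group name is a placeholder, and the value is specified in bytes (1280M = 1342177280):

# sketch: set streams_pool_size to 1280M in the custom parameter group
[]$ aws rds modify-db-parameter-group --db-parameter-group-name custom-ora19-pg \
    --parameters "ParameterName=streams_pool_size,ParameterValue=1342177280,ApplyMethod=immediate"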

Prepare the source Oracle database

Oracle GoldenGate requires additional configuration on the Oracle database before you can use it as a source for ongoing replication.

Unsupported objects

Use the dictionary view DBA_GOLDENGATE_SUPPORT_MODE to determine if there are objects that are not fully supported for extraction by Oracle GoldenGate due to data type replication limitations. Tables returned by the following query must be excluded from capture using the GoldenGate Extract parameter TABLEEXCLUDE owner.object_name. These objects must be manually copied to the target database during the cutover phase. Refer to Understanding What's Supported to review the source database and get a list of unsupported data types and objects.

SQL(source-pdb)> SELECT owner, object_name FROM DBA_GOLDENGATE_SUPPORT_MODE WHERE
support_mode NOT IN ('FULL') and owner='DMS_SAMPLE';
no rows selected

Row uniqueness

Oracle GoldenGate requires a unique row identifier on the source and target tables to locate the correct target rows for the replication of update and delete statements. This is normally taken care of with primary key or unique key indexes. Use the data dictionary view DBA_GOLDENGATE_NOT_UNIQUE to find the tables without primary keys and tables with unique indexes that have NULL values if the underlying database is 12c release 2 or later.

SQL(source-pdb)> select owner,table_name,bad_column
from DBA_GOLDENGATE_NOT_UNIQUE where owner='DMS_SAMPLE';
OWNER                          TABLE_NAME           BAD_COLUMN
------------------------------ -------------------- ------------------------------
DMS_SAMPLE                     MLB_DATA             N
DMS_SAMPLE                     NFL_DATA             N
DMS_SAMPLE                     NFL_STADIUM_DATA     N

When you enable supplemental logging, Oracle GoldenGate enables TRANDATA on all columns of such tables, which results in additional information being recorded in the redo log of the source database. We recommend assessing the impact of enabling supplemental logging for such tables in a non-production environment.

As an alternative, you can create a substitute key using columns that are guaranteed to have unique values. This replacement key is specified by adding a KEYCOLS clause to the Extract TABLE and Replicat MAP parameters. Any existing primary or unique keys that Oracle GoldenGate discovers are replaced with the provided key.
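For illustration, if one of the tables flagged earlier had no primary key but a column combination known to be unique, the Extract and Replicat entries might look like the following sketch; the column names here are hypothetical:

-- sketch: substitute key for a table without a primary key (hypothetical columns)
-- Extract parameter file
TABLE pdb1.dms_sample.nfl_stadium_data, KEYCOLS (team, stadium);
-- Replicat parameter file
MAP pdb1.dms_sample.nfl_stadium_data, TARGET pdb1.dms_sample.nfl_stadium_data, KEYCOLS (team, stadium);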

Enable ARCHIVELOG mode for database

The source database must be set to ARCHIVELOG mode. The Oracle GoldenGate Extract process mines Oracle redo for data replication. The LogMiner server mines redo from the redo log buffer, online redo logs, and archive log files when using Extract in integrated capture mode.

SQL(source-cdb)> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL(source-cdb)> startup mount
ORACLE instance started.

SQL(source-cdb)> alter database archivelog;
Database altered.

SQL(source-cdb)> alter database open;
Database altered.

SQL(source-cdb)> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG

It’s crucial to have an archive log retention policy in place that retains archive logs for a sufficient duration, allowing GoldenGate to read and process them effectively. Additionally, adequate storage space should be maintained on the source database host for these archive logs. This becomes particularly important when the Extract process needs to perform a recovery following events like network outages or maintenance on the Oracle GoldenGate Hub.

Enable database force logging mode and minimal database-level supplemental logging

To make sure that the required redo information is contained in the Oracle redo logs for data replication, NOLOGGING activities that might prevent the required redo information from being generated must be overridden. Enable database force logging mode if you are replicating the full database, or you can consider enabling it on specific tablespaces. Before you configure Oracle GoldenGate, it’s critical to examine the implications of force logging mode on database performance.

Oracle recommends putting the source database into forced logging mode and enabling minimal supplemental logging at the database level when using Oracle GoldenGate. This adds row chaining information, if they exist, to the redo log for update operations. For more information, see Configuring Logging Properties. The redo volume increases as the result of this force logging. We recommend assessing the impact of changes on the database in a non-production environment.

Enable force logging on the source database:

SQL(source-cdb)> SELECT FORCE_LOGGING FROM V$DATABASE;
FORCE_LOGGING
---------------------------------------
NO

SQL(source-cdb)> ALTER DATABASE FORCE LOGGING;
Database altered.

SQL(source-cdb)> ALTER SYSTEM SWITCH LOGFILE;
System altered.

SQL(source-cdb)> SELECT FORCE_LOGGING FROM V$DATABASE;
FORCE_LOGGING
---------------------------------------
YES

Enable minimal supplemental logging on the source database:

SQL(source-cdb)> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEMENTAL_LOG_DATA_MIN
---------------------------
NO

SQL(source-cdb)> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

SQL(source-cdb)> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEMENTAL_LOG_DATA_MIN
---------------------------
YES

Configure the streams pool

For the Oracle GoldenGate Integrated Extract process, an area of Oracle memory called the streams pool (STREAMS_POOL_SIZE) must be configured in the System Global Area (SGA) of the database. The size requirement of the streams pool is based on the number of Extract processes. MAX_SGA_SIZE (which is not the same as the database initialization parameter, SGA_MAX_SIZE) controls the amount of shared memory used by the LogMiner server. MAX_SGA_SIZE takes memory from the memory allocated to STREAMS_POOL_SIZE. By default, one integrated Extract requests the LogMiner server to run with MAX_SGA_SIZE of 1 GB and a PARALLELISM of 2. In this post, you create one Integrated Extract and allocate 1 GB. You also add 25% additional memory for other processes.

SQL(source-cdb)> alter system set STREAMS_POOL_SIZE=1280M scope=both;
System altered.

SQL(source-cdb)> show parameter STREAMS_POOL_SIZE
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 1280M

Set the ENABLE_GOLDENGATE_REPLICATION parameter

Configure the ENABLE_GOLDENGATE_REPLICATION parameter to true in the source database:

SQL(source-cdb)> ALTER SYSTEM SET enable_goldengate_replication=TRUE scope=both;
System altered.

SQL(source-cdb)> select NAME,DESCRIPTION from DBA_FEATURE_USAGE_STATISTICS
where NAME='GoldenGate';
NAME                 DESCRIPTION
-------------------- ---------------------------------------------
GoldenGate           Oracle GoldenGate Capabilities are in use.

Create user accounts for Oracle GoldenGate

The user privileges that are required for connecting to an Oracle database from Oracle GoldenGate depend on the type of user. Privileges should be granted depending on the actions that the user needs to perform as the GoldenGate user on the source and target databases. If the GoldenGate administrator user has the DBA role, additional object privileges are not needed. However, security constraints might prevent you from granting the DBA role to the GoldenGate administrator user. The DBA role is not necessarily required for Oracle GoldenGate. For more information on user privileges, refer to Grant User Privileges for Oracle Database 21c and Lower. For simplicity, we grant DBA privileges to the database user in this post.

Create the following ggadmin database user account in each pluggable database you plan to replicate from:

SQL(source-pdb)>CREATE TABLESPACE GG_DATA DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M;
Tablespace created.

SQL(source-pdb)>CREATE USER ggadmin IDENTIFIED by xxxx DEFAULT TABLESPACE GG_DATA
TEMPORARY TABLESPACE temp CONTAINER=CURRENT;
User created.

SQL(source-pdb)>GRANT CREATE SESSION, ALTER ANY TABLE TO ggadmin CONTAINER=CURRENT;
Grant succeeded.

SQL(source-pdb)>GRANT DBA, RESOURCE TO ggadmin CONTAINER=CURRENT;
Grant succeeded.

SQL(source-pdb)>EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('ggadmin');
Grant succeeded.

Create the c##ggadmin common database user account in the source container database:

SQL(source-cdb)>CREATE TABLESPACE GG_DATA DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M;
Tablespace created.

SQL(source-cdb)>CREATE USER c##ggadmin IDENTIFIED BY xxxxx CONTAINER=ALL
DEFAULT TABLESPACE GG_DATA TEMPORARY TABLESPACE TEMP;
User created.

SQL(source-cdb)>GRANT ALTER SYSTEM TO c##ggadmin CONTAINER=ALL;
Grant succeeded.

SQL(source-cdb)>GRANT CREATE SESSION, ALTER ANY TABLE TO c##ggadmin CONTAINER=ALL;
Grant succeeded.

SQL(source-cdb)>GRANT DBA, RESOURCE TO c##ggadmin CONTAINER=ALL;
Grant succeeded.

SQL(source-cdb)>EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE
('c##ggadmin',CONTAINER=>'ALL');
PL/SQL procedure successfully completed.

Verify the database user account:

SQL(source-cdb)> SELECT name, username FROM cdb_goldengate_privileges a,
v$pdbs b
WHERE a.con_id = b.con_id
UNION
SELECT decode(a.con_id,1,'CDB$ROOT'), username FROM cdb_goldengate_privileges a,
v$pdbs b
WHERE a.con_id=1;
NAME       USERNAME
---------- --------------------
CDB$ROOT   C##GGADMIN
PDB1       C##GGADMIN
PDB1       GGADMIN

If you have Oracle Database 21c as the source database and you plan to use the per-PDB Extract feature, you don't need to create the common user c##ggadmin in the source container database. For more information, see Requirements for Configuring Container Databases for Oracle GoldenGate.
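For reference, with a 21c source the per-PDB flow connects directly to the PDB as the local ggadmin user. The following is a sketch only, because this post's 19c source requires the common-user flow shown later:

-- sketch: per-PDB Extract registration on an Oracle Database 21c source
OGG (xx ogg21c)> dblogin useridalias source_pdb1
OGG (xx ogg21c as source_pdb1)> add extract ext_pdb1, integrated tranlog, begin now
OGG (xx ogg21c as source_pdb1)> register extract ext_pdb1 database
OGG (xx ogg21c as source_pdb1)> start extract ext_pdb1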

Set up the Oracle GoldenGate hub on Amazon EC2

You can now configure the Oracle GoldenGate hub on Amazon EC2, which already has Oracle GoldenGate Microservices installed. In this section, you create the tnsnames.ora file, database credentials, Extract process, heartbeat table, and checkpoint table. Because you’re using Oracle GoldenGate Microservices Architecture as the Oracle GoldenGate hub on Amazon EC2, you don’t need to create a distribution path to transfer the trail files from an Extract (source system) to a Replicat (target system). For more information on the Oracle GoldenGate distribution service, see Distribute.

Create tnsnames.ora

Create a tnsnames.ora file at the location defined for the ENV_TNS_ADMIN parameter during Oracle GoldenGate Microservices Architecture installation. Add the following entry names to tnsnames.ora for the source and target databases:

[]$ pwd
/data/app/oracle/ogg_home

[]$ vi tnsnames.ora
# tns alias for source CDB
orcl=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)

# tns alias for source PDB
pdb1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1)
)
)

# tns alias for target PDB
target_pdb1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = orcl-xxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1)
)
)

Create Oracle GoldenGate database credentials

Connect to the administration server of your Oracle GoldenGate deployment using the Oracle GoldenGate admin client. Use the public IPv4 DNS name of your EC2 instance.

[hub]$ cd $OGG_HOME/bin
[hub bin]$ ./adminclient
OGG (not connected)> connect https://xxxx.amazonaws.com:7821 deployment ogg21c as oggadmin
Password for 'oggadmin' at 'https://xxxx.amazonaws.com:7821/ogg21c':

Create a credential store for the source container database and the pluggable databases being migrated. If you’re migrating multiple pluggable databases, add credentials for all the respective pluggable databases. If you have Oracle Database 21c as your source database and you plan to use the per-PDB-Extract feature, you don’t need to add the common user c##ggadmin in the Oracle GoldenGate credential store.

OGG(xx ogg21c)> ALTER CREDENTIALSTORE ADD USER c##ggadmin@orcl ALIAS source_cdb
DOMAIN OracleGoldenGate
Password:
2024-01-20  INFO    OGG-15102  Credential store created.
2024-01-20  INFO    OGG-15116  No information found in credential store.
2024-01-20  INFO    OGG-15114  Credential store altered.

OGG(xx ogg21c)> ALTER CREDENTIALSTORE ADD USER ggadmin@pdb1 ALIAS source_pdb1
DOMAIN OracleGoldenGate
Password:
2024-01-20T12:52:38Z  INFO    OGG-15114  Credential store altered.

Create a credential store for the target pluggable databases to which you are migrating. For a multitenant database, separate credentials must be created for the PDBs being migrated to, not for the CDB.

OGG(xx ogg21c)> ALTER CREDENTIALSTORE ADD USER ggadmin@target_pdb1 ALIAS target_pdb1
DOMAIN OracleGoldenGate
Password:
2024-01-20T20:01:36Z  INFO    OGG-15114  Credential store altered.

OGG (xx ogg21c)> INFO CREDENTIALSTORE
Default domain: OracleGoldenGate
Alias: source_cdb
Userid: c##ggadmin@orcl
Alias: source_pdb1
Userid: ggadmin@pdb1
Alias: target_pdb1
Userid: ggadmin@target_pdb1

Create a heartbeat table for the source database

The GoldenGate heartbeat table is useful to monitor the replication latency between the source and target databases. For a multitenant database, the heartbeat table must be located in the PDB that is being migrated. Refer to ADD HEARTBEATTABLE for more information. For Oracle multitenant databases, the heartbeat objects and jobs are created in the user’s schema that is connected to the database using the DBLOGIN command. Use Oracle GoldenGate adminclient to add a heartbeat table in the source pluggable database.

-- login into source PDB1
OGG (xx ogg21c)> dblogin useridalias source_pdb1
Successfully logged into database.

-- add heartbeat table
OGG (xx ogg21c as source_pdb1)> add heartbeattable
2024-01-20  INFO    OGG-14101  Successfully added heartbeat table.

-- get heartbeat table details
OGG (xx ogg21c as source_pdb1)> info heartbeattable
2024-01-20  INFO    OGG-08100  HEARTBEAT table ggadmin.gg_heartbeat exists.
<output truncated…>
2024-01-20  INFO    OGG-08100  Retention time: 30 days.

If you’re migrating multiple pluggable databases, log in to the respective pluggable databases to create a heartbeat table.

Enable supplemental logging for source tables and schemas

Enable supplemental logging at the table level before you create the primary Extract. You can use either ADD TRANDATA or ADD SCHEMATRANDATA to enable supplemental logging at the table level. We recommend using ADD SCHEMATRANDATA to enable supplemental logging for current and future tables in a given schema.

Starting with GoldenGate version 12.2, there is tighter integration with Oracle Data Pump export and import. If you use Oracle Data Pump for the initial data load, use the PREPARECSN option while enabling supplemental logging to make sure that the Data Pump export has information about the instantiation CSNs for each table in the export. On import, this populates the system tables and views with instantiation CSNs.

In this post, you use ADD SCHEMATRANDATA with PREPARECSN WAIT to enable the supplemental logging. The WAIT option specifically instructs the system to wait for in-flight transactions and prepare table instantiation.

-- login into source PDB1
OGG (xx ogg21c)> dblogin useridalias source_pdb1
Successfully logged into database.

-- Enable supplemental logging
OGG (xx ogg21c as source_pdb1)> ADD SCHEMATRANDATA DMS_SAMPLE PREPARECSN WAIT
2024-01-20  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "DMS_SAMPLE".
2024-01-20  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "DMS_SAMPLE".
2024-01-20  INFO    OGG-10154  Schema level PREPARECSN set to mode WAIT on schema "DMS_SAMPLE"

-- Get supplemental logging details
OGG (xx ogg21c as source_pdb1)> info schematrandata DMS_SAMPLE
Schema level supplemental logging, excluding non-validated keys, is enabled on schema "DMS_SAMPLE".
Schema level supplemental logging is enabled on schema "DMS_SAMPLE" for all scheduling columns.
Schema "DMS_SAMPLE" has 16 prepared tables for instantiation

If you’re migrating multiple pluggable databases, log in to the respective pluggable databases to enable supplemental logging.

Configure and start Extract for the source

Connect to the pluggable database using the DBLOGIN command from Oracle GoldenGate adminclient to add the extract. You also connect to the CDB using the alias source_cdb to register the Extract.

Add Extract:

-- login into source PDB1
OGG (xx ogg21c)> dblogin useridalias source_pdb1
Successfully logged into database.

-- Add Extract
OGG (xx ogg21c as source_pdb1)> add extract ext_pdb1, integrated tranlog, begin now
2024-01-20T INFO OGG-08100 Integrated Extract added.

Edit Extract parameters:

-- Edit Extract parameters
OGG (xx ogg21c as source_pdb1)> edit params ext_pdb1
2024-01-20T INFO OGG-10183 Parameter file EXT_PDB1.prm passed validity check.

-- add the following parameters in the Extract parameter file and save
EXTRACT ext_pdb1
USERIDALIAS source_cdb DOMAIN OracleGoldenGate
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024)

DDL EXCLUDE ALL
EXTTRAIL pdb1/lt
TABLE PDB1.DMS_SAMPLE.*;

Add the Extract trail:

-- Add Extract Trail
OGG (xx ogg21c as source_pdb1)> ADD EXTTRAIL pdb1/lt, EXTRACT ext_pdb1
2024-01-20T INFO OGG-08100 EXTTRAIL added.

Register and start the Extract:

-- Connect to root container
OGG (xx ogg21c as source_pdb1)> dblogin useridalias source_cdb
Successfully logged into database CDB$ROOT.

-- Register Extract
OGG (xx ogg21c as source_cdb)> REGISTER EXTRACT ext_pdb1 DATABASE CONTAINER (pdb1)
2024-01-20T INFO OGG-02003 Extract group EXT_PDB1 successfully registered with database at SCN 4151294.

-- Start Extract
OGG (xx ogg21c as source_cdb)> start extract ext_pdb1
2024-01-20T INFO OGG-00975 Extract group EXT_PDB1 starting.
2024-01-20T INFO OGG-15426 Extract group EXT_PDB1 started

-- Check status
OGG (xx ogg21c as source_cdb)> info all
Program     Status      Group       Type          Lag at Chkpt  Time Since Chkpt
EXTRACT     RUNNING     EXT_PDB1    INTEGRATED    00:01:34      00:00:07

If you want to migrate multiple PDBs, repeat the same steps for <schema-name>@<pdb-name> to create separate Extracts. If you have Oracle Database 21c as your source database and you plan to use the per-PDB Extract feature, you don't need to connect to the root container with the common user c##ggadmin to register the Extract. Instead, you can connect as the local ggadmin user in the specific pluggable database to create and register the Extract.

We recommend freezing DDL changes in the source database during the migration. Once the initial load is complete and the source and target databases are in sync through Replicat, you can apply DDL changes to the source database and use the DDL INCLUDE MAPPED parameter in Extract to enable DDL replication until you conduct the cutover to the target database. For more information on DDL replication, refer to DDL Replication.
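If you later enable DDL replication, the change is limited to the Extract parameter file shown earlier; a sketch under that assumption:

-- sketch: Extract parameter file with DDL replication enabled for mapped objects
EXTRACT ext_pdb1
USERIDALIAS source_cdb DOMAIN OracleGoldenGate
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024)
DDL INCLUDE MAPPED
EXTTRAIL pdb1/lt
TABLE PDB1.DMS_SAMPLE.*;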

Create a GoldenGate heartbeat table for the target database

A GoldenGate heartbeat table is useful to monitor the replication latency between the source and target databases. For a multitenant database, the heartbeat table must be located in the PDB being replicated. For more details on adding a heartbeat table, see ADD HEARTBEATTABLE. For Oracle multitenant databases, the heartbeat objects and jobs are created in the schema of the user that is connected to the database using the DBLOGIN command.

-- login into target PDB1
OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

-- add heartbeat table
OGG (xx ogg21c as target_pdb1)> ADD HEARTBEATTABLE, TARGETONLY
2024-01-20T  INFO    OGG-14101  Successfully added heartbeat table.

-- get heartbeat table details
OGG (xx ogg21c as target_pdb1)> INFO HEARTBEATTABLE
2024-01-20T  INFO    OGG-08100  HEARTBEAT table ggadmin.gg_heartbeat exists.
2024-01-20T  INFO    OGG-08100  HEARTBEAT table ggadmin.gg_heartbeat_seed exists.
2024-01-20T  INFO    OGG-08100  HEARTBEAT table ggadmin.gg_heartbeat_seed supplemental logging DISABLED.
2024-01-20T  INFO    OGG-08100  HEARTBEAT table ggadmin.gg_heartbeat_history partitioning DISABLED.
<output truncated…>
2024-01-20T  INFO    OGG-08100  Retention time: 30 days.

If you’re migrating multiple PDBs, repeat the same steps to create a heartbeat table in the PDBs being migrated.

Create a GoldenGate checkpoint table in the target database

Use ADD CHECKPOINTTABLE to create a checkpoint table in the target database. We recommend using a checkpoint table with Replicat.

-- login into target PDB1
OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

-- add checkpoint table
OGG (xx ogg21c as target_pdb1)> ADD CHECKPOINTTABLE ggadmin.gg_checkpoint

-- get checkpoint table details
OGG (xx ogg21c as target_pdb1)> INFO CHECKPOINTTABLE ggadmin.gg_checkpoint
2024-01-20T20:15:47Z INFO OGG-08100 Checkpoint table GGADMIN.GG_CHECKPOINT has been created on 2024-01-20T20:15:47Z.

Complete the initial load using Oracle Data Pump

Before initiating the Oracle Data Pump export of the source database schemas, it is crucial to verify that active transactions that started before the GoldenGate Extract was created have completed. This ensures that every ongoing transaction is captured by either the GoldenGate Extract or the Oracle Data Pump export.

The source for this information is the V$TRANSACTION performance view. After starting your Extract, determine what transactions exist in this view. When a transaction completes, it no longer appears in the view, so you might have to query it multiple times. If a particular transaction runs for longer than expected, investigate who is running it and what it is doing. Ultimately, you might need to kill the session that owns the long-running transaction before you can begin instantiating your target system.

Use the following query on the source database to determine when the transactions that started before the Extract was created have committed or rolled back, making it safe to start the backup of the source database:

SQL(source-pdb)> SELECT capture_name, c.start_scn, t.start_scn
FROM dba_capture c, v$transaction t WHERE t.start_scn < c.start_scn;

no rows selected

Back up the source database

We use the Oracle Data Pump Export utility (the expdp command) to take a backup of the required schemas. You don't need to use the Data Pump parameter FLASHBACK_SCN, because the objects being replicated by Oracle GoldenGate were already prepared (using PREPARECSN while adding schematrandata) in an earlier step. Therefore, after importing the objects into the target database, Replicat determines which transactions from the trail files to apply without introducing duplicate data.

SQL(source-pdb)> create directory expdp_dir as '/oraarchive/export_dump/';
Directory created.

[]$ expdp ggadmin@pdb1 directory=expdp_dir schemas=dms_sample dumpfile=dms_sample_%U.dmp logfile=expdp_dump.log
Export: Release 19.0.0.0.0 - Production on Sat Jan 20 20:23:49 2024
Version 19.19.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
FLASHBACK automatically enabled to preserve database integrity.
Starting "GGADMIN"."SYS_EXPORT_SCHEMA_01":  ggadmin/********@pdb1 directory=expdp_dir schemas=dms_sample dumpfile=dms_sample_%U.dmp logfile=expdp_dump.log
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
<truncated … output>
Master table "GGADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for GGADMIN.SYS_EXPORT_SCHEMA_01 is:
/oraarchive/export_dump/dms_sample_01.dmp
Job "GGADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Jan 20 20:25:25 2024 elapsed 0 00:01:30

Transfer dump files to the target RDS for Oracle DB instance

Create a database link between your source DB instance and your target DB instance. Your local Oracle instance must have network connectivity to the DB instance in order to create a database link and transfer your export dump file.

SQL(source-pdb)> CREATE DATABASE LINK to_target_rds
CONNECT TO ADMIN IDENTIFIED BY xxxx
USING '(DESCRIPTION =(ADDRESS_LIST =(ADDRESS =(PROTOCOL = TCPS)
(HOST = orcl-xxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521)))
(CONNECT_DATA =(SID = PDB1))
(SECURITY =(SSL_SERVER_CERT_DN ="C=US,ST=Washington,L=Seattle,
O=Amazon.com,OU=RDS,CN=orcl-xxxxx.us-east-1.rds.amazonaws.com")))';

Database link created.

Use DBMS_FILE_TRANSFER to copy the dump files from the source database instance to the target DB instance. You can also transfer dump files from the source database to the target DB instance using Amazon Elastic File System (Amazon EFS) integration with Amazon RDS for Oracle. For more information, see Transferring files between RDS for Oracle and an Amazon EFS file system.

SQL(source-pdb)> BEGIN
DBMS_FILE_TRANSFER.put_file(
source_directory_object      => 'EXPDP_DIR',
source_file_name             => 'dms_sample_01.dmp',
destination_directory_object => 'DATA_PUMP_DIR',
destination_file_name        => 'dms_sample_01.dmp',
destination_database         => 'to_target_rds');
END;
/

PL/SQL procedure successfully completed.

Import the schema in the target RDS for Oracle DB instance

Use the Oracle Data Pump Import utility to import the schema in the target DB instance to perform the initial data load. For more information on using Oracle Data Pump on an RDS for Oracle DB instance, see Importing using Oracle Data Pump.

SQL(target-pdb)> DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode  => 'SCHEMA',
job_name  => 'import_dms_sample_job'
);
DBMS_DATAPUMP.ADD_FILE(
handle    => v_hdnl,
filename  => 'dms_sample_%U.dmp',
directory => 'DATA_PUMP_DIR',
filetype  => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle    => v_hdnl,
filename  => 'import_dms_sample.log',
directory => 'DATA_PUMP_DIR',
filetype  => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''DMS_SAMPLE'')');
DBMS_DATAPUMP.SET_PARALLEL(v_hdnl,8);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
PL/SQL procedure successfully completed.

Data Pump jobs are started asynchronously. You can view the contents of the import log by using the rdsadmin.rds_file_util.read_text_file procedure:

SQL(target-pdb)> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file
('DATA_PUMP_DIR','import_dms_sample.log'));

After you complete the data import to the target RDS for Oracle DB instance, re-enable automated backups and archive logging on the target DB instance.

Enable automated backups of your target RDS for Oracle DB instance by setting the backup retention period to a positive nonzero value, as shown in the following sketch. For more information, see Enabling automated backups.
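A sketch with the AWS CLI, using a placeholder instance identifier and a 7-day retention window:

# sketch: re-enable automated backups on the target instance
[]$ aws rds modify-db-instance --db-instance-identifier orcl-xxxxx \
    --backup-retention-period 7 --apply-immediately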

Enable archive logging on the target DB instance:

-- set archive log retention
begin
rdsadmin.rdsadmin_util.set_configuration(
name  => 'archivelog retention hours',
value => '<provide retention hours>');
end;
/
commit;

-- get archive details
set serveroutput on
EXEC rdsadmin.rdsadmin_util.show_configuration;

Start change data capture

You have already started the Extract process to capture the transactions occurring on the source database; Oracle GoldenGate accumulates them in trail files until you start the Replicat process to apply them to the target database. Oracle GoldenGate supports different types of Replicat. Refer to Which Replicat to choose to understand the suitable type based on your use case. In this post, you configure Integrated Replicat for ongoing replication using Oracle GoldenGate.

Create and start Replicat

In this step, you create and start the Replicat process.

Add Replicat

-- connect to Oracle GoldenGate Administration server
[hub]$ cd $OGG_HOME/bin

[hub bin]$ ./adminclient

OGG (not connected)> connect https://xxxx.amazonaws.com:7821 deployment ogg21c as oggadmin
Password for 'oggadmin' at 'https://xxxx.amazonaws.com:7821/ogg21c':

-- login into target PDB1
OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

-- add replicat
OGG (xx ogg21c as target_pdb1)> ADD REPLICAT rep_pdb1, INTEGRATED, EXTTRAIL pdb1/lt, CHECKPOINTTABLE ggadmin.gg_checkpoint
2024-01-20T20:16:29Z INFO OGG-08100 Integrated Replicat added.

Edit Replicat Parameters

-- edit replicat parameters
OGG (xx ogg21c as target_pdb1)> edit params rep_pdb1
2024-01-20T20:17:29Z INFO OGG-10183 Parameter file REP_PDB1.prm passed validity check.

-- add the following parameters in the Replicat parameter file and save
REPLICAT rep_pdb1
USERIDALIAS target_pdb1
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP pdb1.dms_sample.*, TARGET pdb1.dms_sample.*;

-- view replicat parameters
OGG (xx ogg21c as target_pdb1)> view params rep_pdb1
REPLICAT rep_pdb1
USERIDALIAS target_pdb1
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP pdb1.dms_sample.*, TARGET pdb1.dms_sample.*;

Start Replicat

You used the PREPARECSN option while enabling supplemental logging on the DMS_SAMPLE schema and Oracle Data Pump for the initial data load, so you can start Replicat without specifying positioning parameters (for example, AFTERCSN or ATCSN).

-- login into target PDB1
OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

-- start replicat
OGG (xx ogg21c as target_pdb1)> START REPLICAT rep_pdb1
2024-01-21T00:36:27Z  INFO    OGG-00975  Replicat group REP_PDB1 starting.
2024-01-21T00:36:27Z  INFO    OGG-15445  Replicat group REP_PDB1 started.

-- check process status
OGG (xx ogg21c as target_pdb1)> info all
Program     Status      Group       Type             Lag at Chkpt  Time Since Chkpt
EXTRACT     RUNNING     EXT_PDB1    INTEGRATED       00:00:00      00:00:05
REPLICAT    RUNNING     REP_PDB1    INTEGRATED       00:00:00      00:00:00

Monitor replication

Now you can test DDL (if enabled) and DML replication by creating sample tables and inserting dummy data. You can also compare row counts between the source and target tables. Use the following commands to monitor replication using the admin client:

OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

OGG (xx ogg21c as target_pdb1) > info all

OGG (xx ogg21c as target_pdb1) > info rep_pdb1 detail

OGG (xx ogg21c as target_pdb1) > lag replicat rep_pdb1

OGG (xx ogg21c as target_pdb1) > view report rep_pdb1

Log in to the target pluggable database using the ggadmin database user account and check the lag through the heartbeat table created in the target database:

[]$ sqlplus ggadmin@target_pdb1
SQL(target-pdb)>set lines 1000
col local_database format a20
col current_local_ts format a30
col remote_database format a20
col incoming_path format a60
col heartbeat_received_ts format a30

SQL(target-pdb)>select local_database, current_local_ts, remote_database, incoming_path,
incoming_lag from gg_lag;
LOCAL_DATABASE       CURRENT_LOCAL_TS               REMOTE_DATABASE      INCOMING_PATH                INCOMING_LAG
-------------------- ------------------------------ -------------------- ---------------------------- ------------
RDSCDB:PDB1          21-JAN-24 02.26.58.075051 AM   orcl:PDB1            EXT_PDB1 ==> REP_PDB1        5.268496

You can also use the Oracle GoldenGate Performance Metrics Service for real-time monitoring of the configured Oracle GoldenGate processes.

Set up automated tasks to purge trail files

To purge trail files when Oracle GoldenGate has finished processing them, set up automated purge tasks. This mitigates excessive disk space consumption on the Oracle GoldenGate hub. Follow the documentation to add a Purge Trail task.

Rollback Plan

Any migration plan is incomplete without a proper rollback strategy. In this post, you use the RDS for Oracle 19c CDB architecture (multi-tenant) as the target database. An RDS for Oracle 19c CDB architecture (single-tenant or multi-tenant) database does not support configuring Oracle GoldenGate Integrated Extract. One rollback strategy is to configure reverse replication from RDS for Oracle CDB architecture (single-tenant or multi-tenant) to the source Oracle database using AWS DMS Binary Reader for change data capture (CDC). This requires setting up an AWS DMS CDC-only task to capture the changes from the time of application cutover and replicate them to the source on-premises database. For more information, see Working with an AWS-managed Oracle database as a source for AWS DMS. This keeps the original database in sync with your RDS for Oracle instance, so you can fail back in case of issues.

Application cutover to Amazon RDS for Oracle

Complete the following steps to cut over your application:

Stop the application to make sure that you don’t have transactions on the source database.
After new connections and transactions are no longer allowed on the source database, monitor the database until active transactions complete. The following query returns zero rows on an idle database:

SQL(source-pdb)> select XIDUSN, XIDSLOT, XIDSQN, STATUS, USED_UBLK, USED_UREC
from v$transaction where RECURSIVE='NO';

no rows selected

Monitor Extract to make sure it has caught up with extracting the outstanding transactions. When Extract is no longer moving forward, you can stop it.

-- login into source PDB1
OGG (xx ogg21c)> dblogin useridalias source_pdb1
Successfully logged into database.

-- stop extract
OGG (xx ogg21c as source_pdb1)> stop extract ext_pdb1
2024-01-22T00:36:27Z  INFO    OGG-08100  Sending STOP request to Extract group EXT_PDB1.
2024-01-22T19:36:40Z  INFO    OGG-02964  Extract group EXT_PDB1 is down (gracefully).

After Extract has been stopped, you must wait for Replicat to apply the outstanding trail file data. Use the SEND REPLICAT command to check if Replicat has applied the outstanding trail file data:

-- login into target PDB1
OGG (xx ogg21c)> dblogin useridalias target_pdb1
Successfully logged into database.

-- send replicat
OGG (xx ogg21c as target_pdb1)> SEND REPLICAT rep_pdb1, LOGEND
Sending LOGEND request to Replicat group REP_PDB1 …
YES

When Replicat has applied all the trail file data, the value YES is returned.

Now that the replicated data is applied to the target database, stop the Replicat:

-- stop replicat
OGG (xx ogg21c as target_pdb1)> stop replicat REP_PDB1
2024-01-22T00:56:27Z  INFO    OGG-08100  Sending STOP request to Replicat group REP_PDB1.
2024-01-22T19:57:25Z  INFO    OGG-02965  Replicat group REP_PDB1 is down (gracefully).

Reset the sequences in the target using the source values. Because you haven't enabled Oracle GoldenGate support for Oracle sequences during CDC from the source database, generate the latest sequence values from the source and apply them on the target database to help avoid sequence value inconsistencies. Run the following query in the source database to generate the DDL to reset sequences in the target database, then run the generated DDL statements in the target database.

SQL(source-pdb)> SELECT 'ALTER SEQUENCE '||sequence_owner||'.'||sequence_name||' RESTART START WITH '
||(last_number+increment_by)||';'
FROM dba_sequences
WHERE sequence_owner IN ('DMS_SAMPLE');

Stop the source database.
Modify the RDS for Oracle DB instance to enable Multi-AZ (if required).
Modify job_queue_processes in the parameter group associated with the target DB instance to the appropriate value as configured in the source database instance.
Update the application configuration or DNS CNAME records to point to the target RDS for Oracle DB instance endpoint.
Before you start the application, if you are configuring reverse replication as a fallback strategy, start the Extract on RDS for Oracle and the Replicat on the original source Oracle database.
Start the application.

Clean up

As part of this migration, you made changes in the source database, and deployed Amazon EC2 and Oracle GoldenGate resources on Amazon EC2 and Amazon RDS for Oracle to replicate the data from the on-premises database. Be sure to remove any resources and configurations you no longer need.

Conclusion

In this post, we discussed the steps involved in migrating an on-premises multitenant Oracle database to an RDS for Oracle CDB architecture (multi-tenant) using Oracle Data Pump. We also used GoldenGate Microservices Architecture for ongoing replication to reduce downtime. With the complexity involved in database migrations, we highly recommend testing the migration steps in non-production environments prior to making changes in production.

We welcome your feedback. If you have any questions or suggestions, leave them in the comments section.

About the Authors

Alok Srivastava is a Senior Database Consultant and Data Architect at AWS, specializing in database migration and modernization programs. Leveraging his expertise in both traditional and cutting-edge technologies, he guides AWS customers and partners through their journey to the AWS Cloud. Alok’s role encompasses not only database solutions but also the integration of GenAI to enhance data-driven insights and innovation.

Vishal Patil is a Senior Database Specialist Solutions Architect with Amazon Web Services. Vishal helps customers migrate and optimize Oracle workloads in AWS.

Pramod Yadav is a Cloud Support Database Engineer at AWS. He has been with AWS for almost 3 years. With over 10 years of experience in database engineering and infrastructure architecture, Pramod is also a subject matter expert in AWS DMS. With a focus on providing excellent customer experiences in the AWS Cloud, he works with external customers on a variety of scenarios, such as troubleshooting Amazon RDS infrastructure and assisting with AWS DMS migrations.
