Sunday, February 5, 2023

Integrate Amazon RDS for Oracle with Amazon EFS

As customers migrate their Oracle databases to Amazon Relational Database Service (Amazon RDS) for Oracle, they often benefit from having a shared file system available on their Oracle database systems, either to share files between the database and application servers or to act as a staging location for backups, data loads, and more. Amazon RDS for Oracle now supports integration with Amazon Elastic File System (Amazon EFS), which provides a simple, serverless, set-and-forget elastic file system that lets you share file data without provisioning or managing storage. It’s built to scale on demand to petabytes without disrupting applications.

Amazon RDS for Oracle with Amazon EFS is well-suited to support a broad spectrum of use cases, such as the following:

Share a file system between applications and multiple database servers
Use it as an upload location for the native dumps and backups required for migration
Store and share RMAN backups and recovery logs without allocating additional storage space on the server
Use Oracle utilities such as UTL_FILE to read and write files

Benefits of integrating an Amazon RDS for Oracle instance with Amazon EFS

Once Amazon RDS for Oracle has been integrated with Amazon EFS, you can transfer files between your Amazon RDS for Oracle DB instance and an Amazon EFS file system. This integration provides the following benefits:

You can export and import Oracle Data Pump files directly between your Amazon RDS for Oracle DB instance and an Amazon EFS file system. You don’t need to copy the dump files onto Amazon RDS for Oracle storage; these operations are performed directly against the Amazon EFS file system.
Migration of data is faster compared to migration over a database link. You can use an Amazon EFS file system mounted on your Amazon RDS for Oracle DB instance as a landing zone for the various Oracle files required for migration or data transfer.
Using it as a landing zone avoids allocating extra storage space on the Amazon RDS instance to hold the files.
The Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage.
There are no minimum fees or setup costs, and you pay only for what you use.

In this post, we walk through a step-by-step configuration to set up Amazon EFS on an Amazon RDS for Oracle database instance. We also talk about the benefits of this integration, and the best practices to consider while doing so. Before starting, review the requirements and restrictions of Amazon EFS integration with Oracle Database.

Create an Amazon EFS file system

Your first step is to create an Amazon EFS file system:

On the Amazon EFS console, choose Create file system.
For Name, enter a name for your file system.
For Amazon Virtual Private Cloud (Amazon VPC), choose the VPC in which you have an Amazon RDS for Oracle instance deployed.
For Storage class, select Standard.
Choose Create. (By default, the file system inherits the default security group, so choose Customize during creation to select an appropriate security group.)
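If you prefer the AWS CLI, a file system can be created with a command along the following lines. This is a sketch; the creation token, tag value, and Region are placeholders to adapt to your environment:

```shell
# Sketch: create an EFS file system with the AWS CLI.
# The creation token, Name tag, and Region are placeholders.
aws efs create-file-system \
    --creation-token rds-oracle-efs-demo \
    --performance-mode generalPurpose \
    --tags Key=Name,Value=rds-oracle-efs \
    --region us-east-1
```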

Configure Amazon EFS file system permissions

After you create a new Amazon EFS file system, by default only the root user (UID 0) has read, write, and execute permissions. For other users to modify the file system, the root user must explicitly grant them access. You must mount the Amazon EFS file system locally on an Amazon Elastic Compute Cloud (Amazon EC2) instance and set up fine-grained permissions to allow Amazon RDS instances to read and write files on the Amazon EFS file system. For example, you can run chmod 777 against the Amazon EFS file system root to grant other users permission to write to this directory. Complete the following steps:

Select the security group of the Amazon EC2 instance where the Amazon EFS file system will be mounted.

Edit the inbound rules, add port 2049 (the default port for NFS), and choose Save rules.

Now attach this security group to the Amazon EFS mount target in each AZ. On the Network tab for the Amazon EFS file system, select your mount target and choose Manage.

Connect to the Amazon EC2 instance and mount your file system on the Amazon EC2 instance:

sudo mount -t efs -o tls fs-05cef2152acda9175:/ /efsdir

Make sure an NFS client (such as the amazon-efs-utils package, which provides the efs mount helper used above) is installed on your Amazon EC2 instance.

Add an entry to /etc/fstab to make the mount persistent across reboots.
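For example, assuming the amazon-efs-utils mount helper is installed, an /etc/fstab entry for the file system mounted above might look like the following:

```shell
# Append an fstab entry so /efsdir is remounted automatically at boot.
# Format: <file system ID>:/ <mount point> efs <options> 0 0
echo 'fs-05cef2152acda9175:/ /efsdir efs _netdev,tls 0 0' | sudo tee -a /etc/fstab
```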

Create a directory on the mounted EFS file system:

sudo mkdir /efsdir/datapump

Change permissions so that Amazon RDS for Oracle can write to this directory:

sudo chmod -R 777 /efsdir

Create an option group

Now you create an Amazon RDS option group:

On the Amazon RDS console, choose Option groups in the navigation pane.

Choose Create group.
For Name, enter a name for your group (for example, efs-integration-option-grp).
For Description, enter a brief description (for example, EFS integration with RDS Oracle).
For Engine, choose oracle-ee.
For Major Engine Version, choose 19.
Choose Create.

On the Option groups page, select the option group that you created and choose Add option.

For Option name, choose EFS_INTEGRATION.
Under Option settings, for EFS_ID, enter the ID of the file system that you created earlier.
Choose Add option.
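The same option group can be created and configured with the AWS CLI. A sketch, reusing the names from the console steps above and the file system ID created earlier:

```shell
# Create the option group (names mirror the console example above).
aws rds create-option-group \
    --option-group-name efs-integration-option-grp \
    --engine-name oracle-ee \
    --major-engine-version 19 \
    --option-group-description "EFS integration with RDS Oracle"

# Add the EFS_INTEGRATION option, pointing at the EFS file system ID.
aws rds add-option-to-option-group \
    --option-group-name efs-integration-option-grp \
    --options "OptionName=EFS_INTEGRATION,OptionSettings=[{Name=EFS_ID,Value=fs-05cef2152acda9175}]" \
    --apply-immediately
```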

Add the option group to your Amazon RDS for Oracle instance

To add the option group to your Amazon RDS for Oracle instance, complete the following steps:

On the Amazon RDS console, choose Databases in the navigation pane.
Select your instance and choose Modify.

For Option group, choose the option group that you created.

Choose Continue and review the summary of modifications.

Choose Modify DB instance and select Apply immediately.
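Equivalently, the option group can be attached from the AWS CLI; the instance identifier below is a placeholder:

```shell
# Attach the option group to the DB instance and apply immediately.
aws rds modify-db-instance \
    --db-instance-identifier my-oracle-instance \
    --option-group-name efs-integration-option-grp \
    --apply-immediately
```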

Configure network and security group permissions for Amazon RDS for Oracle integration with Amazon EFS

For Amazon RDS for Oracle to integrate with Amazon EFS, your DB instance must have network access to an Amazon EFS file system. Your VPC should have the enableDnsSupport attribute enabled. To allow access to the Amazon EFS file system for your DB instance, make sure the Amazon EFS file system has a mount target in each Availability Zone (AZ) and that the security group attached to the mount targets has an inbound rule allowing the Amazon RDS instance to make a TCP connection to the NFS port 2049 and receive return traffic.

On the Amazon RDS console, select your Amazon RDS for Oracle instance. Under Connectivity & security, choose the security group.

Select the security group for your Amazon RDS for Oracle instance and on the Actions menu, choose Edit inbound rules.

Add a rule allowing access on port 2049 (the default NFS port).
Choose Save.

Now attach this security group to the Amazon EFS mount target in each AZ. On the Network tab for the Amazon EFS file system, select your mount target and choose Manage.

Add the security group for Amazon RDS for Oracle to the mount point in each AZ.
Choose Save.
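The inbound NFS rule can also be added with the AWS CLI. A sketch with placeholder security group IDs, where the mount targets' group allows traffic from the RDS instance's group:

```shell
# Allow the RDS instance's security group to reach NFS (TCP 2049)
# on the EFS mount targets. Both group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol tcp \
    --port 2049 \
    --source-group sg-0bbbbbbbbbbbbbbbb
```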

Transfer files between Amazon RDS for Oracle and an Amazon EFS file system

To transfer files between an Amazon RDS for Oracle instance and an Amazon EFS file system, you must create an Oracle directory on Amazon RDS for Oracle. See the following code.

The following example commands are run as the admin user; other users may need appropriate privileges to run them. Note that the file system path must begin with /rdsefs-.

BEGIN
  rdsadmin.rdsadmin_util.create_directory_efs(
    p_directory_name => 'DATA_PUMP_DIR_EFS',
    p_path_on_efs    => '/rdsefs-fs-05cef2152acda9175/datapump');
END;
/
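After creating the directory, you can confirm that it exists and points at the EFS path with a standard dictionary query, for example:

```sql
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DATA_PUMP_DIR_EFS';
```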

1. Use Oracle Data Pump to export a table

The following PL/SQL code example shows how to export the table EMP in the HR_USER schema; the dump file is created in the Oracle directory DATA_PUMP_DIR_EFS, which resides on Amazon EFS:
declare
  l_dp_handle       number;
begin
  -- Open a table export job.
  l_dp_handle := dbms_datapump.open(
    operation   => 'EXPORT',
    job_mode    => 'TABLE',
    remote_link => NULL,
    job_name    => 'HR_USER_TBL_EXP',
    version     => 'LATEST');
  -- Specify the dump file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'hr_user.dmp',
    directory => 'DATA_PUMP_DIR_EFS');
  -- Specify the log file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'hr_user.log',
    directory => 'DATA_PUMP_DIR_EFS',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Specify the table to be exported, filtering by schema and table name.
  dbms_datapump.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''HR_USER''');
  dbms_datapump.metadata_filter(
    handle => l_dp_handle,
    name   => 'NAME_EXPR',
    value  => '= ''EMP''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/

This creates the file hr_user.dmp in the Oracle directory DATA_PUMP_DIR_EFS, which you can verify from the Amazon EC2 instance where Amazon EFS is mounted.

2. Use UTL_FILE to write and read from files

To write and read from files, use the following code:

declare
  fhandle  utl_file.file_type;
begin
  fhandle := utl_file.fopen(
                'DATA_PUMP_DIR_EFS'  -- File location
              , 'test_file.txt'      -- File name
              , 'w'                  -- Open mode: w = write.
  );
  utl_file.put(fhandle, 'Hello world!' || CHR(10));
  utl_file.put(fhandle, 'Hello again!');
  utl_file.fclose(fhandle);
exception
  when others then
    dbms_output.put_line('ERROR: ' || SQLCODE || ' - ' || SQLERRM);
    raise;
end;
/
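Because this section covers reading as well as writing, the following is a minimal read-back of the same file, assuming test_file.txt was created as above:

```sql
declare
  fhandle  utl_file.file_type;
  fline    varchar2(32767);
begin
  -- Open the file created above in read mode.
  fhandle := utl_file.fopen('DATA_PUMP_DIR_EFS', 'test_file.txt', 'r');
  loop
    utl_file.get_line(fhandle, fline);  -- raises NO_DATA_FOUND at end of file
    dbms_output.put_line(fline);
  end loop;
exception
  when no_data_found then
    utl_file.fclose(fhandle);
end;
/
```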

You can check from the Amazon EC2 instance that the file was created.

3. Use Oracle RMAN to back up your Amazon RDS for Oracle database to the shared EFS file system

The following section lists the steps to back up your Amazon RDS for Oracle database using Oracle Recovery Manager (RMAN) and store the backup pieces in the Amazon EFS file system.

Create a directory on the OS and change its permissions from the Amazon EC2 instance where you have mounted the Amazon EFS file system:

sudo mkdir /efsdir/rman
sudo chmod -R 777 /efsdir

Create an Oracle directory:

BEGIN
  rdsadmin.rdsadmin_util.create_directory_efs(
    p_directory_name => 'RMAN_DIR_EFS',
    p_path_on_efs    => '/rdsefs-fs-05cef2152acda9175/rman');
END;
/

This creates a database directory named RMAN_DIR_EFS to store the RMAN backups. The file system path value for the p_path_on_efs parameter is prefixed with the string /rdsefs-<EFS file system ID>.

Make sure the archive logs are retained on the RDS database instance for as long as Oracle RMAN requires them. In our example, we use a 2-hour archive log retention:

begin
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'archivelog retention hours',
    value => '2');
end;
/
commit;

Run the RMAN backup for the Amazon RDS for Oracle instance. Note that the backup location is specified as the Oracle directory pointing to the Amazon EFS file system. Refer to the Amazon RDS for Oracle documentation for common RMAN backup tasks.

BEGIN
  rdsadmin.rdsadmin_rman_util.backup_database_full(
    p_owner               => 'SYS',
    p_directory_name      => 'RMAN_DIR_EFS',
    p_parallel            => 4,
    p_section_size_mb     => 10,
    p_tag                 => 'FULL_DB_BACKUP',
    p_rman_to_dbms_output => FALSE);
END;
/

Check the RMAN backup logs.

RMAN log files are stored in the BDUMP directory. Use the following query to get the log file name:

SELECT * FROM table(rdsadmin.rds_file_util.listdir('BDUMP')) order by mtime;

Run the following query to open the log file and check for any errors, replacing the file name with the one obtained from the previous query:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP', 'rds-rman-backup-database-2023-01-04.14-01-58.933233000.txt'));

Log in to the EC2 instance where the same Amazon EFS file system is mounted to confirm that the RMAN backup pieces are stored in the Amazon EFS file system.

Conclusion

In this post, we showed how to integrate and configure Amazon EFS with Amazon RDS for Oracle. You can use this feature to share data between databases and application servers, and among database servers. We also demonstrated how to perform RMAN backups and native Data Pump exports from Amazon RDS for Oracle without consuming database storage on the RDS instance.

For more information about this feature, refer to Amazon EFS integration.

About the Authors

 Devinder Singh is an SA Manager with AWS. He has over 25 years of experience working with various database and storage technologies. Devinder focuses on helping customers on their journey to AWS and helping them architect highly available and scalable database solutions based on various relational and NoSQL AWS Database services. When not working with customers, you can always find Devinder enjoying long hikes or biking.

Manash Kalita is an AWS Senior Database Specialist Solutions Architect for APJ, having extensive experience in Enterprise Cloud Architecture.

Arnab Saha is a Senior Database Specialist Solutions Architect at AWS. Arnab specializes in Amazon RDS, Amazon Aurora and Amazon Elastic Block Store. He provides guidance and technical assistance to customers thus enabling them to build scalable, highly available and secure solutions in AWS Cloud.
