Oracle Multitenant is a container-based database architecture designed for the cloud. It enables an Oracle database to function as a multitenant container database (CDB), in which application databases are created as pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and non-schema objects that appears to an application or tenant as a self-contained database. Many PDBs can be created inside a single CDB.
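As a quick illustration (the PDB name APPPDB below is a placeholder, not from the customer's environment), connecting to the CDB root with SQL*Plus shows the container you are in and the PDBs plugged into it:

SQL> SHOW CON_NAME
CON_NAME
------------------------------
CDB$ROOT

SQL> SHOW PDBS
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 APPPDB                         READ WRITE NO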
While working with a customer migrating their on-premises Oracle databases to Amazon Relational Database Service (Amazon RDS) for Oracle, the customer's database administrators (DBAs) experienced non-responsive connections from AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) to the on-premises Oracle Real Application Clusters (RAC) 19c database. The necessary user accounts with the required roles and privileges had already been created on the on-premises Oracle database, which was running RAC with a multitenant architecture. The hostname and port number provided by the customer DBA were correct, and we had the necessary certificates from the DBAs for SSL connectivity.
The customer network team confirmed the firewall had been opened on the given database port for the SSL connectivity from AWS to on-premises. However, the connectivity test to the database didn’t succeed, and the connections were closed after some time without any errors or warning messages.
In this post, we review the troubleshooting steps, the reason behind this issue, and the resolution. Although we worked on a RAC database to solve this problem, the troubleshooting steps and solution apply to any Oracle multitenant architecture database.
In Oracle’s multitenant architecture, each PDB inherits parameters from the CDB but can also have its own set of parameters. The LOCAL_LISTENER parameter is one that can be configured at both the CDB level and the individual PDB level, and it can point to a listener on a different port than the main listener service. When a client connects to the database through either the standard listener or the SCAN listener, the listener sends the address details of the local listener on that node back to the client, if one is configured. The client then connects to the local listener on the specified node and port, and the local listener starts a dedicated server process for the connection to the database.
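For example, reusing the placeholder PDB name APPPDB, you can check the parameter at both levels from SQL*Plus:

-- Value inherited at the CDB root
SQL> SHOW PARAMETER local_listener

-- Switch to the PDB and check the value it actually uses
SQL> ALTER SESSION SET CONTAINER = APPPDB;
SQL> SHOW PARAMETER local_listener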
Solution overview
The following architecture diagram illustrates the connectivity of applications and users from both on premises and the AWS Cloud to the application database running in the customer’s data center. Apart from AWS DMS, there are other applications running on Amazon EC2 instances that connect to this on-premises database using JDBC/ODBC connectivity. All of these connections pass through the firewall configured on the customer’s side, which allows connectivity based on predefined rule sets.
The customer DBAs had configured SCAN listeners with the Automatic Storage Management (ASM) instance for their RAC database. The status of the SCAN listener and the remote Transparent Network Substrate (TNS) entries from their existing application servers had been verified as working.
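This kind of verification is typically done with the Grid Infrastructure and listener utilities on the database hosts. The following is a generic sketch, not the customer's exact output; LISTENER_SCAN1 is the default SCAN listener name:

$ srvctl config scan_listener
$ srvctl status scan_listener
$ lsnrctl status LISTENER_SCAN1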
The protocol was set to TCPS, and the database service was configured on port 4060. An Oracle wallet held the SSL certificates, and the certificates had been shared with all the clients connecting to the database. It was a live production database in the customer data center, with multiple on-premises applications using it.
Because all of their on-premises applications and clients could connect to the database, I focused on the newly configured AWS infrastructure and tried to establish connectivity to this on-premises database from there.
The troubleshooting approach
To resolve this connectivity issue, I teamed up with the customer DBAs and took the following actions:
The customer opened an AWS Support ticket, and the support team confirmed that AWS DMS tried to open a connection but didn’t get a response.
Because AWS DMS had issues connecting to the SCAN IP due to network complexity that required resolving the on-premises DNS records of the Oracle cluster, we decided to use the physical IPs to connect to the RAC database. However, the result remained the same.
Next, we tried to connect from an EC2 instance to the database using the native Oracle client, creating an Oracle wallet and installing the SSL certificates in it. After adding the TNS entries, we tried to connect to the on-premises database using the Oracle SQL*Plus client. There was no response from the on-premises database, and after some time, the connection attempt was cancelled with the error message ORA-12170: TNS:Connect timeout occurred.
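The client-side setup on the EC2 instance looked broadly like the following sketch. All host names, paths, user names, and the service name are placeholders; only the protocol (TCPS) and the SCAN listener port (4060) reflect the environment described here.

# Create an auto-login wallet and import the trusted CA certificate (paths are placeholders)
$ orapki wallet create -wallet /opt/oracle/wallet -auto_login -pwd <wallet_password>
$ orapki wallet add -wallet /opt/oracle/wallet -trusted_cert -cert /tmp/ca_cert.pem -pwd <wallet_password>

# sqlnet.ora on the client, pointing at the wallet directory
WALLET_LOCATION = (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /opt/oracle/wallet)))

# tnsnames.ora entry pointing at the SCAN listener over TCPS
APPPDB_TCPS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCPS)(HOST = db-scan.example.com)(PORT = 4060))
    (CONNECT_DATA = (SERVICE_NAME = apppdb.example.com))
  )

# Test the connection with SQL*Plus
$ sqlplus dms_user@APPPDB_TCPS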
A tnsping from the Oracle client succeeded for all of the IP and hostname combinations, but connection attempts to the database still didn’t go through. The listener logs didn’t tell us whether the connection had actually been established to the database, because they only showed an incoming connection request from the AWS source IP to the database listener.
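Using the placeholder alias from the earlier sketch, the test and output looked similar to the following (timing is illustrative). Note that tnsping only proves the listener answered; it doesn't create a database session, which is why it can succeed while an actual connection still fails.

$ tnsping APPPDB_TCPS
[...]
OK (10 msec)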
We also ran operating system (OS) level connectivity tests to confirm that the hosts were reachable and the listener ports were open in the firewall, and these tests were successful against the physical IPs, SCAN IPs, and hostnames.
The network team could trace the TCPS connections originating from AWS and reaching the database server on the port the DBA specified. However, the connections were dropped after the handshake with the listener port, and no packets were sent back to the clients.
Using SQL*Plus, we intentionally supplied the wrong password on the connection attempt and expected the database to throw an invalid username/password error. We did this to make sure the connections were reaching the database and being authenticated. However, there was no error message from the database, and the client kept trying to establish a connection.
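With the placeholder alias and user from the earlier sketch, the test looked like this; ORA-01017 is what a reachable database returns for bad credentials:

$ sqlplus dms_user/wrong_password@APPPDB_TCPS
# Expected if the request reaches the database:
#   ORA-01017: invalid username/password; logon denied
# What we observed instead: the attempt hung and eventually failed with
#   ORA-12170: TNS:Connect timeout occurred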
At this point we concluded the connection was not even reaching the database to authenticate.
Reviewing the connection configurations
As mentioned previously, the target RAC database was configured with multitenant architecture and had a single PDB for the application. The PDB had the LOCAL_LISTENER parameter set to a text value (sapdb_listener). After reviewing the listener configuration file and the listener status, I couldn’t find any entry related to the name sapdb_listener. The following screenshot shows the values of the local_listener parameter for both the container database and the pluggable database:
While there were no entries for sapdb_listener in the listener.ora file, we found an entry for it in the tnsnames.ora file, as shown below:
According to the Oracle documentation, this local listener name is resolved through entries in the tnsnames.ora configuration file. We found an entry named sapdb_listener in the tnsnames.ora file on both RAC database hosts, and this local listener was configured on a different port (5060) than the SCAN listener. The Oracle SCAN listener always redirects connections to the local listeners.
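For illustration, a local listener alias of this kind in tnsnames.ora resolves to a plain listener address. The host name below is a placeholder; the port matches what we found (5060):

SAPDB_LISTENER =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = racnode1.example.com)(PORT = 5060))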
The SCAN listener for this database was listening on TCPS port 4060, and the local listener for the PDB was listening on TCPS port 5060. Because their applications were running in the same IP subnet as the database server, there were no firewall restrictions within their on-premises data center. However, only the SCAN listener port (4060) was open in the firewall between AWS and the target database server. This prevented the clients from connecting to the pluggable application database, which was configured with its own LOCAL_LISTENER, even though the connections could reach the container database through the SCAN listeners.
Resolution
We opened the firewall for the LOCAL_LISTENER port (5060) between AWS and the database server. Once the firewall rules were implemented, we were able to connect to the database using SQL*Plus from the EC2 instance in AWS. This time, providing the wrong password on a connection attempt threw an invalid username/password error, as expected.
After validating the database connectivity using SQL*Plus, we tested connectivity using AWS DMS and AWS SCT. Everything worked fine, and we were able to migrate the schema and data from the on-premises database. The issue was resolved with this single additional firewall rule change.
The following image shows the updated solution architecture diagram:
When working with Oracle multitenant architecture databases, find the value of the LOCAL_LISTENER parameter for each PDB and validate its settings in the tnsnames.ora file on the database server, as shown in the sketch below. When configuring firewall rules, open connectivity for the ports referenced by the LOCAL_LISTENER parameter in addition to the Oracle container and SCAN listener ports. Oracle SCAN always redirects connections to the local listeners, if they are configured.
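Once you know the alias name that LOCAL_LISTENER points to, you can resolve it on the database host to find the port(s) that also need firewall rules. This assumes the default tnsnames.ora location under the Oracle home (your environment may use TNS_ADMIN instead):

$ grep -iA3 sapdb_listener $ORACLE_HOME/network/admin/tnsnames.ora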
Conclusion
In this post, I showed how we diagnosed and resolved a connectivity issue and the ORA-12170: TNS:Connect timeout occurred error while connecting from AWS to an on-premises Oracle RAC database with a multitenant configuration. This solution applies to any service or application using native Java Database Connectivity (JDBC) or Open Database Connectivity (ODBC) that sits behind a firewall and connects to an Oracle database running in a multitenant architecture with local listeners configured. Collect the LOCAL_LISTENER details for each PDB in addition to the SCAN and standard database listener service details, and add the ports configured in LOCAL_LISTENER to the firewall rules to allow remote access. Refer to the Oracle documentation specific to your version for configuring the local listener for PDBs.
If you have any comments or questions, leave them in the comments section.
About the Author
Bala K is a Senior Database Consultant at AWS and brings along vast experience in database technologies. Starting with mainframes, Bala has worked on all modern database engines like Oracle, SQL Server, DB2, Sybase, PostgreSQL, and Amazon Redshift. At AWS, he helps empower customers with their digital transformation and database modernization, and accelerate migration of their database workloads to AWS.