Adopting a lift-and-shift (rehost) migration approach can be a viable option for time-sensitive scenarios, particularly when transitioning from an on-premises data center environment. However, it’s crucial to ensure that the chosen migration strategy aligns with your organization’s overall cloud adoption strategy to realize the long-term benefits of cloud-native architectures. For many applications, a combination of lift and shift and other migration approaches, such as re-platforming or refactoring, may yield optimal results. Each workload migration project should be carefully evaluated to determine the most appropriate strategy based on the specific requirements and constraints of the application and organization.
Organizations often migrate their SQL Server workloads to managed database services like Amazon Relational Database Service (Amazon RDS) for SQL Server to alleviate the burden of infrastructure management and operational overhead. Amazon RDS offers several advantages, including automated backups, high availability, and scalability, significantly reducing the complexities of managing SQL Server workloads. In this post, we outline common challenges encountered during the SQL Server migration assessment phase and suggest effective solutions to expedite this process, particularly for transitioning to managed platforms such as Amazon RDS for SQL Server and Amazon RDS Custom for SQL Server.
Migration assessment challenges
One of the challenges lies in the manual, time-consuming nature of a SQL Server migration assessment. It is imperative to identify which of the features in use are compatible with Amazon RDS; Amazon RDS clearly documents unsupported features and those with limited support. During the assessment phase, a comprehensive inventory of the SQL Server features used in your environment is essential, and recognizing features with no or limited support is a critical step for both database administrators and system administrators.
Another challenge revolves around selecting the appropriate Amazon RDS compute and storage types. Amazon RDS offers a range of instance types and storage options to cater to various database performance and capacity requirements. Choosing the right RDS DB instance involves matching it with your on-premises SQL Server instance’s CPU and memory specifications. Although the diverse selection offered by Amazon RDS provides flexibility, correctly sizing your SQL Server workload is crucial for effective cost management. This process involves analyzing your current workload, particularly peak and average resource utilization, to accurately determine performance requirements.
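As a quick illustration of the sizing inputs involved, the following PowerShell sketch gathers the CPU and memory specifications of an on-premises host so they can be compared against RDS DB instance classes. It’s a minimal sketch: the host name is a placeholder, and it assumes WinRM access to the target machine.

```powershell
# Minimal sketch: capture the CPU and memory specs of an on-premises SQL Server
# host to compare against RDS DB instance classes (vCPU and memory).
# Assumes WinRM access; $computer is a hypothetical host name.
$computer = 'SQLHOST01'

$cpu = Get-CimInstance -ClassName Win32_Processor      -ComputerName $computer
$sys = Get-CimInstance -ClassName Win32_ComputerSystem -ComputerName $computer

[pscustomobject]@{
    ComputerName      = $computer
    LogicalProcessors = ($cpu | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
    TotalMemoryGB     = [math]::Round($sys.TotalPhysicalMemory / 1GB, 1)
}
```

Keep in mind that raw specifications alone aren’t enough; the utilization analysis described later in this post is what prevents over-provisioning.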
RDSTools
RDSTools is a suite of PowerShell-based planning tools designed for sizing and cost-optimization during SQL Server migration to AWS. RDSTools compiles a detailed inventory of your SQL Server environment, including CPU and memory usage, and conducts a compatibility and sizing assessment for AWS, particularly for Amazon RDS for SQL Server.
This solution accelerates assessment and planning by automating the following tasks:
Capture a detailed SQL Server inventory, including virtual processor, memory, and storage specifications.
Perform Amazon RDS compatibility assessments and report unsupported features.
Recommend suitable platforms based on assessed resources, including Amazon RDS, RDS Custom, or Amazon Elastic Compute Cloud (Amazon EC2).
Suggest appropriate RDS DB instance types based on SQL Server utilization metrics.
Conduct read/write analyses of individual databases for Amazon ElastiCache recommendations. Amazon ElastiCache is a fully managed, in-memory data store that can significantly improve the performance of web applications by caching frequently accessed data. By offloading read and write operations from the primary database to the in-memory cache, ElastiCache can help reduce the load on the database and improve response times.
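To make the read/write analysis concrete, the following sketch computes a per-database read percentage from sys.dm_io_virtual_file_stats. It is a simplified stand-in for the tool’s analysis, not its actual implementation, and assumes the SqlServer PowerShell module and a placeholder instance name.

```powershell
# Sketch: per-database read/write breakdown since the last SQL Server restart.
# Read-heavy databases are often good candidates for a caching layer.
Import-Module SqlServer

$query = @"
SELECT DB_NAME(database_id) AS database_name,
       SUM(num_of_reads)    AS total_reads,
       SUM(num_of_writes)   AS total_writes,
       CAST(100.0 * SUM(num_of_reads)
            / NULLIF(SUM(num_of_reads) + SUM(num_of_writes), 0)
            AS DECIMAL(5, 2)) AS read_pct
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
ORDER BY read_pct DESC;
"@

Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Query $query | Format-Table -AutoSize
```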
RDSTools offers support for all SQL Server versions and editions from SQL Server 2008 onwards.
Note that the tool requires SQL Server system administrator login credentials and is designed to be lightweight, ensuring minimal impact on production environments.
The tool consists of two main components: RDS Discovery and SQLServerAssessment.
RDS Discovery
The RDS Discovery tool is a lightweight utility that scans a fleet of on-premises SQL Server or EC2 instances. It automatically assesses over 20 features, validates whether each enabled feature is supported on Amazon RDS, and generates a comprehensive report with recommendations to migrate to Amazon RDS, RDS Custom, or Amazon EC2.
You can perform the initial assessment with RDS Discovery:
Gather a detailed SQL Server inventory that includes SQL Server version, edition, features, and high availability configuration such as failover cluster instances (FCIs) and Always On availability groups.
Assess Amazon RDS compatibility.
Identify SQL Server Enterprise edition features in use.
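For context, the following sketch shows the kind of inventory query that RDS Discovery automates, using documented SERVERPROPERTY values. It is illustrative only; the instance name is a placeholder, and the actual tool checks many more features.

```powershell
# Sketch: basic version, edition, and HA inventory for a single instance.
# The SERVERPROPERTY values used here are documented SQL Server properties.
$instance = 'SQLHOST01'   # placeholder

Invoke-Sqlcmd -ServerInstance $instance -Query @"
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('IsClustered')    AS is_fci,           -- failover cluster instance
       SERVERPROPERTY('IsHadrEnabled')  AS is_hadr_enabled;  -- Always On availability groups
"@
```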
SQLServerAssessment
The SQLServerAssessment Tool (SSAT) streamlines the evaluation of your on-premises SQL Server workloads to determine the system utilization needed for proper sizing on Amazon RDS. SSAT efficiently measures CPU, memory, IOPS, and throughput usage over a specified time frame, providing tailored suggestions for right-sizing your SQL Server on AWS. This versatile tool can assess both single and multiple SQL Server instances.
Before you start using SSAT, it’s important to familiarize yourself with how the tool interfaces with your SQL Server. Its primary goal is to gauge the level of system utilization needed for a smooth transition to Amazon RDS for SQL Server. SSAT collects a variety of performance metrics, including CPU utilization, memory usage, IOPS, and throughput, all within a predetermined period. This data is then used to formulate tailored recommendations for your RDS for SQL Server instance.
To accomplish this, SSAT employs dynamic management views (DMVs), a robust feature for capturing a wide spectrum of metrics specifically at the database level. This approach provides a focused and accurate assessment, minimizing any potential noise that might arise when collecting data at the server level.
The following table provides detailed explanations of the DMVs used by the tool to gather metrics.
| Metrics | DMV | Columns | Comments/Notes |
| --- | --- | --- | --- |
| CPU | sys.dm_os_ring_buffers | SQLServerCPUUtilization, SystemIdle, OtherProCpuUT | SQL Server CPU usage %; system idle %; other processes % |
| Memory | sys.dm_os_performance_counters, sys.dm_os_sys_memory, sys.dm_os_sys_info | PLE, committed_kb, committed_target_kb, total_physical_memory_kb, available_physical_memory_kb | Page life expectancy; memory committed within the SQL Server memory manager; the memory that the SQL Server memory manager can consume; total physical memory available to the operating system (KB); the size of the currently available physical memory (KB) |
| Disk IOPS | sys.dm_io_virtual_file_stats | Read, Write, BRead, BWrite | Read and write IOPS; bytes read and bytes written |
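As an illustration of how these DMVs are used, the following sketch reads CPU utilization from sys.dm_os_ring_buffers with a widely used XML-parsing pattern. It is representative rather than the tool’s exact query, and assumes the SqlServer PowerShell module with a placeholder instance name.

```powershell
# Sketch: recent CPU samples from the scheduler monitor ring buffer.
# The other-processes % can be derived as 100 - sql_cpu_pct - system_idle_pct.
Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Query @"
SELECT TOP (10)
       record.value('(./Record/@id)[1]', 'int') AS record_id,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS system_idle_pct
FROM (SELECT CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE '%<SystemHealth>%') AS rb
ORDER BY record_id DESC;
"@
```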
Performance utilization metrics
Upon its first run, the assessment tool creates an agent job dedicated to capturing performance metrics, which are temporarily stored in a staging table in MSDB. After the data collection phase is complete, the tool transfers the data from the staging table to a CSV file. A key setting in this process is the collection time, which defaults to 60 minutes. However, for a more thorough analysis of the collected metrics, it is advisable to extend the running time to 4–7 days. The tool is designed to start the agent job at 1-minute intervals, maintaining this frequency until the specified collection time is reached.
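If you want to confirm the job and its schedule yourself, you can query the standard msdb job catalog tables along these lines; the job name filter below is a placeholder, because the actual job name is defined by the tool (check the GitHub repo).

```powershell
# Sketch: list agent jobs whose name suggests metric collection, with their
# schedule interval. freq_subday_interval is in minutes when freq_subday_type = 4.
Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Query @"
SELECT j.name, j.enabled, s.freq_subday_type, s.freq_subday_interval
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE j.name LIKE N'%collect%';  -- placeholder filter
"@
```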
During this collection process, it creates five tables in the MSDB database to store the collected metrics:
Sql_CollectionStatus – This table maintains information about the collection job, including its start time, finish time, and status.
Sql_CPUCollection – In this table, three crucial metrics are collected: SQL Server CPU utilization, system idle, and other process utilization. All three metrics are captured as percentages.
Sql_MemCollection – This table stores various memory-related metrics, including SQL Server memory usage, SQL maximum memory target, OS total memory, OS available memory, and page life expectancy.
Sql_DBIO – This table records user database IOPS and throughput metrics between each collection time, specifically capturing the delta changes.
Sql_DBIOTotal – Here, the tool captures the total user database I/O, encompassing both read and write operations.
By organizing this data into these dedicated tables, the tool ensures efficient storage and direct access to crucial performance metrics, enabling you to analyze and optimize your SQL Server environments effectively.
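For example, you can peek at the collected data directly while the job runs. The table names below come from the tool; because their column layouts aren’t documented here, the sketch simply selects all columns.

```powershell
# Sketch: inspect the collection status and a few recent CPU samples in msdb.
Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Query @"
SELECT * FROM msdb.dbo.Sql_CollectionStatus;      -- job start/finish time and status
SELECT TOP (5) * FROM msdb.dbo.Sql_CPUCollection; -- recent CPU utilization samples
"@
```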
The lifecycle of the agent job is managed through the tool’s command-line switches. To learn more about these switches, refer to the GitHub repo.
Post-assessment steps
The following are actionable steps to consider after the assessment:
Follow best practices – Make sure you are following SQL Server best practices. For more information, refer to Best practices for working with SQL Server.
Optimize cost by right-sizing – Evaluate your resource utilization to ensure that your SQL Server deployments are appropriately sized for your workload. For more information, refer to Tips for Right Sizing.
Optimize your performance by using ElastiCache – Enhance your application performance by implementing caching strategies with ElastiCache. For more information, refer to Using AWS Services for distributed caching in ASP.NET Core Web Applications.
Optimize cost by consolidation – Consider consolidating your SQL Server instances and databases into fewer RDS DB instances where possible to reduce overhead and improve resource utilization.
Conclusion
In this post, we explored common challenges encountered during the SQL Server migration assessment phase and provided effective solutions to streamline and expedite this process. If you have any questions or suggestions, leave a comment.
About the Authors
Bachar Rifai is a senior Partner Solutions Architect with a focus on databases at AWS. In this capacity, Bachar collaborates with AWS Partners, offering them expert advice and technical support for their database projects. His goal is to enhance the effectiveness and value of their solutions leveraging AWS technologies.
Sudhir Amin is a Sr. Solutions Architect at Amazon Web Services. In his role based out of New York, he provides architectural guidance and technical assistance to enterprise customers across different industry verticals, accelerating their cloud adoption. He is a big fan of snooker, combat sports such as boxing and UFC, and loves traveling to countries with rich wildlife reserves where he gets to see the world’s most majestic animals up close.
Sudarshan Roy is a Senior Database Specialist Cloud Solution Architect with the World Wide AWS Database Services Organization (WWSO). He has led large-scale database migration and modernization engagements for enterprise customers, and he is passionate about solving complex migration challenges while moving database workloads to the AWS Cloud.