Amazon RDS Cost Optimization: The Essential Guide
Last Updated: December 24, 2025
A crash course in RDS pricing
Compute Costs (The Biggest Component)
RDS instances are billed per hour or per second, with pricing based on instance size and database engine. As an example, a small db.t3.micro costs around $0.017/hour (~$12/month), while a large db.r6i.8xlarge can exceed $2.00/hour (~$1,500/month). Open-source engines like MySQL and PostgreSQL incur only AWS infrastructure costs, but SQL Server and Oracle add licensing fees on top, making them significantly more costly.
Storage Costs
RDS storage is billed per GB-month, with rates that vary by volume type (gp2, gp3, io1); see tip #5 below for how the volume type you choose affects cost.
Backup and Snapshot Costs
Automated backup storage is free up to the size of your provisioned database storage; beyond that, and for manual snapshots, AWS charges about $0.095 per GB per month (see tip #2).
Data Transfer Costs
Inbound traffic and traffic within the same Availability Zone are free, but cross-AZ, cross-region, and internet-bound transfers are billed per GB (see tip #8).
I/O Costs
gp2 and gp3 volumes include baseline IOPS in the storage price, while io1 bills per provisioned IOPS and Aurora Standard bills per million I/O requests.
How to optimize your RDS costs
#1: Rightsize Your RDS Instances
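Overprovisioned instances are the most common source of RDS waste. Compare each instance's class against its actual utilization: if CloudWatch shows CPU consistently low and plenty of FreeableMemory, you can usually step down an instance size (or move to Graviton-based classes like db.r6g for better price/performance). As a starting point, here's a minimal boto3 sketch that flags rightsizing candidates by average CPU over the past two weeks; the 40% threshold is an illustrative assumption, not an official cutoff:

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")
cw = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

for db in rds.describe_db_instances()["DBInstances"]:
    stats = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db["DBInstanceIdentifier"]}],
        StartTime=start,
        EndTime=end,
        Period=86400,          # one datapoint per day
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        continue
    avg_cpu = sum(p["Average"] for p in points) / len(points)
    if avg_cpu < 40:  # illustrative threshold; tune for your workloads
        print(f"{db['DBInstanceIdentifier']} ({db['DBInstanceClass']}): "
              f"avg CPU {avg_cpu:.1f}% over 14 days, rightsizing candidate")
```

Pair the CPU numbers with memory and I/O metrics before downsizing, since a CPU-idle instance can still be memory- or throughput-bound.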
#2: Reduce Unused and Idle Resources
Non-production RDS instances running 24/7 are a major source of wasted spend. If your dev, test, or staging databases aren’t needed outside business hours, schedule them to shut down automatically. nOps Scheduler can help automate this process, using machine learning to learn your environments and cut costs without disrupting workflows.
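If you'd rather script this yourself, a scheduled Lambda (triggered by EventBridge at, say, 7 pm on weekdays) can stop tagged non-production instances. A minimal sketch, assuming your instances carry a hypothetical environment=dev tag:

```python
import boto3

rds = boto3.client("rds")

def stop_non_prod(event=None, context=None):
    """Stop available RDS instances tagged environment=dev (hypothetical tag)."""
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
        is_dev = any(t["Key"] == "environment" and t["Value"] == "dev" for t in tags)
        if is_dev and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
            print(f"Stopping {db['DBInstanceIdentifier']}")
```

Note that AWS automatically restarts a stopped RDS instance after seven days, so a stop-only script still needs a matching start schedule, and some configurations (such as instances with read replicas) can't be stopped at all.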
Old backups and snapshots are another hidden cost driver. RDS snapshots persist until manually deleted, and AWS charges for storage at $0.095 per GB per month. Regularly clean up unneeded snapshots and backups—especially redundant automated snapshots that AWS keeps when upgrading an instance—to prevent unnecessary storage charges.
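Manual snapshots are the ones that never expire on their own, so they're the usual place to look first. A quick sketch that lists (and, if you uncomment the delete call, removes) manual snapshots older than 90 days; the retention window is an assumption to adjust for your compliance requirements:

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # assumed retention window

paginator = rds.get_paginator("describe_db_snapshots")
for page in paginator.paginate(SnapshotType="manual"):
    for snap in page["DBSnapshots"]:
        if snap.get("SnapshotCreateTime") and snap["SnapshotCreateTime"] < cutoff:
            print(f"Stale snapshot: {snap['DBSnapshotIdentifier']} "
                  f"({snap['AllocatedStorage']} GB, created {snap['SnapshotCreateTime']:%Y-%m-%d})")
            # rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])
```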
For databases that are rarely used, consider switching to Aurora Serverless or RDS on-demand stop/start, which allows you to pause and resume instances as needed instead of paying for constant uptime.
#3: Use AWS Commitments & Discount Models
If you have steady usage and a stable architecture, Reserved Instances (RIs) and Savings Plans can cut RDS costs by up to 69%. RIs lock in savings for specific instance types and regions, making them ideal for predictable, consistent workloads. Savings Plans offer similar discounts but provide more flexibility across instance families and regions, which can be useful if your workloads shift over time.
A common mistake is overcommitting and paying for unused capacity, so use AWS Cost Explorer or FinOps tools to analyze past usage before committing. Note that the Reserved Instance Marketplace only supports EC2 RIs; unused RDS Reserved Instances cannot be resold, which makes that upfront analysis even more important.
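Before you buy (or renew), it's worth pulling actual utilization from the Cost Explorer API. A minimal sketch that reports a month of RDS RI utilization; Cost Explorer must be enabled on the account, and each API call carries a small charge:

```python
import boto3

ce = boto3.client("ce")

resp = ce.get_reservation_utilization(
    TimePeriod={"Start": "2025-11-01", "End": "2025-12-01"},  # example month
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Relational Database Service"]}},
)
print(f"RDS RI utilization: {resp['Total']['UtilizationPercentage']}%")
```

Utilization consistently below ~95% usually means you're paying for reserved capacity you aren't using.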
RDS commitment management is the hard part. With RDS Reserved Instances and Database Savings Plans in the mix, keeping coverage correct over time gets tricky as workloads resize, move regions, shift instance families, or migrate between RDS and Aurora. Done manually, this turns into constant cleanup—and the penalty is predictable: stranded RIs, low Savings Plans utilization, and creeping on-demand spend. Tools that automate commitment management can continuously evaluate live usage and existing commitments, then adjust your coverage strategy as things change so discounts keep applying instead of getting stuck.
#4: Use Aurora Serverless for Non-Prod and Non-Critical Databases
Provisioned RDS instances billed hourly or per second add unnecessary costs for environments that aren’t used continuously, like dev, test, or staging databases.
Aurora Serverless automatically scales capacity down when idle, significantly reducing compute charges. For example, instead of paying around $0.29/hour (~$210/month) for a db.r6g.large that’s idle most of the time, Aurora Serverless charges only for active usage. Switching your infrequently accessed or non-critical databases to Aurora Serverless can quickly cut costs without sacrificing availability.
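To make the math concrete: assuming Aurora Serverless v2 at roughly $0.12 per ACU-hour (the us-east-1 rate at the time of writing) and a dev database that runs at 2 ACUs for 8 hours per weekday and pauses when idle, you'd pay about 2 × 8 × 22 × $0.12 ≈ $42/month, versus ~$210/month for the always-on db.r6g.large. Rates vary by region and configuration, and scaling to zero requires a recent Aurora Serverless v2 version with auto-pause enabled, so verify current pricing and minimum-capacity settings before migrating.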
#5: Optimize Storage by Switching from io1 or gp2 to gp3
Many assume that io1 is the best choice for running databases because of its high IOPS capabilities. However, in practice, most workloads don’t need the extra performance and can cut costs significantly by switching to gp3—often saving up to 50%.
gp3 volumes provide a baseline of 3,000 IOPS and 125 MiB/s throughput, independent of volume size. Unlike gp2, where IOPS scales with volume size, gp3 allows you to configure IOPS and throughput separately, so you can right-size performance to your workload’s needs.
For most workloads, migrating from gp2 to gp3 is a no-brainer, typically yielding around 20% savings without performance loss. You can use nOps to easily filter out eligible volumes and make the switch in minutes.
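If you're handling it manually, the switch is a single API call per instance. A minimal sketch, where the identifier is a placeholder; note that RDS storage modifications can take a while to complete and enforce a cooldown (typically six hours) before the next storage change:

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifier; replace with your instance.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",
    StorageType="gp3",
    ApplyImmediately=True,  # otherwise applies in the next maintenance window
)
```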
#6: Stop Paying Extended Support Fees
AWS does not automatically upgrade your databases to the latest major version, and once an engine version reaches the end of standard support, you’re auto-enrolled into RDS Extended Support and billed for it. This is a common AWS billing pitfall, but an easy one to fix: upgrading to a supported engine version eliminates these costly charges.
For provisioned instances on RDS for MySQL, RDS Extended Support is priced per vCPU per hour and depends on the AWS Region and calendar date. nOps makes it trivially easy to find these costs — just type in “extended” and “RDS” into the filters and voilà!
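You can also surface these charges with the Cost Explorer API by grouping RDS spend by usage type and looking for entries containing "ExtendedSupport". A sketch, with the caveat that the exact usage-type strings vary by region and engine:

```python
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-11-01", "End": "2025-12-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Relational Database Service"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    if "ExtendedSupport" in usage_type:
        cost = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{usage_type}: ${float(cost):.2f}")
```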
#7: Scale Efficiently with Read Replicas & Aurora
Amazon RDS read replicas help scale read-heavy workloads by distributing queries across multiple instances. They can improve performance, but adding them without checking actual usage can lead to wasted spend. Before spinning up a read replica, check whether your primary instance is consistently over 30% CPU and I/O utilization; otherwise, optimizing queries or adding caching (e.g., Redis) may be a better first step.
For existing read replicas, if CPU and I/O utilization are below 30%, consider downsizing or consolidating back to the primary instance. But be mindful of I/O bandwidth limits when downsizing. Each instance type has a cap—if your workload relies on high throughput, a smaller instance might cause bottlenecks.
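To find consolidation candidates, you can apply the same CloudWatch check from tip #1 to replicas only, i.e. instances whose description includes a ReadReplicaSourceDBInstanceIdentifier. A brief sketch using the 30% threshold mentioned above:

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

for db in rds.describe_db_instances()["DBInstances"]:
    if "ReadReplicaSourceDBInstanceIdentifier" not in db:
        continue  # not a read replica
    stats = cw.get_metric_statistics(
        Namespace="AWS/RDS", MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db["DBInstanceIdentifier"]}],
        StartTime=start, EndTime=end, Period=86400, Statistics=["Average"],
    )
    if stats["Datapoints"]:
        avg = sum(p["Average"] for p in stats["Datapoints"]) / len(stats["Datapoints"])
        if avg < 30:
            print(f"Replica {db['DBInstanceIdentifier']} averages {avg:.1f}% CPU; "
                  f"consider downsizing or consolidating")
```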
For workloads with multiple read replicas, Aurora can be a better fit, as its Multi-AZ standby is also a read replica, reducing unnecessary duplication. Aurora also auto-scales storage and replicas, removing much of the manual tuning required with standard RDS.
#8: Minimize Data Transfer Costs
RDS cross-region replication starts at $0.02 per GB, and internet-bound traffic costs even more, making frequent data movement a hidden expense. Keep traffic within the same region when possible, and for unavoidable transfers, compress data or batch syncs to minimize costs.
For inter-service communication, use VPC endpoints or AWS PrivateLink to avoid unnecessary egress charges. If cross-region replication is required, evaluate whether all replicas are necessary—removing even one can significantly cut costs. For more tips on reducing data transfer costs, check out the complete guide.
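As one concrete example: if you export RDS snapshots to S3 or ship backups through a NAT gateway, a gateway VPC endpoint for S3 (which is free) keeps that traffic off the metered path. A sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC and route table IDs; replace with your own.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",  # match your region
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```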
#9: nOps Makes RDS Cost Optimization Easy
nOps simplifies RDS cost optimization by giving you complete visibility into your RDS spend, storage, and instance utilization—so you can cut waste, optimize resources, and avoid surprise charges.
And with nOps automated commitment management, getting the most out of every dollar you spend on RDS is easy.
- Understand & optimize your RDS costs: nOps gives you complete visibility into RDS spend, storage, and utilization—so you can quickly spot overprovisioning, underutilization, and other waste drivers across your environments.
- Automate storage and resource optimization: from one-click storage optimization (like migrating to gp3) to automated non-production shutdowns, nOps reduces ongoing waste without relying on manual schedules.
- Automated commitment management (RIs + Database Savings Plans): nOps continuously evaluates live RDS usage and existing commitments, then maintains the right coverage mix over time—helping prevent stranded RIs, improve Savings Plans utilization, and keep more spend discounted as workloads evolve.
nOps was recently ranked #1 in G2’s cloud cost management category, and we optimize $2 billion in cloud spend for our startup and enterprise customers.
Join our customers using nOps to understand your cloud costs and leverage automation with complete confidence by booking a demo today!
