Interested in understanding how Amazon calculates your S3 Storage Costs? If you’re looking for ways to monitor, optimize and reduce your S3 spend, we’ve got you covered with this complete guide.

We will discuss all the elements that contribute to your bill, explain how to compute your S3 storage costs, and share strategies and tips for optimizing your spending.

Here’s a preview of this guide’s structure:

  • What is Amazon S3, and why is S3 pricing so complicated?
  • How does S3 pricing work, simplified
  • A complete breakdown of Amazon S3 pricing by component
  • Best practices for reducing your S3 costs
  • Tools to help simplify your S3 bill

What is Amazon S3, and why is S3 pricing so complicated?


Amazon S3 (Simple Storage Service) offers highly scalable object storage with pricing that reflects its variety and flexibility. S3 is widely used for diverse purposes like data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

With multiple storage classes designed for different access frequencies and data lifecycles—from frequently accessed data in S3 Standard to rarely accessed in S3 Glacier—pricing can get complex. Knowing the right pricing plan to choose is one thing; there are also extra charges based on factors like storage volume, data requests, retrieval rates, and management features, all of which vary by region and specific usage. Additional costs stem from add-on features and management services like S3 Replication and S3 Object Lock.

To optimize your S3 spending, it’s important to get a solid understanding of all of these components — we’ll make it as easy and straightforward as possible in this detailed guide.

How does S3 pricing work, simplified

When calculating your total S3 storage costs, six components matter the most:

  • Storage: the amount of data you store (in GB)
  • Requests and data retrievals: operations that you execute to retrieve data, like GET, PUT, DELETE, etc.
  • Data transfer modes: how and where you transfer data
  • Management and analytics: management and analytics features and tools
  • Replication: copying data to multiple storage locations for increased availability and durability
  • S3 Object Lambda: data transformation and processing through S3 Object Lambda

These six components are the main factors determining your S3 costs. Amazon S3 uses a pay-as-you-go pricing model, with no upfront payment or commitment required. S3's pricing is usage-based, so you pay only for the resources you use.

It’s worth noting that AWS offers a free tier to new AWS customers, which includes 5 GB of Amazon S3 storage in the S3 Standard storage class; 20,000 GET requests; 2,000 PUT, COPY, POST, or LIST requests; and 100 GB of data transfer out each month.
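To make the pay-as-you-go model concrete, here is a minimal sketch of how the main components add up into a monthly bill. The function name and the rates are illustrative assumptions based on published us-east-1 list prices, not official figures; check the AWS pricing page for current, region-specific numbers.

```python
# Back-of-envelope monthly bill estimate combining the main S3 cost
# components. Rates are illustrative us-east-1 list prices and will
# vary by region, tier, and storage class.

def estimate_monthly_bill(storage_gb, put_requests, get_requests, egress_gb):
    storage_cost = storage_gb * 0.023        # S3 Standard, first-50-TB tier
    put_cost = put_requests / 1000 * 0.005   # PUT/COPY/POST/LIST per 1,000
    get_cost = get_requests / 1000 * 0.0004  # GET per 1,000
    transfer_cost = egress_gb * 0.09         # data transfer out, first tier
    return round(storage_cost + put_cost + get_cost + transfer_cost, 2)

# 500 GB stored, 100k PUTs, 1M GETs, 50 GB transferred out:
print(estimate_monthly_bill(500, 100_000, 1_000_000, 50))  # 16.9
```

Even this toy calculator shows why storage volume usually dominates: requests have to reach the millions before they rival the per-GB storage charge.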

S3 Pricing broken down by component

Now, let’s dive into each component of your S3 costs, with key considerations and best practices.

#1: S3 Storage Classes

Storage is the most significant component of S3 pricing, typically contributing the most to your total cost. Different S3 storage classes suit different use cases; the frequency of access, the required speed of access, and the amount of redundancy you need are the main factors in deciding which storage class to choose.

S3 Standard Storage: Frequently Accessed Data

This storage class is designed for frequently accessed data and provides high durability, performance, and availability (data is stored in a minimum of three Availability Zones). S3 Standard is best suited for general-purpose storage across a range of use cases requiring frequent data access, such as websites, content distribution, or data lakes (in fact, more than 93% of S3 objects are stored in this class).

Costs are higher for S3 Standard than for other S3 storage classes. Pricing is tiered: you pay $0.023 per GB per month for the first 50 TB, $0.022 per GB for the next 450 TB, and $0.021 per GB for storage above 500 TB.
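The tiered rates above can be sketched as a small calculator. This is a simplified illustration assuming us-east-1 rates and 1 TB = 1,024 GB; it covers storage only, not requests or transfer.

```python
# Tiered S3 Standard storage pricing as described above:
# first 50 TB at $0.023/GB, next 450 TB at $0.022/GB, above 500 TB at $0.021/GB.
TIERS = [
    (50 * 1024, 0.023),     # first 50 TB, expressed in GB
    (450 * 1024, 0.022),    # next 450 TB
    (float("inf"), 0.021),  # everything above 500 TB
]

def standard_storage_cost(gb):
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return round(cost, 2)

print(standard_storage_cost(100 * 1024))  # 100 TB stored -> 2304.0
```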


S3 Standard – Infrequent Access (IA) Storage

This storage class offers a lower storage cost option for data that is less frequently accessed, but still requires rapid access when needed. It is similar to Amazon S3 Standard, with a 40–46% lower storage price but fees for data retrieval. It is ideal for long-term storage, disaster recovery, and backups.

The S3 Standard – Infrequent Access tier starts at $0.0125 per GB per month.


S3 One Zone – Infrequent Access Tier

Typically, S3 data is replicated across multiple Availability Zones to ensure durability and high availability. With S3 One Zone-IA, however, data is kept in a single AWS Availability Zone. This makes it appropriate for less frequently accessed data that still needs quick retrieval, but it is not designed to be resilient to the actual loss of an AZ.

Thus, if regional redundancy is not something you require (for example, if you’re storing secondary backup copies or other data that can be recreated), you can benefit from rates that are 20% lower than S3 Standard-IA (starting at $0.01 per GB per month).


S3 Intelligent-Tiering

For automated cost optimization, S3 Intelligent-Tiering uses built-in monitoring and automation to shift data between a frequent-access (FA) tier and an infrequent-access (IA) tier. With S3 Intelligent-Tiering, you aren’t charged FA rates for data that isn’t frequently accessed: objects in the FA tier are charged at the S3 Standard rate, while those in the IA tier are discounted by 40–46%.

There is a monthly monitoring and auto-tiering charge associated with S3 Intelligent-Tiering, but there are no fees for data retrieval.

S3 Glacier Instant Retrieval

This storage class offers the lowest storage cost for long-lived, rarely accessed data that requires millisecond retrieval times. This archive instant access tier is suitable for storing long-term secondary backup copies and older data that might still need to be accessed quickly, such as certain compliance records or seldom-used digital assets.

S3 Glacier Flexible Retrieval

Aimed at data archiving, this storage class provides extremely low-cost storage with retrieval times ranging from a few minutes to several hours (much slower than the other classes). It is suitable for data that you anticipate retrieving only once or twice a year and that doesn’t require instant access.

S3 Glacier Deep Archive

This is the most cost-effective storage class for long-term archiving and digital preservation, where data retrieval times of 12 to 48 hours are acceptable. It is designed for infrequent access data — for example, if you’re in a highly regulated sector that needs to keep data sets for legal compliance.

S3 on Outposts Storage Class

This storage class is designed for workloads that require S3 data storage on-premises (for example, to meet low-latency, local data processing, or data residency needs). It allows you to securely store and retrieve data on your AWS Outposts as you would in the cloud, ensuring consistency across your hybrid environments. Pricing is tiered.

AWS Storage classes: A side-by-side comparison

  • S3 Standard: general-purpose storage for frequently accessed data; millisecond first-byte latency; no minimum storage duration or retrieval charge
  • S3 Intelligent-Tiering: automatic cost savings for data with unknown or changing access patterns; millisecond first-byte latency; no minimum storage duration or retrieval charge
  • S3 Express One Zone: high-performance storage for your most frequently accessed data; single-digit millisecond first-byte latency; 1-hour minimum storage duration
  • S3 Standard-IA: infrequently accessed data that needs millisecond access; 30-day minimum storage duration; per-GB retrieval charge
  • S3 One Zone-IA: re-creatable, infrequently accessed data; 30-day minimum storage duration; per-GB retrieval charge
  • S3 Glacier Instant Retrieval: long-lived data accessed a few times per year with instant retrieval; 90-day minimum storage duration; per-GB retrieval charge
  • S3 Glacier Flexible Retrieval: backup and archive data that is rarely accessed, at low cost; retrieval in minutes or hours; 90-day minimum storage duration; per-GB retrieval charge
  • S3 Glacier Deep Archive: archive data that is very rarely accessed, at very low cost; 180-day minimum storage duration; per-GB retrieval charge

Amazon S3 provides the most durable storage in the cloud. Based on its unique architecture, S3 is designed to exceed 99.999999999% (11 nines) data durability. Additionally, S3 stores data redundantly across a minimum of 3 Availability Zones by default, providing built-in resilience against widespread disaster. Customers can store data in a single AZ to minimize storage cost or latency, in multiple AZs for resilience against the permanent loss of an entire data center, or in multiple AWS Regions to meet geographic resilience requirements.

Table source: AWS S3 Storage Class Comparison Page

#2: Requests and Data Retrieval

AWS charges for the number of requests made to your Amazon S3 buckets, such as PUT, GET, COPY, and POST requests. (Data retrieval is a type of S3 request).

Each of these requests accrues specific charges that add to your overall S3 storage costs, based on storage class and request volume. For example, S3 Standard charges $0.005 per 1,000 PUT, COPY, POST, or LIST requests, compared to $0.05 per 1,000 for S3 Glacier Deep Archive, which is meant for infrequent access.
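For a quick sense of scale, the per-request rates quoted above can be compared in a couple of lines of Python. The rates are taken from the text and vary by region, so treat this as a sketch.

```python
# Cost of N requests at a given per-1,000-request rate.
def request_cost(requests, rate_per_1000):
    return round(requests / 1000 * rate_per_1000, 2)

# One million PUT-class requests:
print(request_cost(1_000_000, 0.005))  # S3 Standard -> 5.0
print(request_cost(1_000_000, 0.05))   # S3 Glacier Deep Archive -> 50.0
```

The tenfold difference is why chatty workloads belong in S3 Standard, not in archive classes.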

#3: Data Transfer Modes

Transferring data out of Amazon S3 to the internet or to other AWS regions (inter-region data transfer) incurs charges.

Transfers into Amazon S3 (ingress) are generally free, but egress (outbound) transfers over the free tier limit are charged per gigabyte. There are also additional charges if you want to accelerate your data transfer.

#4: Storage Management Features and Analytics

AWS S3 management and analytics costs can increase due to functionalities such as S3 Inventory, S3 Storage Class Analysis, S3 Storage Lens, and S3 Object Tagging, each providing detailed insights and management capabilities that, while enhancing operational efficiency, add to the overall expense.

Exact costs depend on the particular feature. For example, S3 Storage Lens bills $0.20 per million objects for the first 25 billion objects monitored monthly, $0.16 per million for the next 75 billion, and $0.12 per million for all objects beyond 100 billion.

#5: Replication

AWS S3 Replication involves duplicating S3 Storage data to another destination within the AWS ecosystem, increasing cloud usage costs. Typically, Amazon bills these replications as regular S3 usage, with costs based on the data transfer methods employed.

Same Region Replication (SRR) is generally the most cost-effective, incurring charges based on standard S3 Storage rates plus any associated data transfer fees from PUT requests. For Infrequent Access tiers, data retrieval charges are also added.

The total cost for SRR includes these charges plus the original storage costs. Conversely, Cross Region Replication (CRR) incurs additional fees for inter-region data transfers, potentially leading to higher overall expenses.

#6: S3 Object Lambda

AWS S3 Object Lambda integrates with your existing applications, allowing for the on-the-fly processing of S3 data using AWS Lambda functions. This AWS service modifies data retrieved from S3 Storage, transforming it for compatibility with applications that could not previously process it directly. Simply add your custom code, and S3 Object Lambda will handle the transformation and return the processed data to your application.

This service incurs a fee of $0.005 per GB of data returned.

Best practices for reducing your S3 storage costs

To strategically minimize your S3 Storage costs, you should consider these techniques — you can also check out the full guide to AWS S3 cost optimization.

1. Use Lifecycle Policies

Amazon S3 Lifecycle policies automate data management by transitioning objects to more cost-effective storage classes or deleting them based on predefined rules. Using the S3 Management Console, you can set rules to move infrequently accessed data to S3 Standard-IA after 30 days and to S3 Glacier Flexible Retrieval after 90 days for rarely accessed data.

Additionally, setting expiration actions to delete outdated logs or incomplete multipart uploads and using tagging for data categorization can provide more precise control over how lifecycle rules apply to specific datasets.
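As a sketch, the rules described above might look like this with boto3's put_bucket_lifecycle_configuration. The bucket name, prefixes, and day counts are placeholders, and the API call itself is commented out since it needs a real bucket and credentials.

```python
# Lifecycle rules: tier data down after 30/90 days, expire old logs,
# and abort stale multipart uploads. Prefixes and day counts are examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "data/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},  # Flexible Retrieval
            ],
        },
        {
            "ID": "expire-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Expiration": {"Days": 365},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
    ]
}

# To apply against a real bucket (requires AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket",  # placeholder
#     LifecycleConfiguration=lifecycle_config,
# )
```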

2. Delete Unused Data

Because you incur charges for data stored on S3, you should periodically find and delete data that you’re no longer using or data you could recreate relatively easily if you needed to. Or if you’re not sure about deleting objects forever, you can archive vast amounts of data at a low cost with S3 Glacier Deep Archive.
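One common approach is to filter an object listing by last-modified age. This hypothetical helper works on (key, last_modified) pairs like those returned by S3's list-objects API; the function and key names are illustrative.

```python
from datetime import datetime, timedelta, timezone

def stale_objects(objects, max_age_days, now=None):
    """Return keys of objects not modified within the last `max_age_days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [key for key, last_modified in objects if last_modified < cutoff]

# Example listing with made-up keys and timestamps:
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
listing = [
    ("reports/q1.csv", datetime(2023, 1, 15, tzinfo=timezone.utc)),
    ("reports/may.csv", datetime(2024, 5, 20, tzinfo=timezone.utc)),
]
print(stale_objects(listing, 180, now=now))  # ['reports/q1.csv']
```

The stale keys can then be reviewed before deletion, or transitioned to an archive class instead.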

3. Compress Data Before You Send to S3

You can reduce Amazon S3 charges related to data storage and transfer by compressing data before uploading it. Compression reduces the volume of data, impacting both storage space and transfer costs.

Common compression algorithms include GZIP and BZIP2, which are ideal for text and offer good compression ratios. LZMA, although more processing-intensive, achieves higher compression rates. For binary data or rapid compression, LZ4 is recommended due to its fast speeds. Furthermore, utilizing file formats like Parquet, which supports different compression codecs, optimizes storage by facilitating efficient querying and storage of complex, columnar datasets.
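As a sketch of the idea, Python's standard-library gzip module can compress a payload in memory before upload. The upload call is commented out, and the bucket and key names are placeholders.

```python
import gzip

# Repetitive text data (logs, JSON events, CSV) compresses especially well.
data = b'{"event": "page_view", "user": "abc"}\n' * 10_000
compressed = gzip.compress(data)

print(f"{len(compressed) / len(data):.1%} of original size")

# To upload the compressed object (requires AWS credentials):
# import boto3
# boto3.client("s3").put_object(
#     Bucket="my-example-bucket",       # placeholder
#     Key="events/2024-06-01.json.gz",  # placeholder
#     Body=compressed,
#     ContentEncoding="gzip",
# )
```

Setting ContentEncoding lets downstream HTTP clients decompress transparently; consumers reading via the SDK need to gunzip the body themselves.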

4. Use S3 Select to retrieve only the data you need

When you use S3 Select, you can specify SQL-like statements to filter the data and return only the information that is relevant to your query. This means you can avoid downloading the entire file, process it on your application side, and then discard unnecessary data. By doing this, you reduce the data transfer and processing costs.
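A hypothetical S3 Select request built for boto3's select_object_content might look like the sketch below. The bucket, key, and column names are made up for illustration, and the actual call is commented out because it requires a real bucket and credentials.

```python
# Server-side filtering: only rows matching the WHERE clause leave S3,
# reducing transfer volume. All names below are placeholders.
select_params = {
    "Bucket": "my-example-bucket",
    "Key": "orders/2024.csv",
    "ExpressionType": "SQL",
    "Expression": (
        "SELECT s.order_id, s.total FROM s3object s "
        "WHERE CAST(s.total AS FLOAT) > 100"
    ),
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"CSV": {}},
}

# import boto3
# response = boto3.client("s3").select_object_content(**select_params)
# for event in response["Payload"]:
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```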

5. Choose the right AWS Region and Limit Data Transfers

Selecting the right AWS region for your S3 storage can have a significant impact on costs, especially when it comes to data transfer fees. Data stored in a region closer to your users or applications typically reduces latency and transfer costs, because AWS charges for data transferred out of an S3 region to another region or the internet. Check out this full guide to data transfer for practical tips on reducing your costs.

Tools for S3 cost visibility and management

One of the most important ways to reduce your S3 costs is with automated monitoring, analytics, and optimization tools — here are the platforms that can help.

AWS Pricing Calculator

The AWS Pricing Calculator is a useful tool for estimating and managing the costs of various AWS services, including S3. Users can model their solutions before building them, explore pricing options for different scenarios, and create templates for repeated use.

While the AWS Pricing Calculator is a great free tool, it isn’t always the most beginner-friendly option. It requires a certain amount of expertise and knowledge to appropriately fill out its highly technical data fields.

S3 Storage Lens

Amazon S3 Storage Lens is an analytics tool that enhances visibility and management of your S3 storage.

It offers a dashboard and metrics to assess operational and cost efficiencies. Key features include identifying costly data access patterns to optimize costs, redistributing data across storage classes to save money, and monitoring replication to avoid unnecessary redundancy costs. You can also set customizable metrics and alerts to manage and mitigate potential issues.

However, it’s worth noting that S3 Storage Lens isn’t entirely free: advanced metrics and recommendations are billed per million objects monitored, starting at $0.20 per million objects per month.


Whether you’re looking to understand and optimize just your S3 costs or your entire cloud bill, nOps can help. Its free cloud cost management tool, Business Contexts, gives you complete cost visibility and intelligence across your entire AWS infrastructure. Analyze S3 costs by product, feature, team, deployment, environment, or any other dimension.

If your AWS bill is a big mystery, you’re not alone. nOps makes it easy to understand and allocate 100% of your AWS bill, even fixing mistagged and untagged resources for you.


nOps also offers a suite of ML-powered cost optimization features that help cloud users reduce their costs by up to 50% on autopilot, including:

  • Compute Copilot: automatically selects the optimal compute resource at the most cost-effective price in real time, and makes it easy to save with Spot discounts
  • ShareSave: automatic lifecycle management of your EC2/RDS/EKS commitments with a risk-free guarantee
  • nOps Essentials: a set of easy-to-apply cloud optimization features including EC2 and ASG rightsizing, resource scheduling, idle instance removal, storage optimization, and gp2 to gp3 migration

nOps processes over $1.5 billion in cloud spend and was recently named #1 in G2’s cloud cost management category.

You can book a demo to find out how nOps can help you start saving today.