Amazon Simple Storage Service Developer Guide
API Version 2006-03-01
Copyright © 2016 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any
manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other
trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to,
or sponsored by Amazon.
Table of Contents
What Is Amazon S3 1
How Do I 1
Introduction 2
Overview of Amazon S3 and This Guide 2
Advantages to Amazon S3 2
Amazon S3 Concepts 3
Buckets 3
Objects 3
Keys 4
Regions 4
Amazon S3 Data Consistency Model 4
Features 6
Reduced Redundancy Storage 6
Bucket Policies 7
AWS Identity and Access Management 8
Access Control Lists 8
Versioning 8
Operations 8
Amazon S3 Application Programming Interfaces (API) 8
The REST Interface 9
The SOAP Interface 9
Paying for Amazon S3 9
Related Services 9
Making Requests 11
About Access Keys 11
AWS Account Access Keys 11
IAM User Access Keys 12
Temporary Security Credentials 12
Request Endpoints 13
Making Requests over IPv6 13
Getting Started with IPv6 13
Using IPv6 Addresses in IAM Policies 14
Testing IP Address Compatibility 15
Using Dual-Stack Endpoints 16
Making Requests Using the AWS SDKs 19
Using AWS Account or IAM User Credentials 20
Using IAM User Temporary Credentials 25
Using Federated User Temporary Credentials 36
Making Requests Using the REST API 49
Dual-Stack Endpoints (REST API) 50
Virtual Hosting of Buckets 50
Request Redirection and the REST API 55
Buckets 58
Creating a Bucket 59
About Permissions 60
Accessing a Bucket 60
Bucket Configuration Options 61
Restrictions and Limitations 62
Rules for Naming 63
Examples of Creating a Bucket 64
Using the Amazon S3 Console 65
Using the AWS SDK for Java 65
Using the AWS SDK for .NET 66
Using the AWS SDK for Ruby Version 2 67
Using Other AWS SDKs 67
Deleting or Emptying a Bucket 67
Delete a Bucket 68
Empty a Bucket 71
Bucket Website Configuration 73
Using the AWS Management Console 73
Using the SDK for Java 73
Using the AWS SDK for .NET 76
Using the SDK for PHP 79
Using the REST API 81
Transfer Acceleration 81
Why use Transfer Acceleration 81
Getting Started 82
Requirements for Using Amazon S3 Transfer Acceleration 83
Transfer Acceleration Examples 83
Requester Pays Buckets 92
Configure with the Console 93
Configure with the REST API 93
DevPay and Requester Pays 96
Charge Details 96
Access Control 96
Billing and Reporting 96
Cost Allocation Tagging 96
Objects 98
Object Key and Metadata 99
Object Keys 99
Object Metadata 101
Storage Classes 103
Subresources 105
Versioning 106
Lifecycle Management 109
What Is Lifecycle Configuration 109
How Do I Configure a Lifecycle 110
Transitioning Objects: General Considerations 110
Expiring Objects: General Considerations 112
Lifecycle and Other Bucket Configurations 112
Lifecycle Configuration Elements 113
GLACIER Storage Class Additional Considerations 124
Specifying a Lifecycle Configuration 125
Cross-Origin Resource Sharing (CORS) 131
Cross-Origin Resource Sharing: Use-case Scenarios 131
How Do I Configure CORS on My Bucket 132
How Does Amazon S3 Evaluate the CORS Configuration On a Bucket 134
Enabling CORS 134
Troubleshooting CORS 142
Operations on Objects 142
Getting Objects 143
Uploading Objects 157
Copying Objects 212
Listing Object Keys 229
Deleting Objects 237
Restoring Archived Objects 259
Managing Access 266
Introduction 266
Overview 267
How Amazon S3 Authorizes a Request 272
Guidelines for Using the Available Access Policy Options 277
Example Walkthroughs Managing Access 280
Using Bucket Policies and User Policies 308
Access Policy Language Overview 308
Bucket Policy Examples 334
User Policy Examples 343
Managing Access with ACLs 364
Access Control List (ACL) Overview 364
Managing ACLs 369
Protecting Data 380
Data Encryption 380
Server-Side Encryption 381
Client-Side Encryption 409
Reduced Redundancy Storage 420
Setting the Storage Class of an Object You Upload 421
Changing the Storage Class of an Object in Amazon S3 421
Versioning 423
How to Configure Versioning on a Bucket 424
MFA Delete 424
Related Topics 425
Examples 426
Managing Objects in a Versioning-Enabled Bucket 428
Managing Objects in a Versioning-Suspended Bucket 444
Hosting a Static Website 449
Website Endpoints 450
Key Differences Between the Amazon Website and the REST API Endpoint 451
Configure a Bucket for Website Hosting 452
Overview 452
Syntax for Specifying Routing Rules 454
Index Document Support 457
Custom Error Document Support 459
Configuring a Redirect 460
Permissions Required for Website Access 462
Example Walkthroughs 462
Example: Setting Up a Static Website 463
Example: Setting Up a Static Website Using a Custom Domain 464
Notifications 472
Overview 472
How to Enable Event Notifications 473
Event Notification Types and Destinations 475
Supported Event Types 475
Supported Destinations 476
Configuring Notifications with Object Key Name Filtering 476
Examples of Valid Notification Configurations with Object Key Name Filtering 477
Examples of Notification Configurations with Invalid Prefix/Suffix Overlapping 479
Granting Permissions to Publish Event Notification Messages to a Destination 481
Granting Permissions to Invoke an AWS Lambda Function 481
Granting Permissions to Publish Messages to an SNS Topic or an SQS Queue 481
Example Walkthrough 1 483
Walkthrough Summary 483
Step 1: Create an Amazon SNS Topic 484
Step 2: Create an Amazon SQS Queue 484
Step 3: Add a Notification Configuration to Your Bucket 485
Step 4: Test the Setup 489
Example Walkthrough 2 489
Event Message Structure 489
Cross-Region Replication 492
Use-case Scenarios 492
Requirements 493
Related Topics 493
What Is and Is Not Replicated 493
What Is Replicated 493
What Is Not Replicated 494
Related Topics 495
How to Set Up 495
Create an IAM Role 495
Add Replication Configuration 497
Walkthrough 1: Same AWS Account 500
Walkthrough 2: Different AWS Accounts 501
Using the Console 505
Using the AWS SDK for Java 505
Using the AWS SDK for .NET 507
Replication Status Information 509
Related Topics 510
Troubleshooting 511
Related Topics 511
Replication and Other Bucket Configurations 511
Lifecycle Configuration and Object Replicas 512
Versioning Configuration and Replication Configuration 512
Logging Configuration and Replication Configuration 512
Related Topics 512
Request Routing 513
Request Redirection and the REST API 513
Overview 513
DNS Routing 514
Temporary Request Redirection 514
Permanent Request Redirection 516
DNS Considerations 516
Performance Optimization 518
Request Rate and Performance Considerations 518
Workloads with a Mix of Request Types 519
GET-Intensive Workloads 521
TCP Window Scaling 521
TCP Selective Acknowledgement 522
Monitoring with Amazon CloudWatch 523
Amazon S3 CloudWatch Metrics 523
Amazon S3 CloudWatch Dimensions 524
Accessing Metrics in Amazon CloudWatch 524
Related Resources 525
Logging API Calls with AWS CloudTrail 526
Amazon S3 Information in CloudTrail 526
Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs 528
Understanding Amazon S3 Log File Entries 528
Related Resources 530
BitTorrent 531
How You are Charged for BitTorrent Delivery 531
Using BitTorrent to Retrieve Objects Stored in Amazon S3 532
Publishing Content Using Amazon S3 and BitTorrent 533
Amazon DevPay 534
Amazon S3 Customer Data Isolation 534
Example 535
Amazon DevPay Token Mechanism 535
Amazon S3 and Amazon DevPay Authentication 535
Amazon S3 Bucket Limitation 536
Amazon S3 and Amazon DevPay Process 537
Additional Information 537
Error Handling 538
The REST Error Response 538
Response Headers 539
Error Response 539
The SOAP Error Response 540
Amazon S3 Error Best Practices 540
Retry InternalErrors 540
Tune Application for Repeated SlowDown errors 540
Isolate Errors 541
Troubleshooting Amazon S3 542
General: Getting my Amazon S3 request IDs 542
Using HTTP 542
Using a Web Browser 543
Using an AWS SDK 543
Using the AWS CLI 544
Using Windows PowerShell 544
Related Topics 544
Server Access Logging 546
Overview 546
Log Object Key Format 547
How are Logs Delivered 547
Best Effort Server Log Delivery 547
Bucket Logging Status Changes Take Effect Over Time 548
Related Topics 548
Enabling Logging Using the Console 548
Enabling Logging Programmatically 550
Enabling logging 550
Granting the Log Delivery Group WRITE and READ_ACP Permissions 550
Example: AWS SDK for .NET 551
Log Format 553
Custom Access Log Information 556
Programming Considerations for Extensible Server Access Log Format 556
Additional Logging for Copy Operations 556
Deleting Log Files 559
AWS SDKs and Explorers 560
Specifying Signature Version in Request Authentication 561
Set Up the AWS CLI 562
Using the AWS SDK for Java 563
The Java API Organization 564
Testing the Java Code Examples 564
Using the AWS SDK for .NET 565
The .NET API Organization 565
Running the Amazon S3 .NET Code Examples 566
Using the AWS SDK for PHP and Running PHP Examples 566
AWS SDK for PHP Levels 566
Running PHP Examples 567
Related Resources 568
Using the AWS SDK for Ruby Version 2 568
The Ruby API Organization 568
Testing the Ruby Script Examples 568
Using the AWS SDK for Python (Boto) 569
Appendices 570
Appendix A: Using the SOAP API 570
Common SOAP API Elements 570
Authenticating SOAP Requests 571
Setting Access Policy with SOAP 571
Appendix B: Authenticating Requests (AWS Signature Version 2) 573
Authenticating Requests Using the REST API 574
Signing and Authenticating REST Requests 575
Browser-Based Uploads Using POST 586
Resources 602
Document History 604
AWS Glossary 614
What Is Amazon S3
Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale
computing easier for developers.

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount
of data, at any time, from anywhere on the web. It gives any developer access to the same highly
scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global
network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to
developers.

This guide explains the core concepts of Amazon S3, such as buckets and objects, and how to work
with these resources using the Amazon S3 application programming interface (API).
How Do I...?

Information                                        | Relevant Sections
General product overview and pricing               | Amazon S3
Get a quick hands-on introduction to Amazon S3     | Amazon Simple Storage Service Getting Started Guide
Learn about Amazon S3 key terminology and concepts | Introduction to Amazon S3 (p. 2)
How do I work with buckets?                        | Working with Amazon S3 Buckets (p. 58)
How do I work with objects?                        | Working with Amazon S3 Objects (p. 98)
How do I make requests?                            | Making Requests (p. 11)
How do I manage access to my resources?            | Managing Access Permissions to Your Amazon S3 Resources (p. 266)
Introduction to Amazon S3
This introduction to Amazon Simple Storage Service is intended to give you a detailed summary of this
web service. After reading this section, you should have a good idea of what it offers and how it can fit
in with your business.
Topics
• Overview of Amazon S3 and This Guide (p. 2)
• Advantages to Amazon S3 (p. 2)
• Amazon S3 Concepts (p. 3)
• Features (p. 6)
• Amazon S3 Application Programming Interfaces (API) (p. 8)
• Paying for Amazon S3 (p. 9)
• Related Services (p. 9)
Overview of Amazon S3 and This Guide
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web.

This guide describes how you send requests to create buckets, store and retrieve your objects,
and manage permissions on your resources. The guide also describes access control and the
authentication process. Access control defines who can access objects and buckets within Amazon S3,
and the type of access (e.g., READ and WRITE). The authentication process verifies the identity of a
user who is trying to access Amazon Web Services (AWS).
Advantages to Amazon S3
Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness.
Following are some of the advantages of the Amazon S3 service:

• Create Buckets – Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.

• Store data in Buckets – Store an infinite amount of data in a bucket. Upload as many objects as
you like into an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored
and retrieved using a unique developer-assigned key.
• Download data – Download your data, or enable others to do so. Download your data any time you
like, or allow others to do the same.

• Permissions – Grant or deny access to others who want to upload or download data into your
Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.

• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
Internet-development toolkit.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon
S3 features will not be supported for SOAP. We recommend that you use either the REST
API or the AWS SDKs.
Amazon S3 Concepts
Topics
• Buckets (p. 3)
• Objects (p. 3)
• Keys (p. 4)
• Regions (p. 4)
• Amazon S3 Data Consistency Model (p. 4)

This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order you will most likely encounter them.
Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is
addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg.

Buckets serve several purposes: they organize the Amazon S3 namespace at the highest level, they
identify the account responsible for storage and data transfer charges, they play a role in access
control, and they serve as the unit of aggregation for usage reporting.

You can configure buckets so that they are created in a specific region. For more information, see
Buckets and Regions (p. 60). You can also configure a bucket so that every time an object is added
to it, Amazon S3 generates a unique version ID and assigns it to the object. For more information, see
Versioning (p. 423).

For more information about buckets, see Working with Amazon S3 Buckets (p. 58).
Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and
metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs
that describe the object. These include some default metadata, such as the date last modified, and
standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time
the object is stored.

An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 4) and Versioning (p. 423).
Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
one key. Because the combination of a bucket, key, and version ID uniquely identify each object,
Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the
object itself. Every object in Amazon S3 can be uniquely addressed through the combination of
the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL
http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
"2006-03-01/AmazonS3.wsdl" is the key.
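To make the addressing model concrete, the following minimal Java sketch (assuming the AWS SDK for Java; the bucket name, key, and version ID shown are placeholders, not values from this guide) retrieves one specific version of an object by combining all three identifiers.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class GetObjectVersionExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        // Bucket, key, and version ID together identify exactly one object.
        GetObjectRequest request = new GetObjectRequest(
                "examplebucket",      // bucket name (placeholder)
                "photos/puppy.jpg",   // key (placeholder)
                "exampleVersionId");  // version ID (placeholder)

        S3Object object = s3.getObject(request);
        System.out.println("Retrieved version: "
                + object.getObjectMetadata().getVersionId());
    }
}

If you omit the version ID, Amazon S3 returns the latest version of the object.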
Regions
You can choose the geographical region where Amazon S3 will store the buckets you create. You
might choose a region to optimize latency, minimize costs, or address regulatory requirements.
Amazon S3 currently supports the following regions:

• US East (N. Virginia) Region – Uses Amazon S3 servers in Northern Virginia
• US West (N. California) Region – Uses Amazon S3 servers in Northern California
• US West (Oregon) Region – Uses Amazon S3 servers in Oregon
• Asia Pacific (Mumbai) Region – Uses Amazon S3 servers in Mumbai
• Asia Pacific (Seoul) Region – Uses Amazon S3 servers in Seoul
• Asia Pacific (Singapore) Region – Uses Amazon S3 servers in Singapore
• Asia Pacific (Sydney) Region – Uses Amazon S3 servers in Sydney
• Asia Pacific (Tokyo) Region – Uses Amazon S3 servers in Tokyo
• EU (Frankfurt) Region – Uses Amazon S3 servers in Frankfurt
• EU (Ireland) Region – Uses Amazon S3 servers in Ireland
• South America (São Paulo) Region – Uses Amazon S3 servers in São Paulo

Objects stored in a region never leave the region unless you explicitly transfer them to another region.
For example, objects stored in the EU (Ireland) region never leave it. For more information about
Amazon S3 regions and endpoints, go to Regions and Endpoints in the AWS General Reference.
Amazon S3 Data Consistency Model
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all
regions, with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to
find if the object exists) before creating the object, Amazon S3 provides eventual consistency for
read-after-write.

Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.

Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read
might return the old data or the updated data, but it will never write corrupted or partial data.

Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data
centers. If a PUT request is successful, your data is safely stored. However, information about the
changes must replicate across Amazon S3, which can take some time, and so you might observe the
following behaviors:

• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the
change is fully propagated, the object might not appear in the list.

• A process replaces an existing object and immediately attempts to read it. Until the change is fully
propagated, Amazon S3 might return the prior data.
• A process deletes an existing object and immediately attempts to read it. Until the deletion is fully
propagated, Amazon S3 might return the deleted data.

• A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is
fully propagated, Amazon S3 might list the deleted object.
Note
Amazon S3 does not currently support object locking. If two PUT requests are simultaneously
made to the same key, the request with the latest time stamp wins. If this is an issue, you will
need to build an object-locking mechanism into your application.
Updates are key-based; there is no way to make atomic updates across keys. For example,
you cannot make the update of one key dependent on the update of another key unless you
design this functionality into your application.
The following table describes the characteristics of an eventually consistent read and a consistent read.

Eventually Consistent Read | Consistent Read
Stale reads possible       | No stale reads
Lowest read latency        | Potential higher read latency
Highest read throughput    | Potential lower read throughput
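The following minimal Java sketch (assuming the AWS SDK for Java; the bucket and key names are placeholders) illustrates the first behavior in the list above: a GET of a brand-new key is read-after-write consistent, but a key listing issued immediately after the PUT may not yet include the new key.

import java.io.ByteArrayInputStream;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ConsistencyDemo {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());
        String bucketName = "examplebucket"; // placeholder

        // Write a new object. A GET of this exact key is read-after-write
        // consistent, provided the key was not requested before it existed.
        byte[] contents = "report contents".getBytes();
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(contents.length);
        s3.putObject(bucketName, "2016-10-report.txt",
                new ByteArrayInputStream(contents), metadata);

        // Listing keys, however, is eventually consistent: the new key may or
        // may not appear in this listing until the change has fully propagated.
        ObjectListing listing = s3.listObjects(bucketName);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());
        }
    }
}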
Concurrent Applications
This section provides examples of eventually consistent and consistent read requests when multiple
clients are writing to the same items.

In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2
(read 2). For a consistent read, R1 and R2 both return color = ruby. For an eventually consistent
read, R1 and R2 might return color = red, color = ruby, or no results, depending on the amount
of time that has elapsed.

In the next example, W2 does not complete before the start of R1. Therefore, R1 might return
color = ruby or color = garnet for either a consistent read or an eventually consistent read. Also,
depending on the amount of time that has elapsed, an eventually consistent read might return no
results.

For a consistent read, R2 returns color = garnet. For an eventually consistent read, R2 might
return color = ruby, color = garnet, or no results, depending on the amount of time that has
elapsed.
In the last example, Client 2 performs W2 before Amazon S3 returns a success for W1, so the
outcome of the final value is unknown (color = garnet or color = brick). Any subsequent reads
(consistent read or eventually consistent) might return either value. Also, depending on the amount of
time that has elapsed, an eventually consistent read might return no results.
Features
Topics
• Reduced Redundancy Storage (p. 6)
• Bucket Policies (p. 7)
• AWS Identity and Access Management (p. 8)
• Access Control Lists (p. 8)
• Versioning (p. 8)
• Operations (p. 8)

This section describes important Amazon S3 features.
Reduced Redundancy Storage
Customers can store their data using the Amazon S3 Reduced Redundancy Storage (RRS) option.
RRS enables customers to reduce their costs by storing noncritical, reproducible data at lower levels
of redundancy than Amazon S3 standard storage. RRS provides a cost-effective, highly available
solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails,
transcoded media, or other processed data that can be easily reproduced. The RRS option stores
objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk
drive, but does not replicate objects as many times as standard Amazon S3 storage, and thus is even
more cost effective.

RRS provides 99.99% durability of objects over a given year. This durability level corresponds to an
average expected loss of 0.01% of objects annually.

AWS charges less for using RRS than for standard Amazon S3 storage. For pricing information, see
Amazon S3 Pricing.

For more information, see Storage Classes (p. 103).
For more information see Storage Classes (p 103)
Bucket Policies
Bucket policies provide centralized access control to buckets and objects based on a variety of
conditions, including Amazon S3 operations, requesters, resources, and aspects of the request
(e.g., IP address). The policies are expressed in our access policy language and enable centralized
management of permissions. The permissions attached to a bucket apply to all of the objects in that
bucket.
Individuals as well as companies can use bucket policies. When companies register with Amazon S3,
they create an account. Thereafter, the company becomes synonymous with the account. Accounts
are financially responsible for the Amazon resources they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:

• To a particular S3 bucket
• From an account's corporate network
• During business hours
• From an account's custom application (as identified by a user agent string)

An account can grant one application limited read and write access, but allow another to create and
delete buckets as well. An account could allow several field offices to store their daily reports in a
single bucket, allowing each office to write only to a certain set of names (e.g., "Nevada/*" or "Utah/*")
and only from the office's IP address range.
Unlike access control lists (described below), which can add (grant) permissions only on individual
objects, policies can either add or deny permissions across all (or a subset) of objects within a bucket.
With one request, an account can set the permissions of any number of objects in a bucket. An account
can use wildcards (similar to regular expression operators) on Amazon resource names (ARNs) and
other values, so that an account can control access to groups of objects that begin with a common
prefix or end with a given extension, such as .html.
Only the bucket owner is allowed to associate a policy with a bucket. Policies, written in the access
policy language, allow or deny requests based on:

• Amazon S3 bucket operations (such as PUT ?acl) and object operations (such as PUT Object
or GET Object)
• Requester
• Conditions specified in the policy

An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.
The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user
agents, HTTP referrer, and transports (HTTP and HTTPS).
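As an illustration of attaching a policy programmatically, the following Java sketch (assuming the AWS SDK for Java; the bucket name and the policy contents are placeholders, not a recommended policy) associates a simple policy with a bucket. Only the bucket owner's credentials can perform this call.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class SetBucketPolicyExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        // A policy that allows anonymous GetObject on every object in the
        // bucket. Bucket name and contents are illustrative placeholders.
        String policyText =
            "{"
          + "  \"Version\": \"2012-10-17\","
          + "  \"Statement\": [{"
          + "    \"Sid\": \"PublicRead\","
          + "    \"Effect\": \"Allow\","
          + "    \"Principal\": \"*\","
          + "    \"Action\": \"s3:GetObject\","
          + "    \"Resource\": \"arn:aws:s3:::examplebucket/*\""
          + "  }]"
          + "}";

        // Only the bucket owner can associate a policy with the bucket.
        s3.setBucketPolicy("examplebucket", policyText);
    }
}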
For more information, see Using Bucket Policies and User Policies (p. 308).
AWS Identity and Access Management
You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3
resources. For example, you can use IAM with Amazon S3 to control the type of access a user or
group of users has to specific parts of an Amazon S3 bucket your AWS account owns.

For more information about IAM, see the following:

• Identity and Access Management (IAM)
• Getting Started
• IAM User Guide
Access Control Lists
You can control access to each of your buckets and objects using an access control list (ACL). For more information, see Managing Access with ACLs (p. 364).
Versioning
Versioning enables you to keep multiple versions of an object in the same bucket. For more information, see Object Versioning (p. 106).
Operations
Following are the most common operations you'll execute through the API.

Common Operations

• Create a Bucket – Create and name your own bucket in which to store your objects.

• Write an Object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.

• Read an Object – Read data back. You can download the data via HTTP or BitTorrent.

• Delete an Object – Delete some of your data.

• List Keys – List the keys contained in one of your buckets. You can filter the key list based on a
prefix.

Details on this and all other functionality are described in detail later in this guide.
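The following minimal Java sketch (assuming the AWS SDK for Java; the bucket and key names are placeholders, and bucket names must be globally unique) runs through these common operations in order.

import java.io.ByteArrayInputStream;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class CommonOperations {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());
        String bucketName = "examplebucket"; // placeholder

        // Create a bucket.
        s3.createBucket(bucketName);

        // Write an object under a key of your choosing.
        byte[] data = "Hello, S3!".getBytes();
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(data.length);
        s3.putObject(bucketName, "greetings/hello.txt",
                new ByteArrayInputStream(data), metadata);

        // Read the object back.
        S3Object object = s3.getObject(bucketName, "greetings/hello.txt");
        System.out.println("Size: "
                + object.getObjectMetadata().getContentLength());

        // List keys, optionally filtered by a prefix.
        ObjectListing listing = s3.listObjects(bucketName, "greetings/");
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());
        }

        // Delete the object.
        s3.deleteObject(bucketName, "greetings/hello.txt");
    }
}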
Amazon S3 Application Programming Interfaces
(API)
The Amazon S3 architecture is designed to be programming language-neutral, using our supported
interfaces to store and retrieve objects.

Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences.
For example, in the REST interface, metadata is returned in HTTP headers. Because we only support
HTTP requests of up to 4 KB (not including the body), the amount of metadata you can supply is
restricted.

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.
The REST Interface
The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to
create, fetch, and delete buckets and objects.

You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.

The REST API uses the standard HTTP headers and status codes, so that standard browsers and
toolkits work as expected. In some areas, we have added functionality to HTTP (for example, we
added headers to support access control). In these cases, we have done our best to add the new
functionality in a way that matched the style of standard HTTP usage.
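Because the REST API is plain HTTP, any HTTP library can talk to it. As a sketch (not an official sample from this guide), the following Java program fetches an anonymously readable object using only the JDK's HttpURLConnection; the bucket and key are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PlainHttpGet {
    public static void main(String[] args) throws Exception {
        // Works only if the object is anonymously readable;
        // no signature is sent with this request.
        URL url = new URL("http://examplebucket.s3.amazonaws.com/greetings/hello.txt");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        System.out.println("HTTP status: " + connection.getResponseCode());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}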
The SOAP Interface
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common
way to use SOAP is to download the WSDL (go to http://doc.s3.amazonaws.com/2006-03-01/
AmazonS3.wsdl), use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and
then write code that uses the bindings to call Amazon S3.
Paying for Amazon S3
Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your
application. Most storage providers force you to purchase a predetermined amount of storage and
network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high
overage fees. If you do not exceed that capacity, you pay as though you used it all.

Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of Amazon's infrastructure.

Before storing anything in Amazon S3, you need to register with the service and provide a payment
instrument that will be charged at the end of each month. There are no setup fees to begin using the
service. At the end of the month, your payment instrument is automatically charged for that month's
usage.

For information about paying for Amazon S3 storage, see Amazon S3 Pricing.
Related Services
Once you load your data into Amazon S3, you can use it with other services that we provide. The
following services are the ones you might use most frequently:

• Amazon Elastic Compute Cloud – This web service provides virtual compute resources in the
cloud. For more information, go to the Amazon EC2 product details page.

• Amazon EMR – This web service enables businesses, researchers, data analysts, and developers
to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework
running on the web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, go to
the Amazon EMR product details page.

• AWS Import/Export – AWS Import/Export enables you to mail a storage device, such as a
RAID drive, to Amazon so that we can upload your (terabytes of) data into Amazon S3. For more
information, go to the AWS Import/Export Developer Guide.
Making Requests
Topics
• About Access Keys (p. 11)
• Request Endpoints (p. 13)
• Making Requests to Amazon S3 over IPv6 (p. 13)
• Making Requests Using the AWS SDKs (p. 19)
• Making Requests Using the REST API (p. 49)

Amazon S3 is a REST service. You can send requests to Amazon S3 using the REST API or the AWS
SDK (see Sample Code and Libraries) wrapper libraries that wrap the underlying Amazon S3 REST
API, simplifying your programming tasks.
Every interaction with Amazon S3 is either authenticated or anonymous. Authentication is a process
of verifying the identity of the requester trying to access an Amazon Web Services (AWS) product.
Authenticated requests must include a signature value that authenticates the request sender. The
signature value is, in part, generated from the requester's AWS access keys (access key ID and secret
access key). For more information about getting access keys, see How Do I Get Security Credentials?
in the AWS General Reference.

If you are using the AWS SDK, the libraries compute the signature from the keys you provide.
However, if you make direct REST API calls in your application, you must write the code to compute
the signature and add it to the request.
About Access Keys
The following sections review the types of access keys that you can use to make authenticated
requests.

AWS Account Access Keys

The account access keys provide full access to the AWS resources owned by the account. The
following are examples of access keys:

• Access key ID (a 20-character, alphanumeric string). For example: AKIAIOSFODNN7EXAMPLE
• Secret access key (a 40-character string). For example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
The access key ID uniquely identifies an AWS account. You can use these access keys to send
authenticated requests to Amazon S3.
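As a sketch of how an SDK client consumes these two values, the following Java fragment constructs a client from an explicit key pair. The values shown are the documentation examples above; in practice, prefer a credentials profile or environment variables over hard-coding keys.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class ExplicitCredentialsExample {
    public static void main(String[] args) {
        // Access key ID and secret access key identify the account and are
        // used to sign requests. These are documentation example values.
        BasicAWSCredentials credentials = new BasicAWSCredentials(
                "AKIAIOSFODNN7EXAMPLE",
                "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");

        AmazonS3 s3 = new AmazonS3Client(credentials);
        System.out.println("Buckets owned by this account: "
                + s3.listBuckets().size());
    }
}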
IAM User Access Keys
You can create one AWS account for your company; however, there may be several employees in
the organization who need access to your organization's AWS resources. Sharing your AWS account
access keys reduces security, and creating individual AWS accounts for each employee might not
be practical. Also, you cannot easily share resources such as buckets and objects because they are
owned by different accounts. To share resources, you must grant permissions, which is additional
work.

In such scenarios, you can use AWS Identity and Access Management (IAM) to create users under
your AWS account with their own access keys and attach IAM user policies granting appropriate
resource access permissions to them. To better manage these users, IAM enables you to create
groups of users and grant group-level permissions that apply to all users in that group.

These users are referred to as IAM users that you create and manage within AWS. The parent account
controls a user's ability to access AWS. Any resources an IAM user creates are under the control of,
and paid for by, the parent AWS account. These IAM users can send authenticated requests to Amazon
S3 using their own security credentials. For more information about creating and managing users
under your AWS account, go to the AWS Identity and Access Management product details page.
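A minimal Java sketch of this workflow (assuming the AWS SDK for Java; the user name is a placeholder) creates an IAM user and generates an access key pair for it. Granting the user permissions by attaching a policy is a separate step, not shown here.

import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient;
import com.amazonaws.services.identitymanagement.model.AccessKey;
import com.amazonaws.services.identitymanagement.model.CreateAccessKeyRequest;
import com.amazonaws.services.identitymanagement.model.CreateUserRequest;

public class CreateIamUserExample {
    public static void main(String[] args) {
        AmazonIdentityManagementClient iam = new AmazonIdentityManagementClient();

        // Create an IAM user under the parent AWS account.
        iam.createUser(new CreateUserRequest().withUserName("example-employee"));

        // Generate an access key pair the user can use to sign S3 requests.
        AccessKey key = iam.createAccessKey(
                new CreateAccessKeyRequest().withUserName("example-employee"))
                .getAccessKey();

        System.out.println("Access key ID: " + key.getAccessKeyId());
        // The secret is returned only at creation time; store it securely.
        System.out.println("Secret access key: " + key.getSecretAccessKey());
    }
}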
Temporary Security Credentials
In addition to creating IAM users with their own access keys, IAM also enables you to grant temporary
security credentials (temporary access keys and a security token) to any IAM user to enable them
to access your AWS services and resources. You can also manage users in your system outside
AWS. These are referred to as federated users. Additionally, users can be applications that you create
to access your AWS resources.

IAM provides the AWS Security Token Service API for you to request temporary security credentials.
You can use either the AWS STS API or the AWS SDK to request these credentials. The API returns
temporary security credentials (access key ID and secret access key), and a security token. These
credentials are valid only for the duration you specify when you request them. You use the access key
ID and secret key the same way you use them when sending requests using your AWS account or IAM
user access keys. In addition, you must include the token in each request you send to Amazon S3.

An IAM user can request these temporary security credentials for their own use or hand them out to
federated users or applications. When requesting temporary security credentials for federated users,
you must provide a user name and an IAM policy defining the permissions you want to associate with
these temporary security credentials. The federated user cannot get more permissions than the parent
IAM user who requested the temporary credentials.

You can use these temporary security credentials in making requests to Amazon S3. The API libraries
compute the necessary signature value using those credentials to authenticate your request. If you
send requests using expired credentials, Amazon S3 denies the request.
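The following minimal Java sketch (assuming the AWS SDK for Java; the one-hour duration is illustrative) requests session credentials from AWS STS and uses them to construct an Amazon S3 client. The SDK then includes the session token in each request automatically.

import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

public class TemporaryCredentialsExample {
    public static void main(String[] args) {
        AWSSecurityTokenServiceClient sts = new AWSSecurityTokenServiceClient();

        // Request credentials that expire after one hour (3600 seconds).
        Credentials tempCredentials = sts.getSessionToken(
                new GetSessionTokenRequest().withDurationSeconds(3600))
                .getCredentials();

        // The temporary access key pair is used like normal keys; the session
        // token must accompany every request, which the SDK handles for you.
        BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
                tempCredentials.getAccessKeyId(),
                tempCredentials.getSecretAccessKey(),
                tempCredentials.getSessionToken());

        AmazonS3 s3 = new AmazonS3Client(sessionCredentials);
        System.out.println("Buckets visible with temporary credentials: "
                + s3.listBuckets().size());
    }
}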
For information on signing requests using temporary security credentials in your REST API requests,
see Signing and Authenticating REST Requests (p. 575). For information about sending requests
using AWS SDKs, see Making Requests Using the AWS SDKs (p. 19).

For more information about IAM support for temporary security credentials, see Temporary Security
Credentials in the IAM User Guide.

For added security, you can require multi-factor authentication (MFA) when accessing your Amazon S3
resources by configuring a bucket policy. For information, see Adding a Bucket Policy to Require MFA
Authentication (p. 339). After you require MFA to access your Amazon S3 resources, the only way
you can access these resources is by providing temporary credentials that are created with an MFA
key. For more information, see the AWS Multi-Factor Authentication detail page and Configuring MFA-
Protected API Access in the IAM User Guide.
Request Endpoints
You send REST requests to the service's predefined endpoint. For a list of all AWS services and their
corresponding endpoints, go to Regions and Endpoints in the AWS General Reference.
Making Requests to Amazon S3 over IPv6
Amazon Simple Storage Service (Amazon S3) supports the ability to access S3 buckets using the
Internet Protocol version 6 (IPv6), in addition to the IPv4 protocol. Amazon S3 dual-stack endpoints
support requests to S3 buckets over IPv6 and IPv4. There are no additional charges for accessing
Amazon S3 over IPv6. For more information about pricing, see Amazon S3 Pricing.
Topics
• Getting Started Making Requests over IPv6 (p. 13)
• Using IPv6 Addresses in IAM Policies (p. 14)
• Testing IP Address Compatibility (p. 15)
• Using Amazon S3 Dual-Stack Endpoints (p. 16)
Getting Started Making Requests over IPv6
To make a request to an S3 bucket over IPv6, you need to use a dual-stack endpoint. The next section
describes how to make requests over IPv6 by using dual-stack endpoints.

The following are some things you should know before trying to access a bucket over IPv6:

• The client and the network accessing the bucket must be enabled to use IPv6.

• Both virtual hosted-style and path-style requests are supported for IPv6 access. For more
information, see Amazon S3 Dual-Stack Endpoints (p. 16).

• If you use source IP address filtering in your AWS Identity and Access Management (IAM) user
or bucket policies, you need to update the policies to include IPv6 address ranges. For more
information, see Using IPv6 Addresses in IAM Policies (p. 14).

• When using IPv6, server access log files output IP addresses in an IPv6 format. You need to update
existing tools, scripts, and software that you use to parse Amazon S3 log files so that they can
parse the IPv6-formatted Remote IP addresses. For more information, see Server Access Log
Format (p. 553) and Server Access Logging (p. 546).
Note
If you experience issues related to the presence of IPv6 addresses in log files, contact AWS
Support.
Making Requests over IPv6 by Using Dual-Stack Endpoints
You make requests with Amazon S3 API calls over IPv6 by using dual-stack endpoints. The Amazon
S3 API operations work the same way whether you're accessing Amazon S3 over IPv6 or over IPv4.
Performance should be the same, too.
When using the REST API, you access a dual-stack endpoint directly. For more information, see
Dual-Stack Endpoints (p. 16).

When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter
or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an
override of the Amazon S3 endpoint in the config file.

You can use a dual-stack endpoint to access a bucket over IPv6 from any of the following:

• The AWS CLI, see Using Dual-Stack Endpoints from the AWS CLI (p. 16).
• The AWS SDKs, see Using Dual-Stack Endpoints from the AWS SDKs (p. 17).
• The REST API, see Making Requests to Dual-Stack Endpoints by Using the REST API (p. 50).
Features Not Available over IPv6

The following features are not currently supported when accessing an S3 bucket over IPv6:

• Static website hosting from an S3 bucket
• Amazon S3 Transfer Acceleration
• BitTorrent

Amazon S3 IPv6 Access from Amazon EC2

Amazon EC2 instances currently support IPv4 only. They cannot reach Amazon S3 over IPv6. If you
use the dual-stack endpoints, normally the OS or applications automatically establish the connection
over IPv4. Before EC2 (VPC) supports IPv6, we recommend that you continue using the standard
IPv4-only endpoints from EC2 instances, or conduct sufficient testing before switching to the dual-
stack endpoints. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General
Reference.
Using IPv6 Addresses in IAM Policies
Before trying to access a bucket using IPv6, you must ensure that any IAM user or S3 bucket policies
that are used for IP address filtering are updated to include IPv6 address ranges. IP address filtering
policies that are not updated to handle IPv6 addresses may result in clients incorrectly losing or
gaining access to the bucket when they start using IPv6. For more information about managing access
permissions with IAM, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).

IAM policies that filter IP addresses use IP Address Condition Operators. The following bucket policy
identifies the 54.240.143.* range of allowed IPv4 addresses by using IP address condition operators.
Any IP addresses outside of this range will be denied access to the bucket (examplebucket). Since
all IPv6 addresses are outside of the allowed range, this policy prevents IPv6 addresses from being
able to access examplebucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
      }
    }
  ]
}
You can modify the bucket policy's Condition element to allow both IPv4 (54.240.143.0/24) and
IPv6 (2001:DB8:1234:5678::/64) address ranges, as shown in the following example. You can use
the same type of Condition block shown in the example to update both your IAM user and bucket
policies.
"Condition": {
  "IpAddress": {
    "aws:SourceIp": [
      "54.240.143.0/24",
      "2001:DB8:1234:5678::/64"
    ]
  }
}
Before using IPv6, you must update all relevant IAM user and bucket policies that use IP address
filtering to allow IPv6 address ranges. We recommend that you update your IAM policies with your
organization's IPv6 address ranges in addition to your existing IPv4 address ranges. For an example
of a bucket policy that allows access over both IPv6 and IPv4, see Restricting Access to Specific IP
Addresses (p. 336).

You can review your IAM user policies using the IAM console at https://console.aws.amazon.com/iam/.
For more information about IAM, see the IAM User Guide. For information about editing S3 bucket
policies, see Edit Bucket Permissions in the Amazon Simple Storage Service Console User Guide.
Testing IP Address Compatibility
If you are using Linux/Unix or Mac OS X, you can test whether you can access a dual-stack
endpoint over IPv6 by using the curl command, as shown in the following example:

curl -v https://s3.dualstack.us-west-2.amazonaws.com/

You get back information similar to the following example. If you are connected over IPv6, the
connected IP address will be an IPv6 address.

* About to connect() to s3-us-west-2.amazonaws.com port 80 (#0)
*   Trying IPv6 address... connected
* Connected to s3.dualstack.us-west-2.amazonaws.com (IPv6 address) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1
  OpenSSL/1.0.1t zlib/1.2.3
> Host: s3.dualstack.us-west-2.amazonaws.com
If you are using Microsoft Windows 7, you can test whether you can access a dual-stack endpoint over
IPv6 or IPv4 by using the ping command, as shown in the following example:

ping ipv6.s3.dualstack.us-west-2.amazonaws.com
Using Amazon S3 Dual-Stack Endpoints

Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. This section
describes how to use dual-stack endpoints.

Topics
• Amazon S3 Dual-Stack Endpoints (p. 16)
• Using Dual-Stack Endpoints from the AWS CLI (p. 16)
• Using Dual-Stack Endpoints from the AWS SDKs (p. 17)
• Using Dual-Stack Endpoints from the REST API (p. 18)
Amazon S3 Dual-Stack Endpoints

When you make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 or an IPv4
address. For more information about accessing a bucket over IPv6, see Making Requests to Amazon
S3 over IPv6 (p. 13).

When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name
(URI). You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a
path-style endpoint name. Amazon S3 supports only regional dual-stack endpoint names, which means
that you must specify the region as part of the name.

Use the following naming conventions for the dual-stack virtual hosted-style and path-style endpoint
names:

• Virtual hosted-style dual-stack endpoint:
bucketname.s3.dualstack.aws-region.amazonaws.com

• Path-style dual-stack endpoint:
s3.dualstack.aws-region.amazonaws.com/bucketname
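For example, for a bucket named examplebucket (a placeholder) in the US West (Oregon) region, the two forms would be examplebucket.s3.dualstack.us-west-2.amazonaws.com and s3.dualstack.us-west-2.amazonaws.com/examplebucket.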
For more information about endpoint name style, see Accessing a Bucket (p. 60). For a list of
Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference.
When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter
or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an
override of the Amazon S3 endpoint in the config file. The following sections describe how to use
dual-stack endpoints from the AWS CLI and the AWS SDKs.
Using Dual-Stack Endpoints from the AWS CLI

This section provides examples of AWS CLI commands used to make requests to a dual-stack
endpoint. For instructions on setting up the AWS CLI, see Set Up the AWS CLI (p. 562).

You set the configuration value use_dualstack_endpoint to true in a profile in your AWS Config
file to direct all Amazon S3 requests made by the s3 and s3api AWS CLI commands to the dual-stack
endpoint for the specified region. You specify the region in the config file or in a command using the
--region option.

When using dual-stack endpoints with the AWS CLI, both path and virtual addressing styles are
supported. The addressing style, set in the config file, controls if the bucket name is in the hostname or
part of the URL. By default, the CLI will attempt to use virtual style where possible, but will fall back to
path style if necessary. For more information, see AWS CLI Amazon S3 Configuration.
You can also make configuration changes by using a command, as shown in the following example,
which sets use_dualstack_endpoint to true and addressing_style to virtual in the default
profile.

aws configure set default.s3.use_dualstack_endpoint true
aws configure set default.s3.addressing_style virtual

If you want to use a dual-stack endpoint for specified AWS CLI commands only (not all commands),
you can use either of the following methods:

• You can use the dual-stack endpoint per command by setting the --endpoint-url parameter
to https://s3.dualstack.aws-region.amazonaws.com or http://s3.dualstack.aws-
region.amazonaws.com for any s3 or s3api command.

aws s3api list-objects --bucket bucketname --endpoint-url https://s3.dualstack.aws-region.amazonaws.com

• You can set up separate profiles in your AWS Config file. For example, create one profile that sets
use_dualstack_endpoint to true and a profile that does not set use_dualstack_endpoint.
When you run a command, specify which profile you want to use, depending upon whether or not
you want to use the dual-stack endpoint.
Note
You currently cannot use transfer acceleration with dual-stack endpoints. For more
information, see Using Transfer Acceleration from the AWS Command Line Interface (AWS
CLI) (p. 84).
Using Dual-Stack Endpoints from the AWS SDKs

This section provides examples of how to access a dual-stack endpoint by using the AWS SDKs.

AWS Java SDK Dual-Stack Endpoint Example

You use the setS3ClientOptions method in the AWS Java SDK to enable the use of a dual-stack
endpoint when creating an instance of AmazonS3Client, as shown in the following example.

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));
s3Client.setS3ClientOptions(S3ClientOptions.builder().enableDualstack().build());

If you are using the AWS Java SDK on Microsoft Windows, you might have to set the following Java
virtual machine (JVM) property:

java.net.preferIPv6Addresses=true
Note
You currently cannot use transfer acceleration with dual-stack endpoints. The
Java SDK will throw an exception if you configure both enableDualstack and
setAccelerateModeEnabled on the config object. For more information, see Using
Transfer Acceleration from the AWS SDK for Java (p. 85).

For information about how to create and test a working Java sample, see Testing the Java Code
Examples (p. 564).
AWS .NET SDK Dual-Stack Endpoint Example

When using the AWS SDK for .NET, you use the AmazonS3Config class to enable the use of a
dual-stack endpoint, as shown in the following example.

var config = new AmazonS3Config
{
    UseDualstackEndpoint = true,
    RegionEndpoint = RegionEndpoint.USWest2
};

using (var s3Client = new AmazonS3Client(config))
{
    var request = new ListObjectsRequest
    {
        BucketName = "myBucket"
    };

    var response = s3Client.ListObjects(request);
}
For a full .NET sample for listing objects, see Listing Keys Using the AWS SDK for .NET (p. 233).

Note
You currently cannot use transfer acceleration with dual-stack endpoints. The .NET
SDK will throw an exception if you configure both UseAccelerateEndpoint and
UseDualstackEndpoint on the config object. For more information, see Using Transfer
Acceleration from the AWS SDK for .NET (p. 88).

For information about how to create and test a working .NET sample, see Running the Amazon
S3 .NET Code Examples (p. 566).
Using Dual-Stack Endpoints from the REST API

For information about making requests to dual-stack endpoints by using the REST API, see Making
Requests to Dual-Stack Endpoints by Using the REST API (p. 50).
Making Requests Using the AWS SDKs
Topics
• Making Requests Using AWS Account or IAM User Credentials (p. 20)
• Making Requests Using IAM User Temporary Credentials (p. 25)
• Making Requests Using Federated User Temporary Credentials (p. 36)

You can send authenticated requests to Amazon S3 using either the AWS SDK or by making the
REST API calls directly in your application. The AWS SDK API uses the credentials that you provide
to compute the signature for authentication. If you use the REST API directly in your applications, you
must write the necessary code to compute the signature for authenticating your request. For a list of
available AWS SDKs, go to Sample Code and Libraries.
Making Requests Using AWS Account or IAM User Credentials

You can use your AWS account or IAM user security credentials to send authenticated requests to
Amazon S3. This section provides examples of how you can send authenticated requests using the
AWS SDK for Java, AWS SDK for .NET, and AWS SDK for PHP. For a list of available AWS SDKs, go
to Sample Code and Libraries.

Topics
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Java (p. 20)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for .NET (p. 21)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for PHP (p. 23)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Ruby (p. 24)

For more information about setting up your AWS credentials for use with the AWS SDK for Java, see
Testing the Java Code Examples (p. 564).
Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Java

The following tasks guide you through using the Java classes to send authenticated requests using
your AWS account credentials or IAM user credentials.

Making Requests Using Your AWS Account or IAM User Credentials

1. Create an instance of the AmazonS3Client class.
2. Execute one of the AmazonS3Client methods to send requests to Amazon S3. The
client generates the necessary signature value from your credentials and includes it in the
request it sends to Amazon S3.

The following Java code sample demonstrates the preceding tasks.

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
// Send sample request (list objects in a given bucket).
ObjectListing objectListing = s3client.listObjects(new
        ListObjectsRequest().withBucketName(bucketName));
Note
You can create the AmazonS3Client class without providing your security credentials.
Requests sent using this client are anonymous requests, without a signature. Amazon S3
returns an error if you send anonymous requests for a resource that is not publicly available.

To see how to make requests using your AWS credentials within the context of an example of listing
all the object keys in your bucket, see Listing Keys Using the AWS SDK for Java (p. 231). For
more examples, see Working with Amazon S3 Objects (p. 98) and Working with Amazon S3
Buckets (p. 58). You can test these examples using your AWS account or IAM user credentials.

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)
Making Requests Using AWS Account or IAM User Credentials - AWS SDK for .NET

The following tasks guide you through using the .NET classes to send authenticated requests using
your AWS account or IAM user credentials.

Making Requests Using Your AWS Account or IAM User Credentials

1. Create an instance of the AmazonS3Client class.
2. Execute one of the AmazonS3Client methods to send requests to Amazon S3. The
client generates the necessary signature from your credentials and includes it in the
request it sends to Amazon S3.

The following C# code sample demonstrates the preceding tasks.

For information on running the .NET examples in this guide and for instructions on how to store your
credentials in a configuration file, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class MakeS3Request
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                Console.WriteLine("Listing objects stored in a bucket");
                ListingObjects();
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void ListingObjects()
        {
            try
            {
                ListObjectsRequest request = new ListObjectsRequest
                {
                    BucketName = bucketName,
                    MaxKeys = 2
                };

                do
                {
                    ListObjectsResponse response = client.ListObjects(request);

                    // Process response.
                    foreach (S3Object entry in response.S3Objects)
                    {
                        Console.WriteLine("key = {0} size = {1}",
                            entry.Key, entry.Size);
                    }

                    // If response is truncated, set the marker to get the next
                    // set of keys.
                    if (response.IsTruncated)
                    {
                        request.Marker = response.NextMarker;
                    }
                    else
                    {
                        request = null;
                    }
                } while (request != null);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when listing objects",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Note
You can create the AmazonS3Client client without providing your security credentials.
Requests sent using this client are anonymous requests, without a signature. Amazon S3
returns an error if you send anonymous requests for a resource that is not publicly available.

For working examples, see Working with Amazon S3 Objects (p. 98) and Working with Amazon S3
Buckets (p. 58). You can test these examples using your AWS account or IAM user credentials.

For example, to list all the object keys in your bucket, see Listing Keys Using the AWS SDK
for .NET (p. 233).

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)
Making Requests Using AWS Account or IAM User Credentials - AWS SDK for PHP

This topic guides you through using a class from the AWS SDK for PHP to send authenticated
requests using your AWS account or IAM user credentials.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly
installed.

Making Requests Using Your AWS Account or IAM User Credentials

1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
method.
2. Execute one of the Aws\S3\S3Client methods to send requests to Amazon S3. For
example, you can use the Aws\S3\S3Client::listBuckets() method to send a request to list
all the buckets for your account. The client API generates the necessary signature using
your credentials and includes it in the request it sends to Amazon S3.

The following PHP code sample demonstrates the preceding tasks and illustrates how the client makes
a request using your security credentials to list all the buckets for your account.
use Aws\S3\S3Client;

// Instantiate the S3 client with your AWS credentials.
$s3 = S3Client::factory();

$result = $s3->listBuckets();
For working examples, see Working with Amazon S3 Objects (p. 98) and Working with Amazon S3 Buckets (p. 58). You can test these examples using your AWS account or IAM user credentials. For an example of listing object keys in a bucket, see Listing Keys Using the AWS SDK for PHP (p. 235).

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listBuckets() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Ruby
The following tasks guide you through using the AWS SDK for Ruby to send authenticated requests using your AWS account credentials or IAM user credentials.

Making Requests Using Your AWS Account or IAM User Credentials

1. Create an instance of the AWS::S3 class.
2. Make a request to Amazon S3 by enumerating objects in a bucket using the buckets method of AWS::S3. The client generates the necessary signature value from your credentials and includes it in the request it sends to Amazon S3.
The following Ruby code sample demonstrates the preceding tasks.

# Get an instance of the S3 interface using the specified credentials configuration.
s3 = AWS::S3.new()

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
puts bucket
Note
You can create the AWS::S3 client without providing your security credentials. Requests sent using this client are anonymous requests, without a signature. Amazon S3 returns an error if you send anonymous requests for a resource that is not publicly available.

For working examples, see Working with Amazon S3 Objects (p. 98) and Working with Amazon S3 Buckets (p. 58). You can test these examples using your AWS account or IAM user credentials.
Making Requests Using IAM User Temporary Credentials
Topics
• Making Requests Using IAM User Temporary Credentials - AWS SDK for Java (p. 25)
• Making Requests Using IAM User Temporary Credentials - AWS SDK for .NET (p. 28)
• Making Requests Using AWS Account or IAM User Temporary Credentials - AWS SDK for PHP (p. 31)
• Making Requests Using IAM User Temporary Credentials - AWS SDK for Ruby (p. 34)
An AWS account or an IAM user can request temporary security credentials and use them to send authenticated requests to Amazon S3. This section provides examples of how to use the AWS SDKs for Java, .NET, and PHP to obtain temporary security credentials and use them to authenticate your requests to Amazon S3.

Making Requests Using IAM User Temporary Credentials - AWS SDK for Java
An IAM user or an AWS account can request temporary security credentials (see Making Requests (p. 11)) using the AWS SDK for Java and use them to access Amazon S3. These credentials expire after the session duration. By default, the session duration is one hour. If you use IAM user credentials, you can specify a duration between 1 and 36 hours when requesting the temporary security credentials.
Making Requests Using IAM User Temporary Security Credentials

1. Create an instance of the AWS Security Token Service client, AWSSecurityTokenServiceClient.
2. Start a session by calling the GetSessionToken method of the STS client you created in the preceding step. You provide session information to this method using a GetSessionTokenRequest object. The method returns your temporary security credentials.
3. Package the temporary security credentials in an instance of the BasicSessionCredentials object so you can provide the credentials to your Amazon S3 client.
4. Create an instance of the AmazonS3Client class by passing in the temporary security credentials. You send the requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following Java code sample demonstrates the preceding tasks.
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AWSSecurityTokenServiceClient stsClient =
    new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

// Manually start a session.
GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
// The following duration can be set only if temporary credentials are requested
// by an IAM user.
getSessionTokenRequest.setDurationSeconds(7200);

GetSessionTokenResult sessionTokenResult =
    stsClient.getSessionToken(getSessionTokenRequest);
Credentials sessionCredentials = sessionTokenResult.getCredentials();

// Package the temporary security credentials as
// a BasicSessionCredentials object for an Amazon S3 client object to use.
BasicSessionCredentials basicSessionCredentials =
    new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                                sessionCredentials.getSecretAccessKey(),
                                sessionCredentials.getSessionToken());

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the basicSessionCredentials object.
AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

// Test. For example, get object keys in a bucket.
ObjectListing objects = s3.listObjects(bucketName);
Example
Note
If you obtain temporary security credentials using your AWS account credentials, the temporary security credentials are valid for only one hour. You can specify the session duration only if you use IAM user credentials to request a session.

The following Java code example lists the object keys in the specified bucket. For illustration, the code example obtains temporary security credentials for a default one-hour session and uses them to send an authenticated request to Amazon S3.

If you want to test the sample using IAM user credentials, you will need to create an IAM user under your AWS account. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.
import java.io.IOException;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;
import com.amazonaws.services.securitytoken.model.GetSessionTokenResult;
import com.amazonaws.services.s3.model.ObjectListing;

public class S3Sample {
    private static String bucketName = "*** Provide bucket name ***";

    public static void main(String[] args) throws IOException {
        AWSSecurityTokenServiceClient stsClient =
            new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

        // Start a session.
        GetSessionTokenRequest getSessionTokenRequest =
            new GetSessionTokenRequest();

        GetSessionTokenResult sessionTokenResult =
            stsClient.getSessionToken(getSessionTokenRequest);
        Credentials sessionCredentials = sessionTokenResult.getCredentials();
        System.out.println("Session Credentials: " +
            sessionCredentials.toString());

        // Package the session credentials as a BasicSessionCredentials
        // object for an S3 client object to use.
        BasicSessionCredentials basicSessionCredentials =
            new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                                        sessionCredentials.getSecretAccessKey(),
                                        sessionCredentials.getSessionToken());
        AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

        // Test. For example, get object keys for a given bucket.
        ObjectListing objects = s3.listObjects(bucketName);
        System.out.println("No. of Objects = " +
            objects.getObjectSummaries().size());
    }
}
Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)

Making Requests Using IAM User Temporary Credentials - AWS SDK for .NET
An IAM user or an AWS account can request temporary security credentials (see Making Requests (p. 11)) using the AWS SDK for .NET and use them to access Amazon S3. These credentials expire after the session duration. By default, the session duration is one hour. If you use IAM user credentials, you can specify a duration between 1 and 36 hours when requesting the temporary security credentials.
Making Requests Using IAM User Temporary Security Credentials

1. Create an instance of the AWS Security Token Service client, AmazonSecurityTokenServiceClient. For information about providing credentials, see Using the AWS SDKs, CLI, and Explorers (p. 560).
2. Start a session by calling the GetSessionToken method of the STS client you created in the preceding step. You provide session information to this method using a GetSessionTokenRequest object. The method returns your temporary security credentials.
3. Package the temporary security credentials in an instance of the SessionAWSCredentials object. You use this object to provide the temporary security credentials to your Amazon S3 client.
4. Create an instance of the AmazonS3Client class by passing in the temporary security credentials. You send requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following C# code sample demonstrates the preceding tasks.
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AmazonSecurityTokenServiceConfig config = new AmazonSecurityTokenServiceConfig();
AmazonSecurityTokenServiceClient stsClient =
    new AmazonSecurityTokenServiceClient(config);

GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
// The following duration can be set only if temporary credentials are requested
// by an IAM user.
getSessionTokenRequest.DurationSeconds = 7200; // seconds

Credentials credentials =
    stsClient.GetSessionToken(getSessionTokenRequest).GetSessionTokenResult.Credentials;
SessionAWSCredentials sessionCredentials =
    new SessionAWSCredentials(credentials.AccessKeyId,
                              credentials.SecretAccessKey,
                              credentials.SessionToken);

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the sessionCredentials object.
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);

// Test. For example, send a request to list object keys in a bucket.
var response = s3Client.ListObjects(bucketName);
Example
Note
If you obtain temporary security credentials using your AWS account security credentials, the temporary security credentials are valid for only one hour. You can specify the session duration only if you use IAM user credentials to request a session.

The following C# code example lists object keys in the specified bucket. For illustration, the code example obtains temporary security credentials for a default one-hour session and uses them to send an authenticated request to Amazon S3.

If you want to test the sample using IAM user credentials, you will need to create an IAM user under your AWS account. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

For instructions on how to create and test a working example, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.Runtime;
using Amazon.S3.Model;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class TempCredExplicitSessionStart
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            NameValueCollection appConfig = ConfigurationManager.AppSettings;
            string accessKeyID = appConfig["AWSAccessKey"];
            string secretAccessKeyID = appConfig["AWSSecretKey"];

            try
            {
                Console.WriteLine("Listing objects stored in a bucket");
                SessionAWSCredentials tempCredentials =
                    GetTemporaryCredentials(accessKeyID, secretAccessKeyID);

                // Create a client by providing temporary security credentials.
                using (client = new AmazonS3Client(tempCredentials,
                    Amazon.RegionEndpoint.USEast1))
                {
                    ListObjectsRequest listObjectRequest = new ListObjectsRequest();
                    listObjectRequest.BucketName = bucketName;

                    // Send request to Amazon S3.
                    ListObjectsResponse response = client.ListObjects(listObjectRequest);

                    // List the object count.
                    Console.WriteLine("Object count = {0}", response.S3Objects.Count);
                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message, s3Exception.InnerException);
            }
            catch (AmazonSecurityTokenServiceException stsException)
            {
                Console.WriteLine(stsException.Message, stsException.InnerException);
            }
        }

        private static SessionAWSCredentials GetTemporaryCredentials(
            string accessKeyId, string secretAccessKeyId)
        {
            AmazonSecurityTokenServiceClient stsClient =
                new AmazonSecurityTokenServiceClient(accessKeyId, secretAccessKeyId);

            GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
            getSessionTokenRequest.DurationSeconds = 7200; // seconds

            GetSessionTokenResponse sessionTokenResponse =
                stsClient.GetSessionToken(getSessionTokenRequest);
            Credentials credentials = sessionTokenResponse.Credentials;

            SessionAWSCredentials sessionCredentials =
                new SessionAWSCredentials(credentials.AccessKeyId,
                                          credentials.SecretAccessKey,
                                          credentials.SessionToken);
            return sessionCredentials;
        }
    }
}
Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)

Making Requests Using AWS Account or IAM User Temporary Credentials - AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to request temporary security credentials and use them to access Amazon S3.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.

An IAM user or an AWS account can request temporary security credentials (see Making Requests (p. 11)) using the AWS SDK for PHP and use them to access Amazon S3. These credentials expire when the session duration expires. By default, the session duration is one hour. If you use IAM user credentials, you can specify the duration, between 1 and 36 hours, when requesting the temporary security credentials. For more information about temporary security credentials, see Temporary Security Credentials in the IAM User Guide.
Making Requests Using AWS Account or IAM User Temporary Security Credentials

1. Create an instance of an AWS Security Token Service (AWS STS) client by using the Aws\Sts\StsClient class factory() method.
2. Execute the Aws\Sts\StsClient::getSessionToken() method to start a session. The method returns your temporary security credentials.
3. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method with the temporary security credentials you obtained in the preceding step. Any methods in the S3Client class that you call use the temporary security credentials to send authenticated requests to Amazon S3.

The following PHP code sample demonstrates how to request temporary security credentials and use them to access Amazon S3.
use Aws\Sts\StsClient;
use Aws\S3\S3Client;

// In real applications, the following code is part of your trusted code.
// It has your security credentials that you use to obtain temporary
// security credentials.
$sts = StsClient::factory();

$result = $sts->getSessionToken();

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client using temporary security credentials.
$credentials = $result->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

$result = $s3->listBuckets();
Note
If you obtain temporary security credentials using your AWS account security credentials, the temporary security credentials are valid for only one hour. You can specify the session duration only if you use IAM user credentials to request a session.
Example of Making an Amazon S3 Request Using Temporary Security Credentials

The following PHP code example lists object keys in the specified bucket using temporary security credentials. The code example obtains temporary security credentials for a default one-hour session and uses them to send an authenticated request to Amazon S3. For information about running the PHP examples in this guide, go to Running PHP Examples (p. 567).

If you want to test the example using IAM user credentials, you will need to create an IAM user under your AWS account. For information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide. For an example of setting session duration when using IAM user credentials to request a session, see Making Requests Using Federated User Temporary Credentials - AWS SDK for PHP (p. 43).
require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

$sts = StsClient::factory();

$credentials = $sts->getSessionToken()->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

try {
    $objects = $s3->getIterator('ListObjects', array(
        'Bucket' => $bucket
    ));

    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient Class
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::getSessionToken() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Making Requests Using IAM User Temporary Credentials - AWS SDK for Ruby
An IAM user or an AWS account can request temporary security credentials (see Making Requests (p. 11)) using the AWS SDK for Ruby and use them to access Amazon S3. These credentials expire after the session duration. By default, the session duration is one hour. If you use IAM user credentials, you can specify the duration, between 1 and 36 hours, when requesting the temporary security credentials.
Making Requests Using IAM User Temporary Security Credentials

1. Create an instance of the AWS Security Token Service client, AWS::STS::Session, by providing your credentials.
2. Start a session by calling the new_session method of the STS client that you created in the preceding step. You provide session information to this method using a GetSessionTokenRequest object. The method returns your temporary security credentials.
3. Use the temporary credentials in a new instance of the AWS::S3 class by passing in the temporary security credentials. You send the requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following Ruby code sample demonstrates the preceding tasks.
# Start a session.
# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.
sts = AWS::STS.new()
session = sts.new_session()
puts "Session expires at: #{session.expires_at.to_s}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
Example
Note
If you obtain temporary security credentials using your AWS account security credentials, the temporary security credentials are valid for only one hour. You can specify the session duration only if you use IAM user credentials to request a session.

The following Ruby code example lists the object keys in the specified bucket. For illustration, the code example obtains temporary security credentials for a default one-hour session and uses them to send an authenticated request to Amazon S3.

If you want to test the sample using IAM user credentials, you will need to create an IAM user under your AWS account. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.
require 'rubygems'
require 'aws-sdk'

# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.

bucket_name = '*** Provide bucket name ***'

# Start a session.
sts = AWS::STS.new()
session = sts.new_session()
puts "Session expires at: #{session.expires_at.to_s}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
puts bucket
Making Requests Using Federated User Temporary Credentials
Topics
• Making Requests Using Federated User Temporary Credentials - AWS SDK for Java (p. 36)
• Making Requests Using Federated User Temporary Credentials - AWS SDK for .NET (p. 40)
• Making Requests Using Federated User Temporary Credentials - AWS SDK for PHP (p. 43)
• Making Requests Using Federated User Temporary Credentials - AWS SDK for Ruby (p. 47)

You can request temporary security credentials and provide them to your federated users or applications who need to access your AWS resources. This section provides examples of how you can use the AWS SDK to obtain temporary security credentials for your federated users or applications and send authenticated requests to Amazon S3 using those credentials. For a list of available AWS SDKs, go to Sample Code and Libraries.
Note
Both the AWS account and an IAM user can request temporary security credentials for federated users. However, for added security, only an IAM user with the necessary permissions should request these temporary credentials to ensure that the federated user gets at most the permissions of the requesting IAM user. In some applications, you might find it suitable to create an IAM user with specific permissions for the sole purpose of granting temporary security credentials to your federated users and applications.
Making Requests Using Federated User Temporary Credentials - AWS SDK for Java
You can provide temporary security credentials for your federated users and applications (see Making Requests (p. 11)) so they can send authenticated requests to access your AWS resources. When requesting these temporary credentials from the IAM service, you must provide a user name and an IAM policy describing the resource permissions you want to grant. By default, the session duration is one hour. However, if you are requesting temporary credentials using IAM user credentials, you can explicitly set a different duration value when requesting the temporary security credentials for federated users and applications.

Note
To request temporary security credentials for federated users and applications, for added security you might want to use a dedicated IAM user with only the necessary access permissions. The temporary user you create can never get more permissions than the IAM user who requested the temporary security credentials. For more information, go to AWS Identity and Access Management FAQs.
Making Requests Using Federated User Temporary Security Credentials

1. Create an instance of the AWS Security Token Service client, AWSSecurityTokenServiceClient.
2. Start a session by calling the getFederationToken method of the STS client you created in the preceding step. You will need to provide session information, including the user name and an IAM policy that you want to attach to the temporary credentials. This method returns your temporary security credentials.
3. Package the temporary security credentials in an instance of the BasicSessionCredentials object. You use this object to provide the temporary security credentials to your Amazon S3 client.
4. Create an instance of the AmazonS3Client class by passing the temporary security credentials. You send requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following Java code sample demonstrates the preceding tasks.
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AWSSecurityTokenServiceClient stsClient =
    new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

GetFederationTokenRequest getFederationTokenRequest =
    new GetFederationTokenRequest();
getFederationTokenRequest.setDurationSeconds(7200);
getFederationTokenRequest.setName("User1");

// Define the policy and add it to the request.
Policy policy = new Policy();
// Define the policy here.
// Add the policy to the request.
getFederationTokenRequest.setPolicy(policy.toJson());

GetFederationTokenResult federationTokenResult =
    stsClient.getFederationToken(getFederationTokenRequest);
Credentials sessionCredentials = federationTokenResult.getCredentials();

// Package the session credentials as a BasicSessionCredentials object
// for an S3 client object to use.
BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
    sessionCredentials.getAccessKeyId(),
    sessionCredentials.getSecretAccessKey(),
    sessionCredentials.getSessionToken());

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the basicSessionCredentials object.
AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

// Test. For example, send a request to list object keys in a bucket.
ObjectListing objects = s3.listObjects(bucketName);
To set a condition in the policy, create a Condition object and associate it with the policy. The following code sample shows a condition that allows users from a specified IP range to list objects.

Policy policy = new Policy();
// Allow only a specified IP range.
Condition condition = new StringCondition(
    StringCondition.StringComparisonType.StringLike,
    ConditionFactory.SOURCE_IP_CONDITION_KEY, "192.168.143.*");

policy.withStatements(new Statement(Effect.Allow)
    .withActions(S3Actions.ListObjects)
    .withConditions(condition)
    .withResources(new Resource("arn:aws:s3:::" + bucketName)));

getFederationTokenRequest.setPolicy(policy.toJson());
Example
The following Java code example lists keys in the specified bucket. In the code example, you first obtain temporary security credentials for a two-hour session for your federated user (User1) and use them to send authenticated requests to Amazon S3.

When requesting temporary credentials for others, for added security, you use the security credentials of an IAM user who has permissions to request temporary security credentials. You can also limit the access permissions of this IAM user to ensure that the IAM user grants only the minimum application-specific permissions when requesting temporary security credentials. This sample only lists objects in a specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
The policy allows the IAM user to request temporary security credentials and access permission only to list your AWS resources. For information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

You can now use the IAM user security credentials to test the following example. The example sends an authenticated request to Amazon S3 using temporary security credentials. The example specifies the following policy when requesting temporary security credentials for the federated user (User1), which restricts access to list objects in a specific bucket (YourBucketName). You must update the policy and provide your own existing bucket name.
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
You must update the following sample and provide the bucket name that you specified in the preceding federated user access policy.
import java.io.IOException;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.S3Actions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetFederationTokenRequest;
import com.amazonaws.services.securitytoken.model.GetFederationTokenResult;
import com.amazonaws.services.s3.model.ObjectListing;

public class S3Sample {
    private static String bucketName = "*** Specify bucket name ***";

    public static void main(String[] args) throws IOException {
        AWSSecurityTokenServiceClient stsClient =
            new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

        GetFederationTokenRequest getFederationTokenRequest =
            new GetFederationTokenRequest();
        getFederationTokenRequest.setDurationSeconds(7200);
        getFederationTokenRequest.setName("User1");

        // Define the policy and add it to the request.
        Policy policy = new Policy();
        policy.withStatements(new Statement(Effect.Allow)
            .withActions(S3Actions.ListObjects)
            .withResources(new Resource("arn:aws:s3:::ExampleBucket")));

        getFederationTokenRequest.setPolicy(policy.toJson());

        // Get the temporary security credentials.
        GetFederationTokenResult federationTokenResult =
            stsClient.getFederationToken(getFederationTokenRequest);
        Credentials sessionCredentials = federationTokenResult.getCredentials();

        // Package the session credentials as a BasicSessionCredentials
        // object for an S3 client object to use.
        BasicSessionCredentials basicSessionCredentials =
            new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                                        sessionCredentials.getSecretAccessKey(),
                                        sessionCredentials.getSessionToken());
        AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

        // Test. For example, send a ListBucket request using the temporary
        // security credentials.
        ObjectListing objects = s3.listObjects(bucketName);
        System.out.println("No. of Objects = " +
            objects.getObjectSummaries().size());
    }
}
Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)

Making Requests Using Federated User Temporary Credentials - AWS SDK for .NET
You can provide temporary security credentials for your federated users and applications (see Making Requests (p. 11)) so they can send authenticated requests to access your AWS resources. When requesting these temporary credentials, you must provide a user name and an IAM policy describing the resource permissions you want to grant. By default, the session duration is one hour. You can explicitly set a different duration value when requesting the temporary security credentials for federated users and applications.

Note
To request temporary security credentials for federated users and applications, for added security you might want to use a dedicated IAM user with only the necessary access permissions. The temporary user you create can never get more permissions than the IAM user who requested the temporary security credentials. For more information, go to AWS Identity and Access Management FAQs.
Making Requests Using Federated User Temporary Credentials

1. Create an instance of the AWS Security Token Service client, the AmazonSecurityTokenServiceClient class. For information about providing credentials, see Using the AWS SDK for .NET (p. 565).
2. Start a session by calling the GetFederationToken method of the STS client. You will need to provide session information, including the user name and an IAM policy that you want to attach to the temporary credentials. You can provide an optional session duration. This method returns your temporary security credentials.
3. Package the temporary security credentials in an instance of the SessionAWSCredentials object. You use this object to provide the temporary security credentials to your Amazon S3 client.
4. Create an instance of the AmazonS3Client class by passing the temporary security credentials. You send requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following C# code sample demonstrates the preceding tasks.
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AmazonSecurityTokenServiceConfig config = new AmazonSecurityTokenServiceConfig();
AmazonSecurityTokenServiceClient stsClient =
    new AmazonSecurityTokenServiceClient(config);

GetFederationTokenRequest federationTokenRequest =
    new GetFederationTokenRequest();
federationTokenRequest.Name = "User1";
federationTokenRequest.Policy = "*** Specify policy ***";
federationTokenRequest.DurationSeconds = 7200;

GetFederationTokenResponse federationTokenResponse =
    stsClient.GetFederationToken(federationTokenRequest);
GetFederationTokenResult federationTokenResult =
    federationTokenResponse.GetFederationTokenResult;
Credentials credentials = federationTokenResult.Credentials;

SessionAWSCredentials sessionCredentials =
    new SessionAWSCredentials(credentials.AccessKeyId,
                              credentials.SecretAccessKey,
                              credentials.SessionToken);

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the sessionCredentials object.
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);

// Test. For example, send a request to list object keys in a bucket.
ListObjectsRequest listObjectRequest = new ListObjectsRequest();
listObjectRequest.BucketName = bucketName;
ListObjectsResponse response = s3Client.ListObjects(listObjectRequest);
Example
The following C# code example lists keys in the specified bucket. In the code example, you first obtain temporary security credentials for a two-hour session for your federated user (User1) and use them to send authenticated requests to Amazon S3.

When requesting temporary credentials for others, for added security, you use the security credentials of an IAM user who has permissions to request temporary security credentials. You can also limit the access permissions of this IAM user to ensure that the IAM user grants only the minimum application-specific permissions to the federated user. This sample only lists objects in a specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
The policy allows the IAM user to request temporary security credentials and access permission only to list your AWS resources. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

You can now use the IAM user security credentials to test the following example. The example sends an authenticated request to Amazon S3 using temporary security credentials. The example specifies the following policy when requesting temporary security credentials for the federated user (User1), which restricts access to list objects in a specific bucket (YourBucketName). You must update the policy and provide your own existing bucket name.
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
You must update the following sample and provide the bucket name that you specified in the preceding federated user access policy. For instructions on how to create and test a working example, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.Runtime;
using Amazon.S3.Model;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class TempFederatedCredentials
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            NameValueCollection appConfig = ConfigurationManager.AppSettings;
            string accessKeyID = appConfig["AWSAccessKey"];
            string secretAccessKeyID = appConfig["AWSSecretKey"];

            try
            {
                Console.WriteLine("Listing objects stored in a bucket");
                SessionAWSCredentials tempCredentials =
                    GetTemporaryFederatedCredentials(accessKeyID, secretAccessKeyID);

                // Create a client by providing temporary security credentials.
                using (client = new AmazonS3Client(tempCredentials,
                    Amazon.RegionEndpoint.USEast1))
                {
                    ListObjectsRequest listObjectRequest =
                        new ListObjectsRequest();
                    listObjectRequest.BucketName = bucketName;

                    ListObjectsResponse response =
                        client.ListObjects(listObjectRequest);

                    // List the object count.
                    Console.WriteLine("Object count = {0}", response.S3Objects.Count);
                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message, s3Exception.InnerException);
            }
            catch (AmazonSecurityTokenServiceException stsException)
            {
                Console.WriteLine(stsException.Message, stsException.InnerException);
            }
        }

        private static SessionAWSCredentials GetTemporaryFederatedCredentials(
            string accessKeyId, string secretAccessKeyId)
        {
            AmazonSecurityTokenServiceConfig config =
                new AmazonSecurityTokenServiceConfig();
            AmazonSecurityTokenServiceClient stsClient =
                new AmazonSecurityTokenServiceClient(
                    accessKeyId, secretAccessKeyId, config);

            GetFederationTokenRequest federationTokenRequest =
                new GetFederationTokenRequest();
            federationTokenRequest.DurationSeconds = 7200;
            federationTokenRequest.Name = "User1";
            federationTokenRequest.Policy = @"{
               ""Statement"":
               [
                 {
                   ""Sid"":""Stmt1311212314284"",
                   ""Action"":[""s3:ListBucket""],
                   ""Effect"":""Allow"",
                   ""Resource"":""arn:aws:s3:::YourBucketName""
                 }
               ]
             }";

            GetFederationTokenResponse federationTokenResponse =
                stsClient.GetFederationToken(federationTokenRequest);
            Credentials credentials = federationTokenResponse.Credentials;

            SessionAWSCredentials sessionCredentials =
                new SessionAWSCredentials(credentials.AccessKeyId,
                                          credentials.SecretAccessKey,
                                          credentials.SessionToken);
            return sessionCredentials;
        }
    }
}
Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)

Making Requests Using Federated User Temporary Credentials - AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to request temporary security credentials for federated users and applications and use them to access Amazon S3.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.

You can provide temporary security credentials to your federated users and applications (see Making Requests (p. 11)) so they can send authenticated requests to access your AWS resources. When requesting these temporary credentials, you must provide a user name and an IAM policy describing the resource permissions you want to grant. These credentials expire when the session duration expires. By default, the session duration is one hour. You can explicitly set a different duration value when requesting the temporary security credentials for federated users and applications. For more information about temporary security credentials, see Temporary Security Credentials in the IAM User Guide.

To request temporary security credentials for federated users and applications, for added security you might want to use a dedicated IAM user with only the necessary access permissions. The temporary user you create can never get more permissions than the IAM user who requested the temporary security credentials. For information about identity federation, go to AWS Identity and Access Management FAQs.
Making Requests Using Federated User Temporary Credentials

1. Create an instance of an AWS Security Token Service (AWS STS) client by using the Aws\Sts\StsClient class factory() method.
2. Execute the Aws\Sts\StsClient::getFederationToken() method by providing the name of the federated user in the array parameter's required Name key. You can also add the optional array parameter's Policy and DurationSeconds keys. The method returns temporary security credentials that you can provide to your federated users.
3. Any federated user who has the temporary security credentials can send requests to Amazon S3 by creating an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method with the temporary security credentials. Any methods in the S3Client class that you call use the temporary security credentials to send authenticated requests to Amazon S3.

The following PHP code sample demonstrates obtaining temporary security credentials for a federated user and using the credentials to access Amazon S3.
use Aws\Sts\StsClient;
use Aws\S3\S3Client;

// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
$sts = StsClient::factory();

// Fetch the federated credentials.
$result = $sts->getFederationToken(array(
    'Name'            => 'User1',
    'DurationSeconds' => 3600,
    'Policy'          => json_encode(array(
        'Statement' => array(
            array(
                'Sid'      => 'randomstatementid' . time(),
                'Action'   => array('s3:ListBucket'),
                'Effect'   => 'Allow',
                'Resource' => 'arn:aws:s3:::YourBucketName'
            )
        )
    ))
));

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
$credentials = $result->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

$result = $s3->listObjects();
Example of a Federated User Making an Amazon S3 Request Using Temporary Security Credentials

The following PHP code example lists keys in the specified bucket. In the code example, you first obtain temporary security credentials for a one-hour session for your federated user (User1) and use them to send authenticated requests to Amazon S3. For information about running the PHP examples in this guide, go to Running PHP Examples (p. 567).

When requesting temporary credentials for others, for added security, you use the security credentials of an IAM user who has permissions to request temporary security credentials. You can also limit the access permissions of this IAM user to ensure that the IAM user grants only the minimum application-specific permissions to the federated user. This example only lists objects in a specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
The policy allows the IAM user to request temporary security credentials and access permission only to list your AWS resources. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

You can now use the IAM user security credentials to test the following example. The example sends an authenticated request to Amazon S3 using temporary security credentials. The example specifies the following policy when requesting temporary security credentials for the federated user (User1), which restricts access to list objects in a specific bucket. You must update the policy with your own existing bucket name.
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
In the following example, you must replace YourBucketName with your own existing bucket name when specifying the policy resource.
require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.
$sts = StsClient::factory();

$result = $sts->getFederationToken(array(
    'Name'            => 'User1',
    'DurationSeconds' => 3600,
    'Policy'          => json_encode(array(
        'Statement' => array(
            array(
                'Sid'      => 'randomstatementid' . time(),
                'Action'   => array('s3:ListBucket'),
                'Effect'   => 'Allow',
                'Resource' => 'arn:aws:s3:::YourBucketName'
            )
        )
    ))
));

$credentials = $result->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

try {
    $objects = $s3->getIterator('ListObjects', array(
        'Bucket' => $bucket
    ));

    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient Class
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::getSessionToken() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Making Requests Using Federated User Temporary Credentials - AWS SDK for Ruby
You can provide temporary security credentials for your federated users and applications (see Making Requests (p. 11)) so that they can send authenticated requests to access your AWS resources. When requesting these temporary credentials from the IAM service, you must provide a user name and an IAM policy describing the resource permissions you want to grant. By default, the session duration is one hour. However, if you are requesting temporary credentials using IAM user credentials, you can explicitly set a different duration value when requesting the temporary security credentials for federated users and applications.

Note
To request temporary security credentials for federated users and applications, for added security you might want to use a dedicated IAM user with only the necessary access permissions. The temporary user you create can never get more permissions than the IAM user who requested the temporary security credentials. For more information, go to AWS Identity and Access Management FAQs.
Making Requests Using Federated User Temporary Security Credentials

1. Create an instance of the AWS Security Token Service client, AWS::STS::Session.
2. Start a session by calling the new_federated_session method of the STS client you created in the preceding step. You will need to provide session information, including the user name and an IAM policy that you want to attach to the temporary credentials. This method returns your temporary security credentials.
3. Create an instance of the AWS::S3 class by passing the temporary security credentials. You send requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.

The following Ruby code sample demonstrates the preceding tasks.
# Start a session with restricted permissions.
sts = AWS::STS.new()
policy = AWS::STS::Policy.new
policy.allow(
  :actions => ["s3:ListBucket"],
  :resources => "arn:aws:s3:::#{bucket_name}")

session = sts.new_federated_session(
  'User1',
  :policy => policy,
  :duration => 2*60*60)

puts "Policy: #{policy.to_json}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
Example
The following Ruby code example lists keys in the specified bucket. In the code example, you first obtain temporary security credentials for a two-hour session for your federated user (User1) and use them to send authenticated requests to Amazon S3.

When requesting temporary credentials for others, for added security, you use the security credentials of an IAM user who has permissions to request temporary security credentials. You can also limit the access permissions of this IAM user to ensure that the IAM user grants only the minimum application-specific permissions when requesting temporary security credentials. This sample only lists objects in a specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}

The policy allows the IAM user to request temporary security credentials and access permission only to list your AWS resources. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

You can now use the IAM user security credentials to test the following example. The example sends an authenticated request to Amazon S3 using temporary security credentials. The example specifies the following policy when requesting temporary security credentials for the federated user (User1), which restricts access to listing objects in a specific bucket (YourBucketName). To use this example in your code, update the policy and provide your own bucket name.
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}

To use this example in your code, provide your access key ID and secret key and the bucket name that you specified in the preceding federated user access policy.

require 'rubygems'
require 'aws-sdk'
# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.

bucket_name = '*** Provide bucket name ***'

# Start a session with restricted permissions.
sts = AWS::STS.new()
policy = AWS::STS::Policy.new
policy.allow(
  :actions => ["s3:ListBucket"],
  :resources => "arn:aws:s3:::#{bucket_name}")

session = sts.new_federated_session(
  'User1',
  :policy => policy,
  :duration => 2*60*60)

puts "Policy: #{policy.to_json}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
puts "No. of Objects: #{bucket.count.to_s}"
puts bucket
Making Requests Using the REST API
This section contains information on how to make requests to Amazon S3 endpoints by using the REST API. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference.

Topics
• Making Requests to Dual-Stack Endpoints by Using the REST API (p. 50)
• Virtual Hosting of Buckets (p. 50)
• Request Redirection and the REST API (p. 55)

When making requests by using the REST API, you can use virtual hosted–style or path-style URIs for the Amazon S3 endpoints. For more information, see Working with Amazon S3 Buckets (p. 58).
Example Virtual Hosted–Style Request
Following is an example of a virtual hosted–style request to delete the puppy.jpg file from the bucket named examplebucket.

DELETE /puppy.jpg HTTP/1.1
Host: examplebucket.s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Example Path-Style Request
Following is an example of a path-style version of the same request.

DELETE /examplebucket/puppy.jpg HTTP/1.1
Host: s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string
Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg.

You will receive an HTTP response code 307 Temporary Redirect error and a message indicating what the correct URI is for your resource (see the illustrative response after this list) if you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that uses either of the following:
• http://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you use http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (N. California) region.
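For illustration, a minimal sketch of such a redirect response follows. The bucket name, endpoint, and message text here are representative placeholders, not values from a real response; the actual response also includes additional elements such as a request ID.

HTTP/1.1 307 Temporary Redirect
Location: https://examplebucket.s3-eu-west-1.amazonaws.com/puppy.jpg
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>TemporaryRedirect</Code>
  <Message>Please re-send this request to the specified temporary endpoint.
  Continue to use the original request endpoint for future requests.</Message>
  <Endpoint>examplebucket.s3-eu-west-1.amazonaws.com</Endpoint>
</Error>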
Making Requests to Dual-Stack Endpoints by Using the REST API
When using the REST API, you can directly access a dual-stack endpoint by using a virtual hosted–style or a path-style endpoint name (URI). All Amazon S3 dual-stack endpoint names include the region in the name. Unlike the standard IPv4-only endpoints, both virtual hosted–style and path-style endpoints use region-specific endpoint names.
Example Virtual Hosted–Style Dual-Stack Endpoint Request
You can use a virtual hosted–style endpoint in your REST request, as shown in the following example that retrieves the puppy.jpg object from the bucket named examplebucket.

GET /puppy.jpg HTTP/1.1
Host: examplebucket.s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Example Path-Style Dual-Stack Endpoint Request
Or, you can use a path-style endpoint in your request, as shown in the following example.

GET /examplebucket/puppy.jpg HTTP/1.1
Host: s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

For more information about dual-stack endpoints, see Using Amazon S3 Dual-Stack Endpoints (p. 16).
Virtual Hosting of Buckets
Topics
• HTTP Host Header Bucket Specification (p. 51)
• Examples (p. 51)
• Customizing Amazon S3 URLs with CNAMEs (p. 53)
• Limitations (p. 54)
• Backward Compatibility (p. 55)
In general, virtual hosting is the practice of serving multiple web sites from a single web server. One way to differentiate sites is by using the apparent host name of the request instead of just the path name part of the URI. An ordinary Amazon S3 REST request specifies a bucket by using the first slash-delimited component of the Request-URI path. Alternatively, you can use Amazon S3 virtual hosting to address a bucket in a REST API call by using the HTTP Host header. In practice, Amazon S3 interprets Host as meaning that most buckets are automatically accessible (for limited types of requests) at http://bucketname.s3.amazonaws.com. Furthermore, by naming your bucket after your registered domain name and by making that name a DNS alias for Amazon S3, you can completely customize the URL of your Amazon S3 resources, for example, http://my.bucketname.com/.

Besides the attractiveness of customized URLs, a second benefit of virtual hosting is the ability to publish to the root directory of your bucket's virtual server. This ability can be important because many existing applications search for files in this standard location. For example, favicon.ico, robots.txt, and crossdomain.xml are all expected to be found at the root.
Important
Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is https://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg.
You will receive an HTTP response code 307 Temporary Redirect error and a message indicating the correct URI for your resource if you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that uses either of the following:
• https://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you use https://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (N. California) region.
Note
Amazon S3 routes any virtual hosted–style requests to the US East (N. Virginia) region by default if you use the US East (N. Virginia) endpoint (s3.amazonaws.com) instead of the region-specific endpoint (for example, s3-eu-west-1.amazonaws.com). When you create a bucket in any region, Amazon S3 updates DNS to reroute the request to the correct location, which might take time. In the meantime, the default rule applies, and your virtual hosted–style request goes to the US East (N. Virginia) region, and Amazon S3 redirects it with an HTTP 307 redirect to the correct region. For more information, see Request Redirection and the REST API (p. 513).
When using virtual hosted–style buckets with SSL, the SSL wild card certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic.
HTTP Host Header Bucket Specification
As long as your GET request does not use the SSL endpoint, you can specify the bucket for the request by using the HTTP Host header. The Host header in a REST request is interpreted as follows:
• If the Host header is omitted or its value is 's3.amazonaws.com', the bucket for the request will be the first slash-delimited component of the Request-URI, and the key for the request will be the rest of the Request-URI. This is the ordinary method, as illustrated by the first and second examples in this section. Omitting the Host header is valid only for HTTP 1.0 requests.
• Otherwise, if the value of the Host header ends in '.s3.amazonaws.com', the bucket name is the leading component of the Host header's value up to '.s3.amazonaws.com'. The key for the request is the Request-URI. This interpretation exposes buckets as subdomains of s3.amazonaws.com, as illustrated by the third and fourth examples in this section.
• Otherwise, the bucket for the request is the lowercase value of the Host header, and the key for the request is the Request-URI. This interpretation is useful when you have registered the same DNS name as your bucket name and have configured that name to be a CNAME alias for Amazon S3. The procedure for registering domain names and configuring DNS is beyond the scope of this guide, but the result is illustrated by the final example in this section.
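These three rules are straightforward to express in code. The following Java sketch is a hypothetical helper (not part of any SDK) that derives the bucket and key from a request's Host header and Request-URI the way the rules describe:

// Hypothetical illustration of the Host header rules above; not an AWS API.
public class HostHeaderResolver {
    private static final String S3_HOST = "s3.amazonaws.com";

    // Returns {bucket, key} for a given Host header value and Request-URI.
    public static String[] resolve(String host, String requestUri) {
        String path = requestUri.startsWith("/") ? requestUri.substring(1) : requestUri;
        if (host == null || host.equals(S3_HOST)) {
            // Rule 1: path-style; the first slash-delimited component is the bucket.
            int slash = path.indexOf('/');
            return new String[] { path.substring(0, slash), path.substring(slash + 1) };
        } else if (host.endsWith("." + S3_HOST)) {
            // Rule 2: virtual hosted-style; the bucket is the leading subdomain.
            String bucket = host.substring(0, host.length() - S3_HOST.length() - 1);
            return new String[] { bucket, path };
        } else {
            // Rule 3: CNAME-style; the whole (lowercased) host is the bucket.
            return new String[] { host.toLowerCase(), path };
        }
    }

    public static void main(String[] args) {
        // Mirrors the examples that follow in this section.
        System.out.println(String.join(" / ", resolve("s3.amazonaws.com", "/johnsmith.net/homepage.html")));
        System.out.println(String.join(" / ", resolve("johnsmith.net.s3.amazonaws.com", "/homepage.html")));
        System.out.println(String.join(" / ", resolve("www.johnsmith.net", "/homepage.html")));
    }
}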
Examples
This section provides example URLs and requests.
Example Path-Style Method
This example uses johnsmith.net as the bucket name and homepage.html as the key name.
The URL is as follows:

http://s3.amazonaws.com/johnsmith.net/homepage.html

The request is as follows:

GET /johnsmith.net/homepage.html HTTP/1.1
Host: s3.amazonaws.com

The request with HTTP 1.0 and omitting the host header is as follows:

GET /johnsmith.net/homepage.html HTTP/1.0

For information about DNS-compatible names, see Limitations (p. 54). For more information about keys, see Keys (p. 4).
Example Virtual Hosted–Style Method
This example uses johnsmith.net as the bucket name and homepage.html as the key name.
The URL is as follows:

http://johnsmith.net.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.net.s3.amazonaws.com

The virtual hosted–style method requires the bucket name to be DNS-compliant.
Example Virtual Hosted–Style Method for a Bucket in a Region Other Than the US East (N. Virginia) Region
This example uses johnsmith.eu as the name for a bucket in the EU (Ireland) region and homepage.html as the key name.
The URL is as follows:

http://johnsmith.eu.s3-eu-west-1.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3-eu-west-1.amazonaws.com

Note that, instead of using the region-specific endpoint, you can also use the US East (N. Virginia) region endpoint no matter what region the bucket resides in.

http://johnsmith.eu.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3.amazonaws.com
Example CNAME Method
This example uses www.johnsmith.net as the bucket name and homepage.html as the key name. To use this method, you must configure your DNS name as a CNAME alias for bucketname.s3.amazonaws.com.
The URL is as follows:

http://www.johnsmith.net/homepage.html

The example is as follows:

GET /homepage.html HTTP/1.1
Host: www.johnsmith.net
Customizing Amazon S3 URLs with CNAMEs
Depending on your needs, you might not want s3.amazonaws.com to appear on your website or service. For example, if you host your website images on Amazon S3, you might prefer http://images.johnsmith.net/ instead of http://johnsmith-images.s3.amazonaws.com/.
The bucket name must be the same as the CNAME. So http://images.johnsmith.net/filename would be the same as http://images.johnsmith.net.s3.amazonaws.com/filename if a CNAME were created to map images.johnsmith.net to images.johnsmith.net.s3.amazonaws.com.
Any bucket with a DNS-compatible name can be referenced as follows: http://[BucketName].s3.amazonaws.com/[Filename], for example,
http://images.johnsmith.net.s3.amazonaws.com/mydog.jpg. By using CNAME, you can map images.johnsmith.net to an Amazon S3 host name so that the previous URL could become http://images.johnsmith.net/mydog.jpg.
The CNAME DNS record should alias your domain name to the appropriate virtual hosted–style host name. For example, if your bucket name and domain name are images.johnsmith.net, the CNAME record should alias to images.johnsmith.net.s3.amazonaws.com.

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com.

Setting the alias target to s3.amazonaws.com also works, but it may result in extra HTTP redirects.
Amazon S3 uses the host name to determine the bucket name. For example, suppose that you have configured www.example.com as a CNAME for www.example.com.s3.amazonaws.com. When you access http://www.example.com, Amazon S3 receives a request similar to the following:

GET / HTTP/1.1
Host: www.example.com
Date: date
Authorization: signatureValue

Because Amazon S3 sees only the original host name www.example.com and is unaware of the CNAME mapping used to resolve the request, the CNAME and the bucket name must be the same.
Any Amazon S3 endpoint can be used in a CNAME. For example, s3-ap-southeast-1.amazonaws.com can be used in CNAMEs. For more information about endpoints, see Request Endpoints (p. 13).
To associate a host name with an Amazon S3 bucket using CNAMEs
1. Select a host name that belongs to a domain you control. This example uses the images subdomain of the johnsmith.net domain.
2. Create a bucket that matches the host name. In this example, the host and bucket names are images.johnsmith.net.
Note
The bucket name must exactly match the host name.
3. Create a CNAME record that defines the host name as an alias for the Amazon S3 bucket. For example:

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com.

Important
For request routing reasons, the CNAME record must be defined exactly as shown in the preceding example. Otherwise, it might appear to operate correctly, but it will eventually result in unpredictable behavior.
Note
The procedure for configuring DNS depends on your DNS server or DNS provider. For specific information, see your server documentation or contact your provider.
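If you want to confirm from code that the record is defined as an alias, you can query DNS directly. The following Java sketch uses the JDK's JNDI DNS provider to read the CNAME record for a host; the host name images.johnsmith.net is the same hypothetical example used in this procedure.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class CnameCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        DirContext dns = new InitialDirContext(env);

        // Look up only the CNAME record for the host.
        Attributes attrs = dns.getAttributes("images.johnsmith.net", new String[] { "CNAME" });
        Attribute cname = attrs.get("CNAME");
        // Expect images.johnsmith.net.s3.amazonaws.com. if the alias is set up correctly.
        System.out.println(cname == null ? "No CNAME record found" : "CNAME: " + cname.get());
    }
}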
Limitations
Specifying the bucket for the request by using the HTTP Host header is supported for non-SSL requests and when using the REST API. You cannot specify the bucket in SOAP by using a different endpoint.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
Backward Compatibility
Early versions of Amazon S3 incorrectly ignored the HTTP Host header. Applications that depend on this undocumented behavior must be updated to set the Host header correctly. Because Amazon S3 determines the bucket name from Host when it is present, the most likely symptom of this problem is to receive an unexpected NoSuchBucket error result code.
Request Redirection and the REST API
Topics
• Redirects and HTTP User-Agents (p. 55)
• Redirects and 100-Continue (p. 55)
• Redirect Example (p. 56)
This section describes how to handle HTTP redirects by using the Amazon S3 REST API. For general information about Amazon S3 redirects, see Request Redirection and the REST API (p. 513) in the Amazon Simple Storage Service API Reference.
Redirects and HTTP User-Agents
Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
• Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect), including HTTP standards such as Authorization and Date.
• Verify non-GET redirects, such as PUT and DELETE, work correctly.
• Verify large PUT requests follow redirects correctly.
• Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain, and the effect of the redirected request will be the same as that of the original request.
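As one illustration of handling the redirect at the application layer, the following Java sketch disables automatic redirect following and reissues a PUT to the Location returned with a 307. It is a minimal outline only: a real request would also carry the Authorization header and the other headers shown in the Redirect Example later in this section, all of which must be re-sent on the second request.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectAwarePut {
    public static void main(String[] args) throws Exception {
        byte[] body = "ha ha\n".getBytes("UTF-8");
        String target = "http://quotes.s3.amazonaws.com/nelson.txt";

        HttpURLConnection conn = put(target, body);
        if (conn.getResponseCode() == 307) {
            // Re-send the request, with all of its headers and body, to the temporary endpoint.
            String location = conn.getHeaderField("Location");
            conn = put(location, body);
        }
        System.out.println("Final status: " + conn.getResponseCode());
    }

    private static HttpURLConnection put(String url, byte[] body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setInstanceFollowRedirects(false); // handle the 307 ourselves
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(body.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        return conn;
    }
}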
Redirects and 100-Continue
To simplify redirect handling, improve efficiencies, and avoid the costs associated with sending a redirected request body twice, configure your application to use 100-continue for PUT operations. When your application uses 100-continue, it does not send the request body until it receives an acknowledgement. If the message is rejected based on the headers, the body of the message is not sent. For more information about 100-continue, go to RFC 2616, Section 8.2.3.
Note
According to RFC 2616, when using Expect: Continue with an unknown HTTP server, you should not wait an indefinite period before sending the request body, because some HTTP servers do not recognize 100-continue. However, Amazon S3 does recognize if your request contains an Expect: Continue and will respond with a provisional 100-continue status or a final status code. Additionally, no redirect error will occur after receiving the provisional 100-continue go-ahead. This will help you avoid receiving a redirect response while you are still writing the request body.
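Whether 100-continue is used is typically a client configuration setting rather than something you code by hand. As a rough sketch, with the Apache HttpClient 4.x library (which the AWS SDK for Java uses for transport) you can enable it per request as follows; the bucket and object names are placeholders.

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ExpectContinuePut {
    public static void main(String[] args) throws Exception {
        // Ask the client to send Expect: 100-continue and wait for the
        // server's go-ahead before transmitting the request body.
        RequestConfig config = RequestConfig.custom()
                .setExpectContinueEnabled(true)
                .build();

        HttpPut put = new HttpPut("http://examplebucket.s3.amazonaws.com/example.txt");
        put.setConfig(config);
        put.setEntity(new StringEntity("example body"));

        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(put)) {
            System.out.println(response.getStatusLine());
        }
    }
}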
Redirect Example
This section provides an example of client-server interaction using HTTP redirects and 100-continue.
Following is a sample PUT to the quotes.s3.amazonaws.com bucket.

PUT /nelson.txt HTTP/1.1
Host: quotes.s3.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue

Amazon S3 returns the following:

HTTP/1.1 307 Temporary Redirect
Location: http://quotes.s3-4c25d83b.amazonaws.com/nelson.txt?rk=8d47490b
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Mon, 15 Oct 2007 22:18:46 GMT
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>TemporaryRedirect</Code>
  <Message>Please re-send this request to the specified temporary endpoint.
  Continue to use the original request endpoint for future requests.</Message>
  <Endpoint>quotes.s3-4c25d83b.amazonaws.com</Endpoint>
  <Bucket>quotes</Bucket>
</Error>
The client follows the redirect response and issues a new request to the quotes.s3-4c25d83b.amazonaws.com temporary endpoint.

PUT /nelson.txt?rk=8d47490b HTTP/1.1
Host: quotes.s3-4c25d83b.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue

Amazon S3 returns a 100-continue indicating the client should proceed with sending the request body.

HTTP/1.1 100 Continue
The client sends the request body.

ha ha\n

Amazon S3 returns the final response.

HTTP/1.1 200 OK
Date: Mon, 15 Oct 2007 22:18:48 GMT
ETag: "a2c8d6b872054293afd41061e93bc289"
Content-Length: 0
Server: AmazonS3
Working with Amazon S3 Buckets
Amazon S3 is cloud storage for the Internet. To upload your data (photos, videos, documents, etc.), you first create a bucket in one of the AWS Regions. You can then upload any number of objects to the bucket.
In terms of implementation, buckets and objects are resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console internally uses the Amazon S3 APIs to send requests to Amazon S3.
In this section, we explain working with buckets. For information about working with objects, see Working with Amazon S3 Objects (p. 98).
Amazon S3 bucket names are globally unique, regardless of the AWS Region in which you create the bucket. You specify the name at the time you create the bucket. For bucket naming guidelines, see Bucket Restrictions and Limitations (p. 62).
Amazon S3 creates buckets in a region you specify. You can choose any AWS Region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or EU (Frankfurt) regions. For a list of Amazon S3 regions, go to Regions and Endpoints in the AWS General Reference.
Note
Objects belonging to a bucket that you create in a specific AWS Region never leave that region, unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it.
Topics
• Creating a Bucket (p. 59)
• Accessing a Bucket (p. 60)
• Bucket Configuration Options (p. 61)
• Bucket Restrictions and Limitations (p. 62)
• Examples of Creating a Bucket (p. 64)
• Deleting or Emptying a Bucket (p. 67)
• Managing Bucket Website Configuration (p. 73)
• Amazon S3 Transfer Acceleration (p. 81)
• Requester Pays Buckets (p. 92)
• Buckets and Access Control (p. 96)
• Billing and Reporting of Buckets (p. 96)
Creating a Bucket
Amazon S3 provides APIs for you to create and manage buckets. By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your bucket limit by submitting a service limit increase. To learn more about submitting a bucket limit increase, go to AWS Service Limits in the AWS General Reference.
When you create a bucket, you provide a name and the AWS Region where you want the bucket created. For information about naming buckets, see Rules for Bucket Naming (p. 63).
Within each bucket, you can store any number of objects. You can create a bucket using any of the following methods:
• Create the bucket using the console.
• Create the bucket programmatically using the AWS SDKs.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests. For more information, go to PUT Bucket in the Amazon Simple Storage Service API Reference.
When using the AWS SDKs, you first create a client and then send a request to create a bucket using the client. You can specify an AWS Region when you create the client; US East (N. Virginia) is the default region. You can also specify a region in your create bucket request. Note the following:
• If you create a client by specifying the US East (N. Virginia) Region, it uses the following endpoint to communicate with Amazon S3:

s3.amazonaws.com

You can use this client to create a bucket in any AWS Region. In your create bucket request:
• If you don't specify a region, Amazon S3 creates the bucket in the US East (N. Virginia) Region.
• If you specify an AWS Region, Amazon S3 creates the bucket in the specified region.
• If you create a client by specifying any other AWS Region, each of these regions maps to the region-specific endpoint:

s3-<region>.amazonaws.com

For example, if you create a client by specifying the eu-west-1 region, it maps to the following region-specific endpoint:

s3-eu-west-1.amazonaws.com

In this case, you can use the client to create a bucket only in the eu-west-1 region. Amazon S3 returns an error if you specify any other region in your create bucket request.
• If you create a client to access a dual-stack endpoint, you must specify an AWS Region. For more information, see Dual-Stack Endpoints (p. 16).
For a list of available AWS Regions, go to Regions and Endpoints in the AWS General Reference.
For examples, see Examples of Creating a Bucket (p. 64).
About Permissions
You can use your AWS account root credentials to create a bucket and perform any other Amazon S3 operation. However, AWS recommends not using the root credentials of your AWS account to make requests such as creating a bucket. Instead, create an IAM user and grant that user full access (users by default have no permissions). We refer to these users as administrator users. You can use the administrator user credentials, instead of the root credentials of your account, to interact with AWS and perform tasks, such as creating a bucket, creating users, and granting them permissions.
For more information, go to Root Account Credentials vs. IAM User Credentials in the AWS General Reference and IAM Best Practices in the IAM User Guide.
The AWS account that creates a resource owns that resource. For example, if you create an IAM user in your AWS account and grant the user permission to create a bucket, the user can create a bucket. But the user does not own the bucket; the AWS account to which the user belongs owns the bucket. The user will need additional permission from the resource owner to perform any other bucket operations. For more information about managing permissions for your Amazon S3 resources, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
Accessing a Bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost all bucket operations without having to write any code.
If you access a bucket programmatically, note that Amazon S3 supports RESTful architecture in which your buckets and objects are resources, each with a resource URI that uniquely identifies the resource.
Amazon S3 supports both virtual hosted–style and path-style URLs to access a bucket.
• In a virtual hosted–style URL, the bucket name is part of the domain name in the URL. For example:
• http://bucket.s3.amazonaws.com
• http://bucket.s3-aws-region.amazonaws.com
In a virtual hosted–style URL, you can use either of these endpoints. If you make a request to the http://bucket.s3.amazonaws.com endpoint, the DNS has sufficient information to route your request directly to the region where your bucket resides.
For more information, see Virtual Hosting of Buckets (p. 50).
• In a path-style URL, the bucket name is not part of the domain (unless you use a region-specific endpoint). For example:
• US East (N. Virginia) region endpoint: https://s3.amazonaws.com/bucket
• Region-specific endpoint: https://s3-aws-region.amazonaws.com/bucket
In a path-style URL, the endpoint you use must match the region in which the bucket resides. For example, if your bucket is in the South America (São Paulo) region, you must use the https://s3-sa-east-1.amazonaws.com/bucket endpoint. If your bucket is in the US East (N. Virginia) region, you must use the https://s3.amazonaws.com/bucket endpoint. A small helper that builds both URL forms follows this list.
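The following Java sketch is a hypothetical helper (an illustration, not an SDK API) that makes the two addressing styles concrete for a given bucket, key, and endpoint:

// Hypothetical helper illustrating the two URL styles; not part of the AWS SDK.
public class S3Urls {
    // For example: endpoint = "s3-sa-east-1.amazonaws.com"
    public static String virtualHostedStyle(String bucket, String key, String endpoint) {
        return "http://" + bucket + "." + endpoint + "/" + key;
    }

    public static String pathStyle(String bucket, String key, String endpoint) {
        return "https://" + endpoint + "/" + bucket + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(virtualHostedStyle("examplebucket", "puppy.jpg", "s3.amazonaws.com"));
        System.out.println(pathStyle("examplebucket", "puppy.jpg", "s3-sa-east-1.amazonaws.com"));
    }
}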
Important
Because buckets can be accessed using path-style and virtual hosted–style URLs, we recommend you create buckets with DNS-compliant bucket names. For more information, see Bucket Restrictions and Limitations (p. 62).
Accessing an S3 Bucket over IPv6
Amazon S3 has a set of dual-stack endpoints, which support requests to S3 buckets over both Internet Protocol version 6 (IPv6) and IPv4. For more information, see Making Requests over IPv6 (p. 13).
Bucket Configuration Options
Amazon S3 supports various options for you to configure your bucket. For example, you can configure your bucket for website hosting, add a configuration to manage the lifecycle of objects in the bucket, and configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to store and manage the bucket configuration information. That is, using the Amazon S3 API, you can create and manage these subresources. You can also use the console or the AWS SDKs.
Note
There are also object-level configurations. For example, you can configure object-level permissions by configuring an access control list (ACL) specific to that object.
These are referred to as subresources because they exist in the context of a specific bucket or object.
The following table lists subresources that enable you to manage bucket-specific configurations.
location: When you create a bucket, you specify the AWS Region where you want Amazon S3 to create the bucket. Amazon S3 stores this information in the location subresource and provides an API for you to retrieve this information.

policy and ACL (access control list): All your resources (such as buckets and objects) are private by default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant and manage bucket-level permissions. Amazon S3 stores the permission information in the policy and acl subresources. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).

cors (cross-origin resource sharing): You can configure your bucket to allow cross-origin requests. For more information, see Enabling Cross-Origin Resource Sharing.

website: You can configure your bucket for static website hosting. Amazon S3 stores this configuration by creating a website subresource. For more information, see Hosting a Static Website on Amazon S3.

logging: Logging enables you to track requests for access to your bucket. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill. For more information, see Server Access Logging (p. 546).

event notification: You can enable your bucket to send you notifications of specified bucket events. For more information, see Configuring Amazon S3 Event Notifications (p. 472).

versioning: Versioning helps you recover accidental overwrites and deletes. We recommend versioning as a best practice to recover objects from being deleted or overwritten by mistake. For more information, see Using Versioning (p. 423).

lifecycle: You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle. For example, you can define a rule to archive objects one year after creation, or delete an object 10 years after creation. For more information, see Object Lifecycle Management.

cross-region replication: Cross-region replication is the automatic, asynchronous copying of objects across buckets in different AWS Regions. For more information, see Cross-Region Replication (p. 492).

tagging: You can add cost allocation tags to your bucket to categorize and track your AWS costs. Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your tags. For more information, see Billing and Reporting of Buckets (p. 96).

requestPayment: By default, the AWS account that creates the bucket (the bucket owner) pays for downloads from the bucket. Using this subresource, the bucket owner can specify that the person requesting the download will be charged for the download. Amazon S3 provides an API for you to manage this subresource. For more information, see Requester Pays Buckets (p. 92).

transfer acceleration: Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. For more information, see Amazon S3 Transfer Acceleration (p. 81).
Bucket Restrictions and Limitations
A bucket is owned by the AWS account that created it. By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your bucket limit by submitting a service limit increase. For information about how to increase your bucket limit, go to AWS Service Limits in the AWS General Reference.
Bucket ownership is not transferable; however, if a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused. So if you want to use the same bucket name, don't delete the bucket.
There is no limit to the number of objects that can be stored in a bucket and no difference in performance whether you use many buckets or just a few. You can store all of your objects in a single bucket, or you can organize them across several buckets.
You cannot create a bucket within another bucket.
The high-availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because bucket operations work against a centralized, global resource space, it is not appropriate to create or delete buckets on the high-availability code path of your application. It is better to create or delete buckets in a separate initialization or setup routine that you run less often.
Note
If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket name is already taken, as in the sketch that follows this note.
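A minimal sketch of that logic with the AWS SDK for Java follows; it retries with a numeric suffix when the create call reports that the name is taken. The base name and suffix scheme are illustrative only, and the fragment assumes the same client setup and imports as the Java example in Examples of Creating a Bucket (p. 64).

// Illustrative only: retry bucket creation with a new name on a conflict.
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
String baseName = "examplebucket";
for (int attempt = 0; attempt < 5; attempt++) {
    String candidate = (attempt == 0) ? baseName : baseName + "-" + attempt;
    try {
        s3client.createBucket(candidate);
        System.out.println("Created bucket: " + candidate);
        break;
    } catch (AmazonS3Exception e) {
        // Another account already owns this name; try the next candidate.
        if (!"BucketAlreadyExists".equals(e.getErrorCode())) {
            throw e;
        }
    }
}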
Rules for Bucket Naming
We recommend that all bucket names comply with DNS naming conventions. These conventions are enforced in all regions except for the US East (N. Virginia) region.
Note
If you use the AWS Management Console, bucket names must be DNS-compliant in all regions.
DNS-compliant bucket names allow customers to benefit from new features and operational improvements, as well as providing support for virtual-host style access to buckets. While the US East (N. Virginia) region currently allows non-compliant DNS bucket naming, we are moving to the same DNS-compliant bucket naming convention for the US East (N. Virginia) region in the coming months. This will ensure a single, consistent naming approach for Amazon S3 buckets. The rules for DNS-compliant bucket names are:
• Bucket names must be at least 3 and no more than 63 characters long.
• Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
• Bucket names must not be formatted as an IP address (e.g., 192.168.5.4).
• When using virtual hosted–style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (.) in bucket names.
The following examples are valid bucket names:
• myawsbucket
• my.aws.bucket
• myawsbucket.1
The following examples are invalid bucket names:

Invalid Bucket Name    Comment
.myawsbucket           Bucket name cannot start with a period (.).
myawsbucket.           Bucket name cannot end with a period (.).
my..examplebucket      There can be only one period between labels.
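The DNS-compliance rules above can be checked mechanically. The following Java sketch is a hypothetical validator (not an AWS API) that applies the length, label, and IP-address rules to a candidate name:

import java.util.regex.Pattern;

public class BucketNameValidator {
    // Each label must start and end with a lowercase letter or digit; hyphens may appear inside.
    private static final Pattern LABEL =
            Pattern.compile("[a-z0-9]([a-z0-9-]*[a-z0-9])?");
    private static final Pattern IP_ADDRESS =
            Pattern.compile("\\d{1,3}(\\.\\d{1,3}){3}");

    public static boolean isDnsCompliant(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) return false;
        if (IP_ADDRESS.matcher(name).matches()) return false; // e.g. 192.168.5.4
        for (String label : name.split("\\.", -1)) {
            if (!LABEL.matcher(label).matches()) return false; // also rejects empty labels
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isDnsCompliant("myawsbucket"));   // true
        System.out.println(isDnsCompliant(".myawsbucket"));  // false: leading period
        System.out.println(isDnsCompliant("192.168.5.4"));   // false: IP address
    }
}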
Challenges with Non-DNS-Compliant Bucket Names
The US East (N. Virginia) region currently allows more relaxed standards for bucket naming, which can result in a bucket name that is not DNS-compliant. For example, MyAWSBucket is a valid bucket name, even though it contains uppercase letters. If you try to access this bucket by using a virtual hosted–style request (http://MyAWSBucket.s3.amazonaws.com/yourobject), the URL resolves to the bucket myawsbucket and not the bucket MyAWSBucket. In response, Amazon S3 will return a bucket not found error.
To avoid this problem, we recommend as a best practice that you always use DNS-compliant bucket names, regardless of the region in which you create the bucket. For more information about virtual hosted–style access to your buckets, see Virtual Hosting of Buckets (p. 50).
The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must not contain periods (.). For more information about transfer acceleration, see Amazon S3 Transfer Acceleration (p. 81).
The rules for bucket names in the US East (N. Virginia) region allow bucket names to be as long as 255 characters and to contain any combination of uppercase letters, lowercase letters, numbers, periods (.), hyphens (-), and underscores (_).
Examples of Creating a Bucket
Topics
• Using the Amazon S3 Console (p. 65)
• Using the AWS SDK for Java (p. 65)
• Using the AWS SDK for .NET (p. 66)
• Using the AWS SDK for Ruby Version 2 (p. 67)
• Using Other AWS SDKs (p. 67)
This section provides code examples of creating a bucket programmatically using the AWS SDKs for Java, .NET, and Ruby. The code examples perform the following tasks:
• Create a bucket, if it does not exist: The examples create a bucket as follows.
• Create a client by explicitly specifying an AWS Region (the example uses the eu-west-1 region). Accordingly, the client communicates with Amazon S3 using the s3-eu-west-1.amazonaws.com endpoint. You can specify any other AWS Region. For a list of available AWS Regions, see Regions and Endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name. The create bucket request does not specify another AWS Region; therefore, the client sends a request to Amazon S3 to create the bucket in the region you specified when creating the client.
Note
If you specify a region in your create bucket request that conflicts with the region you specify when you create the client, you might get an error. For more information, see Creating a Bucket (p. 59).
The SDK libraries send the PUT Bucket request to Amazon S3 (see PUT Bucket) to create the bucket.
• Retrieve bucket location information: Amazon S3 stores bucket location information in the location subresource associated with the bucket. The SDK libraries send the GET Bucket location request (see GET Bucket location) to retrieve this information.
Using the Amazon S3 Console
To create a bucket using the Amazon S3 console, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
Using the AWS SDK for Java
For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;

public class CreateBucket {
    private static String bucketName = "*** bucket name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3client.setRegion(Region.getRegion(Regions.US_WEST_1));

        try {
            if (!(s3client.doesBucketExist(bucketName))) {
                // Note that CreateBucketRequest does not specify region. So
                // the bucket is created in the region specified in the client.
                s3client.createBucket(new CreateBucketRequest(bucketName));
            }
            // Get location.
            String bucketLocation = s3client.getBucketLocation(
                    new GetBucketLocationRequest(bucketName));
            System.out.println("bucket location = " + bucketLocation);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Using the AWS SDK for .NET
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class CreateBucket
    {
        static string bucketName = "*** bucket name ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest1))
            {
                if (!(AmazonS3Util.DoesS3BucketExist(client, bucketName)))
                {
                    CreateABucket(client);
                }
                // Retrieve bucket location.
                string bucketLocation = FindBucketLocation(client);
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static string FindBucketLocation(IAmazonS3 client)
        {
            string bucketLocation;
            GetBucketLocationRequest request = new GetBucketLocationRequest()
            {
                BucketName = bucketName
            };
            GetBucketLocationResponse response = client.GetBucketLocation(request);
            bucketLocation = response.Location.ToString();
            return bucketLocation;
        }

        static void CreateABucket(IAmazonS3 client)
        {
            try
            {
                PutBucketRequest putRequest1 = new PutBucketRequest
                {
                    BucketName = bucketName,
                    UseClientRegion = true
                };

                PutBucketResponse response1 = client.PutBucket(putRequest1);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Using the AWS SDK for Ruby Version 2
For information about how to create and test a working sample, see Using the AWS SDK for Ruby Version 2 (p. 568).

require 'aws-sdk'

s3 = Aws::S3::Client.new(region: 'us-west-1')
s3.create_bucket(bucket: 'bucket-name')
Using Other AWS SDKs
For information about using other AWS SDKs, go to Sample Code and Libraries.
Deleting or Emptying a Bucket
Topics
• Delete a Bucket (p. 68)
• Empty a Bucket (p. 71)
It is easy to delete an empty bucket; however, in some situations, you may need to delete or empty a bucket that contains objects. In this section, we'll explain how to delete objects in an unversioned bucket (the default) and how to delete object versions and delete markers in a bucket that has versioning enabled. For more information about versioning, see Using Versioning (p. 423). In some situations, you may choose to empty a bucket instead of deleting it. This section explains various options you can use to delete or empty a bucket that contains objects.
Delete a Bucket
You can delete a bucket and its content programmatically using the AWS SDKs. You can also use lifecycle configuration on a bucket to empty its content and then delete the bucket. There are additional options, such as using the Amazon S3 console and the AWS CLI, but there are limitations on these methods based on the number of objects in your bucket and the bucket's versioning status.
Topics
• Delete a Bucket Using the Amazon S3 Console (p. 68)
• Delete a Bucket Using the AWS CLI (p. 68)
• Delete a Bucket Using Lifecycle Configuration (p. 68)
• Delete a Bucket Using the AWS SDKs (p. 69)
Delete a Bucket Using the Amazon S3 Console
The Amazon S3 console supports deleting a bucket that may or may not be empty. If the bucket is not empty, the Amazon S3 console supports deleting a bucket containing up to 100,000 objects. If your bucket contains more than 100,000 objects, you can use other options, such as the AWS CLI, bucket lifecycle configuration, or the AWS SDKs.
In the Amazon S3 console, open the context (right-click) menu on the bucket and choose Delete Bucket or Empty Bucket.
Delete a Bucket Using the AWS CLI
You can delete a bucket that contains objects using the AWS CLI only if the bucket does not have versioning enabled. If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command with the --force parameter to remove a non-empty bucket. This command deletes all objects first and then deletes the bucket.

aws s3 rb s3://bucket-name --force

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the AWS Command Line Interface User Guide.
To delete a non-empty bucket that does not have versioning enabled, you have the following options:
• Delete the bucket programmatically using the AWS SDK.
• First, delete all of the objects using the bucket's lifecycle configuration, and then delete the empty bucket using the Amazon S3 console.
Delete a Bucket Using Lifecycle Configuration
You can configure lifecycle on your bucket to expire objects; Amazon S3 then deletes expired objects. You can add lifecycle configuration rules to expire all or a subset of objects with a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.
After Amazon S3 deletes all of the objects in your bucket, you can delete the bucket or keep it.
Important
If you just want to empty the bucket and not delete it, make sure you remove the lifecycle configuration rule you added to empty the bucket so that any new objects you create in the bucket will remain in the bucket.
For more information, see Object Lifecycle Management (p. 109) and Expiring Objects: General Considerations (p. 112).
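As a rough sketch of such a rule with the AWS SDK for Java, the following fragment sets a single lifecycle rule that expires every object one day after creation. It assumes the same client setup and imports as the earlier Java examples; the rule ID is arbitrary.

// Illustrative sketch: expire all objects one day after creation.
BucketLifecycleConfiguration.Rule expireRule =
    new BucketLifecycleConfiguration.Rule()
        .withId("empty-bucket-rule")    // arbitrary rule name
        .withPrefix("")                 // an empty prefix matches every object
        .withExpirationInDays(1)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

s3client.setBucketLifecycleConfiguration(bucketName,
    new BucketLifecycleConfiguration().withRules(expireRule));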
Delete a Bucket Using the AWS SDKs
You can use the AWS SDKs to delete a bucket. The following sections provide examples of how to delete a bucket using the AWS SDK for .NET and Java. First, the code deletes objects in the bucket, and then it deletes the bucket. For information about other AWS SDKs, see Tools for Amazon Web Services.
Delete a Bucket Using the AWS SDK for Java
The following Java example deletes a non-empty bucket. First, the code deletes all objects, and then it deletes the bucket. The code example also works for buckets with versioning enabled.
For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;
import java.util.Iterator;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class DeleteBucketAndContent {
    private static String bucketName = "*** bucket name to delete ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3client.setRegion(Region.getRegion(Regions.AWSRegionWhereBucketResides));

        try {
            System.out.println("Deleting S3 bucket: " + bucketName);
            ObjectListing objectListing = s3client.listObjects(bucketName);

            while (true) {
                for (Iterator<?> iterator =
                        objectListing.getObjectSummaries().iterator(); iterator.hasNext(); ) {
                    S3ObjectSummary objectSummary = (S3ObjectSummary) iterator.next();
                    s3client.deleteObject(bucketName, objectSummary.getKey());
                }

                if (objectListing.isTruncated()) {
                    objectListing = s3client.listNextBatchOfObjects(objectListing);
                } else {
                    break;
                }
            }

            VersionListing list = s3client.listVersions(
                    new ListVersionsRequest().withBucketName(bucketName));
            for (Iterator<?> iterator =
                    list.getVersionSummaries().iterator(); iterator.hasNext(); ) {
                S3VersionSummary s = (S3VersionSummary) iterator.next();
                s3client.deleteVersion(bucketName, s.getKey(), s.getVersionId());
            }

            s3client.deleteBucket(bucketName);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Delete a Bucket Using the AWS SDK for .NET
The following .NET example deletes a non-empty bucket. First, the code deletes all objects, and then it deletes the bucket. The code example also works for buckets with versioning enabled.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class DeleteBucketAndContent
    {
        static string bucketName = "*** bucket name to delete ***";

        public static void Main(string[] args)
        {
            try
            {
                using (var client = new
                    AmazonS3Client(Amazon.RegionEndpoint.AWSRegionWhereBucketResides))
                {
                    AmazonS3Util.DeleteS3BucketWithObjects(client, bucketName);
                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Empty a Bucket
You can empty a bucket's content (that is, delete all content, but keep the bucket) programmatically using the AWS SDK. You can also specify lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete them. There are additional options, such as using the Amazon S3 console and the AWS CLI, but there are limitations on these methods based on the number of objects in your bucket and the bucket's versioning status.
Topics
• Empty a Bucket Using the Amazon S3 console (p. 72)
• Empty a Bucket Using the AWS CLI (p. 72)
• Empty a Bucket Using Lifecycle Configuration (p. 72)
• Empty a Bucket Using the AWS SDKs (p. 73)
Empty a Bucket Using the Amazon S3 console
The Amazon S3 console supports emptying your bucket, provided that the bucket contains less than 100,000 objects. The Amazon S3 console returns an error if you attempt to empty a bucket that contains more than 100,000 objects. For example, if your bucket has versioning enabled, you can have one object with 101,000 object versions, and you will not be able to empty this bucket using the Amazon S3 console.
In the Amazon S3 console, open the context (right-click) menu on the bucket and choose Empty Bucket.
Empty a Bucket Using the AWS CLI
You can empty a bucket using the AWS CLI only if the bucket does not have versioning enabled. If your bucket does not have versioning enabled, you can use the rm (remove) AWS CLI command with the --recursive parameter to empty a bucket (or remove a subset of objects with a specific key name prefix).
The following rm command removes objects with the key name prefix doc, for example, doc/doc1 and doc/doc2.

aws s3 rm s3://bucket-name/doc --recursive

Use the following command to remove all objects without specifying a prefix.

aws s3 rm s3://bucket-name --recursive

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the AWS Command Line Interface User Guide.
Note
You cannot remove objects from a bucket with versioning enabled; Amazon S3 adds a delete marker when you delete an object, which is what this command will do. For more information about versioning, see Using Versioning (p. 423).
To empty a bucket with versioning enabled, you have the following options:
• Delete the bucket programmatically using the AWS SDK.
• Use the bucket's lifecycle configuration to request that Amazon S3 delete the objects.
• Use the Amazon S3 console (you can only use this option if your bucket contains less than 100,000 items, including both object versions and delete markers).
Empty a Bucket Using Lifecycle Configuration
You can configure lifecycle on your bucket to expire objects and request that Amazon S3 delete expired objects. You can add lifecycle configuration rules to expire all or a subset of objects with a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.
Caution
After your objects expire, Amazon S3 deletes the expired objects. If you just want to empty the bucket and not delete it, make sure you remove the lifecycle configuration rule you added to empty the bucket so that any new objects you create in the bucket will remain in the bucket.
For more information, see Object Lifecycle Management (p. 109) and Expiring Objects: General Considerations (p. 112).
Empty a Bucket Using the AWS SDKs
You can use the AWS SDKs to empty a bucket or remove a subset of objects with a specific key name prefix.
For an example of how to empty a bucket using the AWS SDK for Java, see Delete a Bucket Using the AWS SDK for Java (p. 69). The code deletes all objects, regardless of whether the bucket has versioning enabled or not, and then it deletes the bucket. To just empty the bucket, make sure you remove the statement that deletes the bucket.
For more information about using other AWS SDKs, see Tools for Amazon Web Services.
Managing Bucket Website Configuration
Topics
• Managing Websites with the AWS Management Console (p. 73)
• Managing Websites with the AWS SDK for Java (p. 73)
• Managing Websites with the AWS SDK for .NET (p. 76)
• Managing Websites with the AWS SDK for PHP (p. 79)
• Managing Websites with the REST API (p. 81)
You can host static websites in Amazon S3 by configuring your bucket for website hosting. For more information, see Hosting a Static Website on Amazon S3 (p. 449). There are several ways you can manage your bucket's website configuration. You can use the AWS Management Console to manage configuration without writing any code. You can programmatically create, update, and delete the website configuration by using the AWS SDKs. The SDKs provide wrapper classes around the Amazon S3 REST API. If your application requires it, you can send REST API requests directly from your application.
Managing Websites with the AWS Management Console
For more information, see Configure a Bucket for Website Hosting (p. 452).
Managing Websites with the AWS SDK for Java
The following tasks guide you through using the Java classes to manage website configuration on your bucket. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p. 449).
Managing Website Configuration
1 Create an instance of the AmazonS3 class.
2 To add website configuration to a bucket, execute the AmazonS3.setBucketWebsiteConfiguration method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. You must provide the index document, but the error document is optional. You provide website configuration information by creating a BucketWebsiteConfiguration object.
To retrieve website configuration, execute the AmazonS3.getBucketWebsiteConfiguration method by providing the bucket name.
To delete your bucket website configuration, execute the AmazonS3.deleteBucketWebsiteConfiguration method by providing the bucket name. After you remove the website configuration, the bucket is no longer available from the website endpoint. For more information, see Website Endpoints (p. 450).
The following Java code sample demonstrates the preceding tasks.

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

// Add website configuration.
s3Client.setBucketWebsiteConfiguration(bucketName,
        new BucketWebsiteConfiguration(indexDoc, errorDoc));

// Get website configuration.
BucketWebsiteConfiguration bucketWebsiteConfiguration =
        s3Client.getBucketWebsiteConfiguration(bucketName);

// Delete website configuration.
s3Client.deleteBucketWebsiteConfiguration(bucketName);
Example
The following Java code example adds a website configuration to the specified bucket, retrieves it, and deletes the website configuration. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;

public class WebsiteConfiguration {
    private static String bucketName = "*** bucket name ***";
    private static String indexDoc   = "*** index document name ***";
    private static String errorDoc   = "*** error document name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        try {
            // Get existing website configuration, if any.
            getWebsiteConfig(s3Client);

            // Set new website configuration.
            s3Client.setBucketWebsiteConfiguration(bucketName,
                    new BucketWebsiteConfiguration(indexDoc, errorDoc));

            // Verify (get website configuration again).
            getWebsiteConfig(s3Client);

            // Delete.
            s3Client.deleteBucketWebsiteConfiguration(bucketName);

            // Verify (get website configuration again).
            getWebsiteConfig(s3Client);

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means " +
                    "the client encountered " +
                    "a serious internal problem while trying to " +
                    "communicate with Amazon S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    private static BucketWebsiteConfiguration getWebsiteConfig(AmazonS3 s3Client) {
        System.out.println("Get website config");

        // 1. Get website config.
        BucketWebsiteConfiguration bucketWebsiteConfiguration =
                s3Client.getBucketWebsiteConfiguration(bucketName);
        if (bucketWebsiteConfiguration == null) {
            System.out.println("No website config.");
        } else {
            System.out.println("Index doc: " +
                    bucketWebsiteConfiguration.getIndexDocumentSuffix());
            System.out.println("Error doc: " +
                    bucketWebsiteConfiguration.getErrorDocument());
        }
        return bucketWebsiteConfiguration;
    }
}
Managing Websites with the AWS SDK for .NET
The following tasks guide you through using the .NET classes to manage website configuration on your bucket. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p. 449).
Managing Bucket Website Configuration
1 Create an instance of the AmazonS3Client class.
2 To add website configuration to a bucket, execute the PutBucketWebsite method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. You must provide the index document, but the error document is optional. You provide this information by creating a PutBucketWebsiteRequest object.
To retrieve website configuration, execute the GetBucketWebsite method by providing the bucket name.
To delete your bucket website configuration, execute the DeleteBucketWebsite method by providing the bucket name. After you remove the website configuration, the bucket is no longer available from the website endpoint. For more information, see Website Endpoints (p. 450).
The following C# code sample demonstrates the preceding tasks.

static IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

// Add website configuration.
PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
{
    BucketName = bucketName,
    WebsiteConfiguration = new WebsiteConfiguration()
    {
        IndexDocumentSuffix = indexDocumentSuffix,
        ErrorDocument = errorDocument
    }
};
client.PutBucketWebsite(putRequest);

// Get bucket website configuration.
GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
{
    BucketName = bucketName
};
GetBucketWebsiteResponse getResponse = client.GetBucketWebsite(getRequest);

// Print configuration data.
Console.WriteLine("Index document: {0}",
    getResponse.WebsiteConfiguration.IndexDocumentSuffix);
Console.WriteLine("Error document: {0}",
    getResponse.WebsiteConfiguration.ErrorDocument);

// Delete website configuration.
DeleteBucketWebsiteRequest deleteRequest = new DeleteBucketWebsiteRequest()
{
    BucketName = bucketName
};
client.DeleteBucketWebsite(deleteRequest);
Example
The following C# code example adds a website configuration to the specified bucket. The configuration specifies both the index document and the error document names. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class AddWebsiteConfig
    {
        static string bucketName          = "*** Provide existing bucket name ***";
        static string indexDocumentSuffix = "*** Provide index document name ***";
        static string errorDocument       = "*** Provide error document name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
            {
                Console.WriteLine("Adding website configuration");
                AddWebsiteConfiguration(bucketName, indexDocumentSuffix, errorDocument);
            }

            // Get bucket website configuration.
            GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
            {
                BucketName = bucketName
            };
            GetBucketWebsiteResponse getResponse = client.GetBucketWebsite(getRequest);

            // Print configuration data.
            Console.WriteLine("Index document: {0}",
                getResponse.WebsiteConfiguration.IndexDocumentSuffix);
            Console.WriteLine("Error document: {0}",
                getResponse.WebsiteConfiguration.ErrorDocument);

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void AddWebsiteConfiguration(string bucketName,
                                            string indexDocumentSuffix,
                                            string errorDocument)
        {
            try
            {
                PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
                {
                    BucketName = bucketName,
                    WebsiteConfiguration = new WebsiteConfiguration()
                    {
                        IndexDocumentSuffix = indexDocumentSuffix,
                        ErrorDocument = errorDocument
                    }
                };
                client.PutBucketWebsite(putRequest);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("Sign up for service at http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error:{0} occurred when adding website configuration. Message:'{1}",
                        amazonS3Exception.ErrorCode,
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Managing Websites with the AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to configure and manage an Amazon S3 bucket for website hosting. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p. 449).
Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.
The following tasks guide you through using the PHP SDK classes to configure and manage an Amazon S3 bucket for website hosting.
Configuring a Bucket for Website Hosting
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2. To configure a bucket as a website, execute the Aws\S3\S3Client::putBucketWebsite() method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. If you don't provide these document names, this method adds the index.html and error.html default names to the website configuration. You must verify that these documents are present in the bucket.
3. To retrieve existing bucket website configuration, execute the Aws\S3\S3Client::getBucketWebsite() method.
4. To delete website configuration from a bucket, execute the Aws\S3\S3Client::deleteBucketWebsite() method, passing the bucket name as a parameter. If you remove the website configuration, the bucket is no longer accessible from the website endpoints.
The following PHP code sample demonstrates the preceding tasks.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

// 1. Instantiate the client.
$s3 = S3Client::factory();

// 2. Add website configuration.
$result = $s3->putBucketWebsite(array(
    'Bucket'        => $bucket,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));

// 3. Retrieve website configuration.
$result = $s3->getBucketWebsite(array(
    'Bucket' => $bucket,
));
echo $result->getPath('IndexDocument/Suffix');

// 4. Delete website configuration.
$result = $s3->deleteBucketWebsite(array(
    'Bucket' => $bucket,
));
Example of Configuring an Amazon S3 Bucket for Website Hosting
The following PHP code example first adds a website configuration to the specified bucket. The create_website_config method explicitly provides the index document and error document names. The sample also retrieves the website configuration and prints the response. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p. 449).
For instructions on how to create and test a working sample, see Using the AWS SDK for PHP and Running PHP Examples (p. 566).
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// 1) Add website configuration.
$result = $s3->putBucketWebsite(array(
    'Bucket'        => $bucket,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));

// 2) Retrieve website configuration.
$result = $s3->getBucketWebsite(array(
    'Bucket' => $bucket,
));
echo $result->getPath('IndexDocument/Suffix');

// 3) Delete website configuration.
$result = $s3->deleteBucketWebsite(array(
    'Bucket' => $bucket,
));
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::deleteBucketWebsite() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::getBucketWebsite() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putBucketWebsite() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Managing Websites with the REST API
You can use the AWS Management Console or the AWS SDK to configure a bucket as a website. However, if your application requires it, you can send REST requests directly. For more information, see the following sections in the Amazon Simple Storage Service API Reference:
• PUT Bucket website
• GET Bucket website
• DELETE Bucket website
Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
When using Transfer Acceleration, additional data transfer charges may apply. For more information about pricing, see Amazon S3 Pricing.
Topics
• Why Use Amazon S3 Transfer Acceleration? (p. 81)
• Getting Started with Amazon S3 Transfer Acceleration (p. 82)
• Requirements for Using Amazon S3 Transfer Acceleration (p. 83)
• Amazon S3 Transfer Acceleration Examples (p. 83)
Why Use Amazon S3 Transfer Acceleration?
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
• You have customers that upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You underutilize the available bandwidth over the Internet when uploading to Amazon S3.
For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.
Using the Amazon S3 Transfer Acceleration Speed Comparison Tool
You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and non-accelerated upload speeds across Amazon S3 regions. The Speed Comparison tool uses multipart uploads to transfer a file from your browser to various Amazon S3 regions with and without using Transfer Acceleration.
You can access the Speed Comparison tool using either of the following methods:
• Copy the following URL into your browser window, replacing region with the region that you are using (for example, us-west-2) and yourBucketName with the name of the bucket that you want to evaluate:
http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=yourBucketName
For a list of the regions supported by Amazon S3, see Regions and Endpoints in the Amazon Web Services General Reference.
• Use the Amazon S3 console. For details, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
Getting Started with Amazon S3 Transfer Acceleration
To get started using Amazon S3 Transfer Acceleration, perform the following steps:
1. Enable Transfer Acceleration on a bucket – For your bucket to work with transfer acceleration, the bucket name must conform to DNS naming requirements and must not contain periods ("."). You can enable Transfer Acceleration on a bucket any of the following ways:
• Use the Amazon S3 console. For more information, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Using the AWS SDKs, CLI, and Explorers (p. 560).
2. Transfer data to the acceleration-enabled bucket using the bucketname.s3-accelerate.amazonaws.com endpoint – When uploading to or downloading from the Transfer Acceleration-enabled bucket, you must use the bucket endpoint domain name bucketname.s3-accelerate.amazonaws.com to get accelerated data transfers. You can find the unique Transfer Acceleration endpoint name for your bucket in the Amazon S3 management console.
Note
You can continue to use the regular endpoint in addition to the accelerate endpoint.
For example, let's say you currently have a REST API application using PUT Object that uses the host name mybucket.s3.amazonaws.com in the PUT request. To accelerate the PUT, you simply change the host name in your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard upload speed, simply change the name back to mybucket.s3.amazonaws.com.
You can use the new accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of how to use an accelerate endpoint client configuration flag, see Amazon S3 Transfer Acceleration Examples (p. 83).
You can use all of the Amazon S3 operations through the Transfer Acceleration endpoint except for the following operations: GET Service (list buckets), PUT Bucket (create bucket), and DELETE Bucket. Also, Amazon S3 Transfer Acceleration does not support cross-region copies using PUT Object - Copy.
Requirements for Using Amazon S3 Transfer Acceleration
The following are the requirements for using Transfer Acceleration on an S3 bucket:
• Transfer Acceleration is only supported on virtual-style requests. For more information about virtual-style requests, see Making Requests Using the REST API (p. 49).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (".").
• Transfer Acceleration must be enabled on the bucket. After enabling Transfer Acceleration on a bucket, it might take up to thirty minutes before the data transfer speed to the bucket increases.
• You must use the endpoint bucketname.s3-accelerate.amazonaws.com to access the enabled bucket.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can assign permissions to other users to allow them to set the acceleration state on a bucket. The s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more information about these permissions, see Permissions Related to Bucket Subresource Operations (p. 314) and Managing Access Permissions to Your Amazon S3 Resources (p. 266).
• Transfer Acceleration is not Health Insurance Portability and Accountability Act (HIPAA) compliant.
Important
Transfer Acceleration uses AWS edge infrastructure (edge locations), which is not Health Insurance Portability and Accountability Act (HIPAA) compliant. If your organization has protected health information (PHI) workloads covered under the HIPAA Business Associate Agreement (BAA), you can't use Transfer Acceleration. For more information, contact AWS Support at Contact Us.
Related Topics
• GET Bucket accelerate
• PUT Bucket accelerate
Amazon S3 Transfer Acceleration Examples
This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use the acceleration endpoint for the enabled bucket. Some of the AWS SDK supported languages (for example, Java and .NET) use an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For more information about Transfer Acceleration, see Amazon S3 Transfer Acceleration (p. 81).
Topics
• Using the Amazon S3 Console (p. 84)
• Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI) (p. 84)
• Using Transfer Acceleration from the AWS SDK for Java (p. 85)
• Using Transfer Acceleration from the AWS SDK for .NET (p. 88)
• Using Other AWS SDKs (p. 92)
Using the Amazon S3 Console
For information about enabling Transfer Acceleration on a bucket using the Amazon S3 console, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI)
This section provides examples of AWS CLI commands used for Transfer Acceleration. For instructions on setting up the AWS CLI, see Set Up the AWS CLI (p. 562).
Enabling Transfer Acceleration on a Bucket Using the AWS CLI
Use the AWS CLI put-bucket-accelerate-configuration command to enable or suspend Transfer Acceleration on a bucket. The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use Status=Suspended to suspend Transfer Acceleration.

aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
Using Transfer Acceleration from the AWS CLI
Setting the configuration value use_accelerate_endpoint to true in a profile in your AWS Config File will direct all Amazon S3 requests made by s3 and s3api AWS CLI commands to the accelerate endpoint: s3-accelerate.amazonaws.com. Transfer Acceleration must be enabled on your bucket to use the accelerate endpoint.
All requests are sent using the virtual style of bucket addressing: mybucket.s3-accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests will not be sent to the accelerate endpoint, as the endpoint does not support those operations. For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration.
The following example sets use_accelerate_endpoint to true in the default profile:

aws configure set default.s3.use_accelerate_endpoint true

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use either one of the following two methods:
• You can use the accelerate endpoint per command by setting the --endpoint-url parameter to https://s3-accelerate.amazonaws.com or http://s3-accelerate.amazonaws.com for any s3 or s3api command.
• You can set up separate profiles in your AWS Config File. For example, create one profile that sets use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint. When you execute a command, specify which profile you want to use, depending upon whether or not you want to use the accelerate endpoint, as shown in the sketch following this list.
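For example, a separate profile might be set up and used as follows. This is a minimal sketch; the profile name accelerate is illustrative and not part of this guide's samples:

aws configure set profile.accelerate.s3.use_accelerate_endpoint true
aws s3 cp file.txt s3://bucketname/keyname --profile accelerate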
AWS CLI Examples of Uploading an Object to a Transfer Acceleration Enabled Bucket
The following example uploads a file to a Transfer Acceleration enabled bucket by using the default profile that has been configured to use the accelerate endpoint:

aws s3 cp file.txt s3://bucketname/keyname --region region

The following example uploads a file to a Transfer Acceleration enabled bucket by using the --endpoint-url parameter to specify the accelerate endpoint:

aws configure set s3.addressing_style virtual
aws s3 cp file.txt s3://bucketname/keyname --region region --endpoint-url http://s3-accelerate.amazonaws.com
Using Transfer Acceleration from the AWS SDK for Java
This section provides examples of using the AWS SDK for Java for Transfer Acceleration. For information about how to create and test a working Java sample, see Testing the Java Code Examples (p. 564).
Java Example 1: Enable Amazon S3 Transfer Acceleration on a Bucket
The following Java example shows how to enable Transfer Acceleration on a bucket.
import java.io.IOException;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class BucketAccelerationConfiguration {

    public static String bucketName = "*** Provide bucket name ***";
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {

        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));

        // 1. Enable bucket for Amazon S3 Transfer Acceleration.
        s3Client.setBucketAccelerateConfiguration(
                new SetBucketAccelerateConfigurationRequest(bucketName,
                        new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));

        // 2. Get the acceleration status of the bucket.
        String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
                new GetBucketAccelerateConfigurationRequest(bucketName)).getStatus();

        System.out.println("Acceleration status = " + accelerateStatus);
    }
}
Java Example 2: Uploading a Single Object to a Transfer Acceleration Enabled Bucket
The following Java example shows how to use the accelerate endpoint to upload a single object.
import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class AcceleratedUploadSingleObject {

    private static String bucketName = "*** Provide bucket name ***";
    private static String keyName = "*** Provide key name ***";
    private static String uploadFileName = "*** Provide file name with full path ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));

        // Use the Amazon S3 Transfer Acceleration endpoint.
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setAccelerateModeEnabled(true).build());

        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);
            s3Client.putObject(new PutObjectRequest(bucketName, keyName, file));

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Java Example 3: Multipart Upload to a Transfer Acceleration Enabled Bucket
The following Java example shows how to use the accelerate endpoint for a multipart upload.
import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class AccelerateMultipartUploadUsingHighLevelAPI {

    private static String EXISTING_BUCKET_NAME = "*** Provide bucket name ***";
    private static String KEY_NAME = "*** Provide key name ***";
    private static String FILE_PATH = "*** Provide file name with full path ***";

    public static void main(String[] args) throws Exception {

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.configureRegion(Regions.US_WEST_2);

        // Use the Amazon S3 Transfer Acceleration endpoint.
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setAccelerateModeEnabled(true).build());

        TransferManager tm = new TransferManager(s3Client);
        System.out.println("TransferManager");

        // TransferManager processes all transfers asynchronously,
        // so this call will return immediately.
        Upload upload = tm.upload(EXISTING_BUCKET_NAME, KEY_NAME, new File(FILE_PATH));
        System.out.println("Upload");

        try {
            // Or you can block and wait for the upload to finish.
            upload.waitForCompletion();
            System.out.println("Upload complete");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
Using Transfer Acceleration from the AWS SDK for .NET
This section provides examples of using the AWS SDK for .NET for Transfer Acceleration. For information about how to create and test a working .NET sample, see Running the Amazon S3 .NET Code Examples (p. 566).
.NET Example 1: Enable Transfer Acceleration on a Bucket
The following .NET example shows how to enable Transfer Acceleration on a bucket.
using System;
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class SetTransferAccelerateState
    {
        private static string bucketName = "Provide bucket name";

        public static void Main(string[] args)
        {
            using (var s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
            {
                try
                {
                    EnableTransferAccelerationOnBucket(s3Client);
                    BucketAccelerateStatus bucketAccelerationStatus = GetBucketAccelerateState(s3Client);
                    Console.WriteLine("Acceleration state = '{0}' ", bucketAccelerationStatus);
                }
                catch (AmazonS3Exception amazonS3Exception)
                {
                    if (amazonS3Exception.ErrorCode != null &&
                        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                    {
                        Console.WriteLine("Check the provided AWS Credentials.");
                        Console.WriteLine("To sign up for the service, go to http://aws.amazon.com/s3");
                    }
                    else
                    {
                        Console.WriteLine(
                            "Error occurred. Message:'{0}' when setting transfer acceleration",
                            amazonS3Exception.Message);
                    }
                }
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableTransferAccelerationOnBucket(IAmazonS3 s3Client)
        {
            PutBucketAccelerateConfigurationRequest request = new PutBucketAccelerateConfigurationRequest
            {
                BucketName = bucketName,
                AccelerateConfiguration = new AccelerateConfiguration
                {
                    Status = BucketAccelerateStatus.Enabled
                }
            };
            PutBucketAccelerateConfigurationResponse response = s3Client.PutBucketAccelerateConfiguration(request);
        }

        static BucketAccelerateStatus GetBucketAccelerateState(IAmazonS3 s3Client)
        {
            GetBucketAccelerateConfigurationRequest request = new GetBucketAccelerateConfigurationRequest
            {
                BucketName = bucketName
            };
            GetBucketAccelerateConfigurationResponse response = s3Client.GetBucketAccelerateConfiguration(request);
            return response.Status;
        }
    }
}
.NET Example 2: Uploading a Single Object to a Transfer Acceleration Enabled Bucket
The following .NET example shows how to use the accelerate endpoint to upload a single object.
using System;
using System.Collections.Generic;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    public class UploadtoAcceleratedBucket
    {
        private static RegionEndpoint TestRegionEndpoint = RegionEndpoint.USWest2;
        private static string bucketName = "Provide bucket name";
        static string keyName = "*** Provide key name ***";
        static string filePath = "*** Provide filename of file to upload with the full path ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(new AmazonS3Config
            {
                RegionEndpoint = TestRegionEndpoint,
                UseAccelerateEndpoint = true
            }))
            {
                WriteObject(client);
                Console.WriteLine("Press any key to continue...");
                Console.ReadKey();
            }
        }

        static void WriteObject(IAmazonS3 client)
        {
            try
            {
                PutObjectRequest putRequest = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    FilePath = filePath
                };
                client.PutObject(putRequest);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                     amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
.NET Example 3: Multipart Upload to a Transfer Acceleration Enabled Bucket
The following .NET example shows how to use the accelerate endpoint for a multipart upload.
using System;
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class AcceleratedUploadFileMPUHAPI
    {
        private static RegionEndpoint TestRegionEndpoint = RegionEndpoint.USWest2;
        private static string existingBucketName = "Provide bucket name";
        private static string keyName = "*** Provide your object key ***";
        private static string filePath = "*** Provide file name with full path ***";

        static void Main(string[] args)
        {
            try
            {
                var client = new AmazonS3Client(new AmazonS3Config
                {
                    RegionEndpoint = TestRegionEndpoint,
                    UseAccelerateEndpoint = true
                });
                using (TransferUtility fileTransferUtility = new TransferUtility(client))
                {
                    // 1. Upload a file; the file name is used as the object key name.
                    fileTransferUtility.Upload(filePath, existingBucketName);
                    Console.WriteLine("Upload 1 completed");

                    // 2. Specify the object key name explicitly.
                    fileTransferUtility.Upload(filePath, existingBucketName, keyName);
                    Console.WriteLine("Upload 2 completed");

                    // 3. Upload data from a type of System.IO.Stream.
                    using (FileStream fileToUpload =
                        new FileStream(filePath, FileMode.Open, FileAccess.Read))
                    {
                        fileTransferUtility.Upload(fileToUpload, existingBucketName, keyName);
                    }
                    Console.WriteLine("Upload 3 completed");

                    // 4. Specify advanced settings/options.
                    TransferUtilityUploadRequest fileTransferUtilityRequest =
                        new TransferUtilityUploadRequest
                        {
                            BucketName = existingBucketName,
                            FilePath = filePath,
                            StorageClass = S3StorageClass.ReducedRedundancy,
                            PartSize = 6291456, // 6 MB
                            Key = keyName,
                            CannedACL = S3CannedACL.PublicRead
                        };
                    fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                    fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
                    fileTransferUtility.Upload(fileTransferUtilityRequest);
                    Console.WriteLine("Upload 4 completed");
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine("{0} {1}", s3Exception.Message, s3Exception.InnerException);
            }
        }
    }
}
Using Other AWS SDKs
For information about using other AWS SDKs, see Sample Code and Libraries.
Requester Pays Buckets
Topics
• Configure Requester Pays by Using the Amazon S3 Console (p. 93)
• Configure Requester Pays with the REST API (p. 93)
• DevPay and Requester Pays (p. 96)
• Charge Details (p. 96)
In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester, instead of the bucket owner, pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
Typically, you configure buckets to be Requester Pays when you want to share data but not incur charges associated with others accessing the data. You might, for example, use Requester Pays buckets when making available large data sets, such as zip code directories, reference data, geospatial information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.
When the requester assumes an AWS Identity and Access Management (IAM) role prior to making their request, the account to which the role belongs is charged for the request. For more information about IAM roles, see IAM Roles in the IAM User Guide.
After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests, either in the header for POST, GET, and HEAD requests, or as a parameter in a REST request, to show that they understand that they will be charged for the request and the data download.
Requester Pays buckets do not support the following:
• Anonymous requests
• BitTorrent
• SOAP requests
• You cannot use a Requester Pays bucket as the target bucket for end user logging, or vice versa; however, you can turn on end user logging on a Requester Pays bucket where the target bucket is not a Requester Pays bucket.
Configure Requester Pays by Using the Amazon S3 Console
You can configure a bucket for Requester Pays by using the Amazon S3 console.
To configure a bucket for Requester Pays
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Buckets list, click the details icon on the left of the bucket name, and then click Properties to display bucket properties.
3. In the Properties pane, click Requester Pays.
4. Select the Enabled check box.
Configure Requester Pays with the REST API
Topics
• Setting the requestPayment Bucket Configuration (p. 94)
• Retrieving the requestPayment Configuration (p. 94)
• Downloading Objects in Requester Pays Buckets (p. 95)
Setting the requestPayment Bucket Configuration
Only the bucket owner can set the RequestPaymentConfiguration.payer configuration value of a bucket to BucketOwner (the default) or Requester. Setting the requestPayment resource is optional. By default, the bucket is not a Requester Pays bucket.
To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the value to Requester before publishing the objects in the bucket.
To set requestPayment
• Use a PUT request to set the Payer value to Requester on a specified bucket.

PUT ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Content-Length: 173
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>

If the request succeeds, Amazon S3 returns a response similar to the following:

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged: requester

You can set Requester Pays only at the bucket level; you cannot set Requester Pays for specific objects within the bucket.
You can configure a bucket to be BucketOwner or Requester at any time. Realize, however, that there might be a small delay, on the order of minutes, before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should think twice before configuring a bucket to be Requester Pays, especially if the URL has a very long lifetime. The bucket owner is charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
Retrieving the requestPayment Configuration
You can determine the Payer value that is set on a bucket by requesting the resource requestPayment.
To return the requestPayment resource
• Use a GET request to obtain the requestPayment resource, as shown in the following request.

GET ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following:

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>

This response shows that the payer value is set to Requester.
Downloading Objects in Requester Pays Buckets
Because requesters are charged for downloading data from Requester Pays buckets, the requests must contain a special parameter, x-amz-request-payer, which confirms that the requester knows he or she will be charged for the download. To access objects in Requester Pays buckets, requests must include one of the following:
• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header.
• For signed URLs, include x-amz-request-payer=requester in the request.
If the request succeeds and the requester is charged, the response includes the header x-amz-request-charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature calculation. For more information, see Constructing the CanonicalizedAmzHeaders Element (p. 579).
To download objects from a Requester Pays bucket
• Use a GET request to download an object from a Requester Pays bucket, as shown in the following request.

GET /[destinationObject] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer : requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]
If the GET request succeeds and the requester is charged, the response includes x-amz-request-charged:requester.
Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester Pays bucket. For more information, go to Error Responses.
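If you work through an AWS SDK rather than raw REST, the SDK can add the x-amz-request-payer header for you. The following AWS SDK for Java sketch is one way to do this; it assumes an SDK version with Requester Pays support on GetObjectRequest, and the bucket and key names are illustrative, not part of this guide's samples:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class DownloadFromRequesterPaysBucket {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // The third constructor argument marks the request as requester pays,
        // which adds the x-amz-request-payer header on the wire.
        S3Object object = s3Client.getObject(
                new GetObjectRequest("examplebucket", "data/file.txt", true));
        System.out.println("Downloaded " +
                object.getObjectMetadata().getContentLength() + " bytes");
        object.close();
    }
}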
DevPay and Requester Pays
You can use Amazon DevPay to sell content that is stored in your Requester Pays bucket. For more information, go to Using Amazon S3 Requester Pays with DevPay.
Charge Details
The charge for successful Requester Pays requests is straightforward: the requester pays for the data transfer and the request; the bucket owner pays for the data storage. However, the bucket owner is charged for the request under the following conditions:
• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.
Buckets and Access Control
Each bucket has an associated access control policy. This policy governs the creation, deletion, and enumeration of objects within the bucket. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
Billing and Reporting of Buckets
Fees for object storage and network data transfer are always billed to the owner of the bucket that contains the object, unless the bucket was created as a Requester Pays bucket.
The reporting tools available at the AWS developer portal organize your Amazon S3 usage reports by bucket. For more information about cost considerations, see Amazon S3 Pricing.
Cost Allocation Tagging
You can use cost allocation tagging to label Amazon S3 buckets so that you can more easily track their cost against projects or other criteria.
Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging in About AWS Billing and Cost Management.
A cost allocation tag is a name-value pair that you define and associate with an Amazon S3 bucket. We recommend that you use a consistent set of tag keys to make it easier to track costs associated with your Amazon S3 buckets.
Each Amazon S3 bucket has a tag set, which contains all the tags that are assigned to that bucket. A tag set can contain as many as ten tags, or it can be empty.
If you add a tag that has the same key as an existing tag on a bucket, the new value overwrites the old value.
AWS does not apply any semantic meaning to your tags; tags are interpreted strictly as character strings. AWS does not automatically set any tags on buckets.
You can use the Amazon S3 console, the CLI, or the Amazon S3 API to add, list, edit, or delete tags. For more information about creating tags in the console, go to Managing Cost Allocation Tagging in the Amazon Simple Storage Service Console User Guide.
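As a minimal sketch of the API route, the following AWS SDK for Java code replaces and reads back a bucket's tag set; the bucket name and tag values are illustrative, not part of this guide's samples. Note that setting the tagging configuration replaces any existing tag set on the bucket:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketTaggingConfiguration;
import com.amazonaws.services.s3.model.TagSet;

public class AddBucketCostAllocationTags {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        String bucketName = "examplebucket"; // illustrative bucket name

        // Build a tag set; keys and values here are illustrative.
        Map<String, String> tags = new HashMap<String, String>();
        tags.put("project", "Trinity");
        tags.put("cost-center", "Trinity");

        // Replace the bucket's tag set.
        s3Client.setBucketTaggingConfiguration(bucketName,
                new BucketTaggingConfiguration(Arrays.asList(new TagSet(tags))));

        // Read the tag set back.
        BucketTaggingConfiguration config = s3Client.getBucketTaggingConfiguration(bucketName);
        System.out.println(config.getTagSet().getAllTags());
    }
}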
The following list describes the characteristics of a cost allocation tag:
• The tag key is the required name of the tag. The string value can contain 1 to 128 Unicode characters. It cannot be prefixed with "aws:". The string can contain only the set of Unicode letters, digits, whitespace, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
• The tag value is a required string value of the tag. The string value can contain from 1 to 256 Unicode characters. It cannot be prefixed with "aws:". The string can contain only the set of Unicode letters, digits, whitespace, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$"). Values do not have to be unique in a tag set, and they can be null. For example, you can have the same key-value pair in tag sets named project/Trinity and cost-center/Trinity.
Working with Amazon S3 Objects
Amazon S3 is a simple key, value store designed to store as many objects as you want. You store these objects in one or more buckets. An object consists of the following:
• Key – The name that you assign to an object. You use the object key to retrieve the object. For more information, see Object Key and Metadata (p. 99).
• Version ID – Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that Amazon S3 generates when you add an object to a bucket. For more information, see Object Versioning (p. 106).
• Value – The content that you are storing. An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more information, see Uploading Objects (p. 157).
• Metadata – A set of name-value pairs with which you can store information regarding the object. You can assign metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also assigns system-metadata to these objects, which it uses for managing objects. For more information, see Object Key and Metadata (p. 99).
• Subresources – Amazon S3 uses the subresource mechanism to store object-specific additional information. Because subresources are subordinates to objects, they are always associated with some other entity, such as an object or a bucket. For more information, see Object Subresources (p. 105).
• Access Control Information – You can control access to the objects you store in Amazon S3. Amazon S3 supports both resource-based access control, such as an Access Control List (ACL) and bucket policies, and user-based access control. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
For more information about working with objects, see the following sections. Note that your Amazon S3 resources (for example, buckets and objects) are private by default. You will need to explicitly grant permission for others to access these resources. For example, you might want to share a video or a photo stored in your Amazon S3 bucket on your website. That will work only if you either make the object public or use a presigned URL on your website. For more information about sharing objects, see Share an Object with Others (p. 152).
Topics
• Object Key and Metadata (p. 99)
• Storage Classes (p. 103)
• Object Subresources (p. 105)
• Object Versioning (p. 106)
• Object Lifecycle Management (p. 109)
• Cross-Origin Resource Sharing (CORS) (p. 131)
• Operations on Objects (p. 142)
Object Key and Metadata
Topics
• Object Keys (p. 99)
• Object Metadata (p. 101)
Each Amazon S3 object has data, a key, and metadata. The object key (or key name) uniquely identifies the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata.
Object Keys
When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.
Note
If you anticipate that your workload against Amazon S3 will exceed 100 requests per second, follow the Amazon S3 key naming guidelines for best performance. For information, see Request Rate and Performance Considerations (p. 518).
Object Key Naming Guidelines
Although you can use any UTF-8 characters in an object key name, the following key naming best practices help ensure maximum compatibility with other applications. Each application may parse special characters differently. The following guidelines help you maximize compliance with DNS, web-safe characters, XML parsers, and other APIs.
Safe Characters
The following character sets are generally safe for use in key names:
• Alphanumeric characters [0-9a-zA-Z]
• Special characters !, -, _, ., *, ', (, and )
The following are examples of valid object key names:
• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv
Note that the Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose your bucket (companybucket) has four objects with the following object keys:
Development/Projects1.xls
Finance/statement1.pdf
Private/taxdocument.pdf
s3-dg.pdf
The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/') to present a folder structure as shown.
The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket. If you open the Development/ folder, you will see the Projects1.xls object in it.
Note
Amazon S3 supports buckets and objects; there is no hierarchy in Amazon S3. However, the prefixes and delimiters in an object key name enable the Amazon S3 console and the AWS SDKs to infer hierarchy and introduce the concept of folders.
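To see how this roll-up works programmatically, the following AWS SDK for Java sketch lists the top level of such a bucket using '/' as the delimiter; the bucket name companybucket is the hypothetical bucket from the preceding discussion:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListTopLevelFolders {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // List the bucket root with '/' as the delimiter.
        ObjectListing listing = s3Client.listObjects(new ListObjectsRequest()
                .withBucketName("companybucket") // hypothetical bucket name
                .withDelimiter("/"));

        // Keys without the delimiter (for example, s3-dg.pdf) come back as objects.
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println("Object: " + summary.getKey());
        }
        // Keys sharing a prefix up to the delimiter are rolled up into common
        // prefixes (Development/, Finance/, Private/), which the console shows as folders.
        for (String prefix : listing.getCommonPrefixes()) {
            System.out.println("Folder: " + prefix);
        }
    }
}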
Characters That Might Require Special Handling
The following characters in a key name may require additional code handling and will likely need to be URL encoded or referenced as HEX. Some of these are nonprintable characters and your browser may not handle them, which will also require special handling:
• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces may be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")
Characters to Avoid
You should avoid the following characters in a key name because of significant special handling for consistency across all applications:
• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")
Object Metadata
There are two kinds of metadata: system metadata and user-defined metadata.
System-Defined Metadata
For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
There are two categories of system metadata:
• Metadata such as object creation date is system controlled, where only Amazon S3 can modify the value.
• Other system metadata, such as the storage class configured for the object and whether the object has server-side encryption enabled, are examples of system metadata whose values you control. If you have your bucket configured as a website, sometimes you might want to redirect a page request to another page or an external URL. In this case, a web page is an object in your bucket. Amazon S3 stores the page redirect value as system metadata whose value you control.
When you create objects, you can configure values of these system metadata items or update the values when you need. For more information about storage class, see Storage Classes (p. 103). For more information about server-side encryption, see Protecting Data Using Encryption (p. 380).
The following list describes the system-defined metadata and whether you can update it:
• Date – Current date and time. (User cannot modify the value.)
• Content-Length – Object size in bytes. (User cannot modify the value.)
• Last-Modified – Object creation date or the last modified date, whichever is the latest. (User cannot modify the value.)
• Content-MD5 – The base64-encoded 128-bit MD5 digest of the object. (User cannot modify the value.)
• x-amz-server-side-encryption – Indicates whether server-side encryption is enabled for the object, and whether that encryption is from the AWS Key Management Service (SSE-KMS) or from AWS-Managed Encryption (SSE-S3). For more information, see Protecting Data Using Server-Side Encryption (p. 381). (User can modify the value.)
• x-amz-version-id – Object version. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects added to the bucket. For more information, see Using Versioning (p. 423). (User cannot modify the value.)
• x-amz-delete-marker – In a bucket that has versioning enabled, this Boolean marker indicates whether the object is a delete marker. (User cannot modify the value.)
• x-amz-storage-class – Storage class used for storing the object. For more information, see Storage Classes (p. 103). (User can modify the value.)
• x-amz-website-redirect-location – Redirects requests for the associated object to another object in the same bucket or an external URL. For more information, see Configuring a Web Page Redirect (p. 460). (User can modify the value.)
• x-amz-server-side-encryption-aws-kms-key-id – If x-amz-server-side-encryption is present and has the value of aws:kms, this indicates the ID of the Key Management Service (KMS) master encryption key that was used for the object. (User can modify the value.)
• x-amz-server-side-encryption-customer-algorithm – Indicates whether server-side encryption with customer-provided encryption keys (SSE-C) is enabled. For more information, see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) (p. 395). (User can modify the value.)
User-Defined Metadata
When uploading an object, you can also assign metadata to the object. You provide this optional information as a name-value (key-value) pair when you send a PUT or POST request to create the object. When uploading objects using the REST API, the optional user-defined metadata names must begin with "x-amz-meta-" to distinguish them from other HTTP headers. When you retrieve the object using the REST API, this prefix is returned. When uploading objects using the SOAP API, the prefix is not required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API you used to upload the object.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters, it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number of the unprintable metadata entries.
User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in lowercase. Each key-value pair must conform to US-ASCII when using REST and to UTF-8 when using SOAP or browser-based uploads via POST.
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
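As a concrete illustration, the following AWS SDK for Java sketch assigns user-defined metadata at upload time and reads it back; the bucket name, key, file name, and metadata values are illustrative, not part of this guide's samples:

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class UploadObjectWithUserMetadata {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // The SDK adds the x-amz-meta- prefix on the wire for you.
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.addUserMetadata("title", "quarterly-report"); // illustrative key-value pair

        s3Client.putObject(new PutObjectRequest("examplebucket", "docs/report.pdf",
                new File("report.pdf")).withMetadata(metadata));

        // On retrieval, user metadata comes back as a lowercase-keyed map.
        ObjectMetadata retrieved = s3Client.getObjectMetadata("examplebucket", "docs/report.pdf");
        System.out.println(retrieved.getUserMetadata());
    }
}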
Storage Classes
Each object in Amazon S3 has a storage class associated with it. For example, if you list all objects in the bucket, the console shows the storage class for all the objects in the list.
Amazon S3 offers the following storage classes for the objects that you store. You choose one depending on your use case scenario and performance access requirements. All of these storage classes offer high durability.
• STANDARD – This storage class is ideal for performance-sensitive use cases and frequently accessed data. STANDARD is the default storage class; if you don't specify a storage class at the time that you upload an object, Amazon S3 assumes the STANDARD storage class.
• STANDARD_IA – This storage class (IA, for infrequent access) is optimized for long-lived and less frequently accessed data, for example, backups and older data where frequency of access has diminished, but the use case still demands high performance.
Note
There is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. For pricing information, see Amazon S3 Pricing.
For example, initially you might upload objects using the STANDARD storage class, and then use a bucket lifecycle configuration rule to transition objects (see Object Lifecycle Management (p. 109)) to the STANDARD_IA (or GLACIER) storage class at some point in the object's lifetime. For more information about lifecycle management, see Object Lifecycle Management (p. 109).
The STANDARD_IA objects are available for real-time access. The table at the end of this section highlights some of the differences in these storage classes.
The STANDARD_IA storage class is suitable for larger objects greater than 128 Kilobytes that you want to keep for at least 30 days. For example, bucket lifecycle configuration has a minimum object size limit for Amazon S3 to transition objects. For more information, see Supported Transitions (p. 110).
• GLACIER – The GLACIER storage class is suitable for archiving data where data access is infrequent and a retrieval time of several hours is acceptable. (Archived objects are not available for real-time access. You must first restore the objects before you can access them.) The GLACIER storage class uses the very low-cost Amazon Glacier storage service, but you still manage objects in this storage class through Amazon S3. Note the following about the GLACIER storage class:
• You cannot specify GLACIER as the storage class at the time that you create an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management. For more information, see Object Lifecycle Management (p. 109).
• You must first restore the GLACIER objects before you can access them (STANDARD, RRS, and STANDARD_IA objects are available for anytime access). For more information, see GLACIER Storage Class: Additional Lifecycle Configuration Considerations (p. 124).
To learn more about the Amazon Glacier service, see the Amazon Glacier Developer Guide.
All the preceding storage classes are designed to sustain the concurrent loss of data in two facilities (for details, see the following availability and durability table).
In addition to the performance requirements of your application scenario, there are also price/performance considerations. For the Amazon S3 storage classes and pricing, see Amazon S3 Pricing.
Amazon S3 also offers the following storage class that enables you to save costs by maintaining fewer redundant copies of your data:
• REDUCED_REDUNDANCY – The Reduced Redundancy Storage (RRS) storage class is designed for noncritical, reproducible data stored at lower levels of redundancy than the STANDARD storage class, which reduces storage costs. For example, if you upload an image and use the STANDARD storage class for it, you might compute a thumbnail and save it as an object of the RRS storage class.
The durability level (see the following table) corresponds to an average annual expected loss of 0.01% of objects. For example, if you store 10,000 objects using the RRS option, you can, on average, expect to incur an annual loss of a single object per year (0.01% of 10,000 objects).
Note
This annual loss represents an expected average and does not guarantee the loss of less than 0.01% of objects in a given year.
RRS provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced.
If an RRS object is lost, Amazon S3 returns a 405 error on requests made to that object.
Amazon S3 can send an event notification to alert a user or start a workflow when it detects that an RRS object is lost. To receive notifications, you need to add notification configuration to your bucket. For more information, see Configuring Amazon S3 Event Notifications (p. 472).
The following list summarizes the durability and availability offered by each of the storage classes:
• STANDARD – Durability (designed for): 99.999999999%. Availability (designed for): 99.99%. Other considerations: none.
• STANDARD_IA – Durability (designed for): 99.999999999%. Availability (designed for): 99.9%. Other considerations: there is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. For pricing information, see Amazon S3 Pricing.
• GLACIER – Durability (designed for): 99.999999999%. Availability (designed for): 99.99% (after you restore objects). Other considerations: GLACIER objects are not available for real-time access. You must first restore archived objects before you can access them, and restoring objects can take 3–4 hours. For more information, see Restoring Archived Objects (p. 125).
• RRS – Durability (designed for): 99.99%. Availability (designed for): 99.99%. Other considerations: none.
Object Subresources
Amazon S3 defines a set of subresources associated with buckets and objects Subresources are
subordinates to objects that is subresources do not exist on their own they are always associated
with some other entity such as an object or a bucket
The following table lists the subresources associated with Amazon S3 objects.
• acl – Contains a list of grants identifying the grantees and the permissions granted. When you create an object, the acl identifies the object owner as having full control over the object. You can retrieve an object ACL or replace it with an updated list of grants. Any update to an ACL requires you to replace the existing ACL. For more information about ACLs, see Managing Access with ACLs (p 364).
• torrent – Amazon S3 supports the BitTorrent protocol. Amazon S3 uses the torrent subresource to return the torrent file associated with the specific object. To retrieve a torrent file, you specify the torrent subresource in your GET request. Amazon S3 creates a torrent file and returns it. You can only retrieve the torrent subresource; you cannot create, update, or delete the torrent subresource. For more information, see Using BitTorrent with Amazon S3 (p 531).
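Because any ACL update replaces the existing ACL, a typical pattern is to read the current ACL, modify it, and write the whole list back. The following AWS SDK for Java sketch illustrates this; the bucket and key names are placeholders, the grant shown is only an example, and an AmazonS3 client named s3 is assumed.

    import com.amazonaws.services.s3.model.AccessControlList;
    import com.amazonaws.services.s3.model.GroupGrantee;
    import com.amazonaws.services.s3.model.Permission;

    // Read the current ACL, add a grant, and replace the ACL as a whole.
    AccessControlList acl = s3.getObjectAcl("examplebucket", "photos/photo.gif");
    acl.grantPermission(GroupGrantee.AuthenticatedUsers, Permission.Read);
    s3.setObjectAcl("examplebucket", "photos/photo.gif", acl);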
Object Versioning
Versioning enables you to keep multiple versions of an object in one bucket, for example, my-image.jpg (version 111111) and my-image.jpg (version 222222). You might want to enable versioning to protect yourself from unintended overwrites and deletions, or to archive objects so that you can retrieve previous versions of them.
Note
The SOAP API does not support versioning. SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features are not supported for SOAP.
Object versioning can be used in combination with Object Lifecycle Management (p 109), allowing you to customize your data retention needs while controlling your related storage costs. For more information about adding lifecycle configuration to versioning-enabled buckets using the AWS Management Console, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.
Important
If you have an object expiration lifecycle policy in your nonversioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy manages the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.)
You must explicitly enable versioning on your bucket; by default, versioning is disabled. Regardless of whether you have enabled versioning, each object in your bucket has a version ID. If you have not enabled versioning, Amazon S3 sets the version ID value to null. If you have enabled versioning, Amazon S3 assigns a unique version ID value for the object. When you enable versioning on a bucket, existing objects, if any, in the bucket are unchanged: the version IDs (null), contents, and permissions remain the same.
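The following AWS SDK for Java sketch shows one way to enable versioning on a bucket. The bucket name is a placeholder, and an AmazonS3 client named s3 is assumed.

    import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
    import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

    // Switch the bucket's versioning state to Enabled.
    s3.setBucketVersioningConfiguration(new SetBucketVersioningConfigurationRequest(
        "examplebucket",
        new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED)));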
Enabling and suspending versioning is done at the bucket level. When you enable versioning for a bucket, all objects added to it will have a unique version ID. Unique version IDs are randomly generated, Unicode, UTF-8-encoded, URL-ready opaque strings that are at most 1024 bytes long. An example version ID is 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only Amazon S3 generates version IDs; they cannot be edited.
Note
For simplicity, we will use much shorter IDs in all our examples.
When you PUT an object in a versioning-enabled bucket, the noncurrent version is not overwritten. The following figure shows that when a new version of photo.gif is PUT into a bucket that already contains an object with the same name, the original object (ID = 111111) remains in the bucket, Amazon S3 generates a new version ID (121212), and adds the newer version to the bucket.
This functionality prevents you from accidentally overwriting or deleting objects and affords you the opportunity to retrieve a previous version of an object.
When you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker, as shown in the following figure.
The delete marker becomes the current version of the object. By default, GET requests retrieve the most recently stored version. Performing a simple GET Object request when the current version is a delete marker returns a 404 Not Found error, as shown in the following figure.
You can, however, GET a noncurrent version of an object by specifying its version ID. In the following figure, we GET a specific object version, 111111. Amazon S3 returns that object version even though it's not the current version.
You can permanently delete an object by specifying the version you want to delete. Only the owner of an Amazon S3 bucket can permanently delete a version. The following figure shows how DELETE versionId permanently deletes an object from a bucket and that Amazon S3 doesn't insert a delete marker.
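In the AWS SDK for Java, these two operations look roughly like the following sketch. The bucket, key, and version ID are placeholders, and an AmazonS3 client named s3 is assumed.

    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3Object;

    // GET a specific (possibly noncurrent) version by its version ID.
    S3Object oldVersion = s3.getObject(
        new GetObjectRequest("examplebucket", "photo.gif", "111111"));

    // DELETE versionId: permanently removes that version; no delete marker is added.
    s3.deleteVersion("examplebucket", "photo.gif", "111111");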
You can add additional security by configuring a bucket to enable MFA (multi-factor authentication) Delete. When you do, the bucket owner must include two forms of authentication in any request to delete a version or to change the versioning state of the bucket. For more information, see MFA Delete (p 424).
For more information, see Using Versioning (p 423).
Object Lifecycle Management
This section provides an overview of the Amazon S3 lifecycle feature, which you can use to manage the lifecycle of objects in your bucket.
What Is Lifecycle Configuration
You manage an object's lifecycle by using a lifecycle configuration, which defines how Amazon S3 manages objects during their lifetime. Lifecycle configuration enables you to simplify the lifecycle management of your objects, such as the automated transition of less-frequently accessed objects to low-cost storage alternatives and scheduled deletions. You can configure as many as 1,000 lifecycle rules per bucket.
You can define lifecycle configuration rules for objects that have a well-defined lifecycle. For example, you can use lifecycle configurations for objects that you want to switch to different storage classes or delete during their lifecycle:
• If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them.
• Some documents are frequently accessed for a limited period of time. After that, these documents are less frequently accessed. Over time, you might not need real-time access to these objects, but your organization or regulations might require you to archive them for a longer period and then optionally delete them later.
• You might also upload some types of data to Amazon S3 primarily for archival purposes, for example, digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.
How Do I Configure a Lifecycle
You can specify a lifecycle configuration as XML. A lifecycle configuration comprises a set of rules with predefined actions that you want Amazon S3 to perform on objects during their lifetime. These actions include:
• Transition actions, in which you define when objects transition to another Amazon S3 storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions, in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information about lifecycle rules, see Lifecycle Configuration Elements (p 113).
Amazon S3 stores the configuration as a lifecycle subresource attached to your bucket. Using the Amazon S3 API, you can PUT, GET, or DELETE a lifecycle configuration; for more information, see PUT Bucket lifecycle, GET Bucket lifecycle, or DELETE Bucket lifecycle. You can also configure the lifecycle by using the Amazon S3 console or programmatically by using the AWS SDK wrapper libraries, and, if you need to, you can also make the REST API calls directly. Amazon S3 then applies the lifecycle rules to all or specific objects identified in the rule.
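As a programmatic illustration, the following AWS SDK for Java sketch PUTs a simple lifecycle configuration on a bucket: one rule that transitions objects with a given prefix to GLACIER after one year and expires them after ten. The bucket name, rule ID, and prefix are placeholders, and an AmazonS3 client named s3 is assumed.

    import java.util.Arrays;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
    import com.amazonaws.services.s3.model.StorageClass;

    // One rule: transition to GLACIER after 365 days, expire after 3650 days.
    BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("samplerule")
        .withPrefix("projectdocs/")
        .withStatus(BucketLifecycleConfiguration.ENABLED)
        .withTransition(new Transition().withDays(365)
            .withStorageClass(StorageClass.Glacier))
        .withExpirationInDays(3650);

    s3.setBucketLifecycleConfiguration("examplebucket",
        new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));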
Transitioning Objects General Considerations
You can add rules in a lifecycle configuration to transition objects to another Amazon S3 storage class. For example, you might transition objects to the STANDARD_IA storage class when you know those objects are infrequently accessed. You might also want to archive objects that don't need real-time access to the GLACIER storage class. The following sections describe transition-related considerations and constraints.
Supported Transitions
In a lifecycle configuration, you can define rules to transition objects from one storage class to another. The following are supported transitions:
• From the STANDARD or REDUCED_REDUNDANCY storage classes to STANDARD_IA. The following constraints apply:
  • Amazon S3 does not transition objects less than 128 kilobytes in size to the STANDARD_IA storage class. The cost benefits of transitioning to STANDARD_IA can be realized for larger objects; for smaller objects it is not cost-effective, and Amazon S3 will not transition them.
  • Objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA. For example, you cannot create a lifecycle rule to transition objects to the STANDARD_IA storage class one day after creation.
  Transitions before the first 30 days are not supported because younger objects are often accessed more frequently or deleted sooner than is suitable for STANDARD_IA.
  • If you are transitioning noncurrent objects (versioned bucket scenario), you can transition to STANDARD_IA only objects that are at least 30 days noncurrent.
• From any storage class to GLACIER.
For more information, see GLACIER Storage Class Additional Lifecycle Configuration Considerations (p 124).
• You can combine these rules to manage an object's complete lifecycle, including a first transition to STANDARD_IA, a second transition to GLACIER for archival, and an expiration.
Note
When configuring lifecycle, the API does not allow you to create a lifecycle policy in which you specify both of these transitions and the GLACIER transition occurs less than 30 days after the STANDARD_IA transition. Such a lifecycle policy could increase costs because of the minimum 30-day storage charge associated with the STANDARD_IA storage class. For more information about cost considerations, see Amazon S3 Pricing.
For example, suppose that the objects you create have a well-defined lifecycle. Initially, the objects are frequently accessed for a period of 30 days. After the initial period, the frequency of access diminishes, and objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed, and you may choose to archive or delete them. You can use a lifecycle configuration to define transition and expiration of objects that matches this example scenario (transition to STANDARD_IA 30 days after creation, transition to GLACIER 90 days after creation, and perhaps expire the objects after a certain number of days). As you tier down the object's storage class in the transition, you can benefit from the storage cost savings. For more information about cost considerations, see Amazon S3 Pricing.
You can think of lifecycle transitions as supporting storage class tiers (see Storage Classes (p 103)), which offer different costs and benefits. You may choose to transition an object to another storage class in the object's lifetime for cost-saving considerations, and lifecycle configuration enables you to do that. For example, to manage storage costs, you might configure lifecycle to change an object's storage class from STANDARD, which is the most available and durable storage class, to STANDARD_IA (IA, for infrequent access), and then to the GLACIER storage class (where the objects are archived and only available after you restore them). These transitions can lower your storage costs.
The following are not supported transitions:
• You cannot transition from STANDARD_IA to STANDARD or REDUCED_REDUNDANCY.
• You cannot transition from GLACIER to any other storage class.
• You cannot transition from any storage class to REDUCED_REDUNDANCY.
Transitioning to the GLACIER storage class (Object Archival)
Using lifecycle configuration, you can transition objects to the GLACIER storage class; that is, you can archive data to Amazon Glacier, a lower-cost storage solution. Before you archive objects, note the following:
• Objects in the GLACIER storage class are not available in real time.
Archived objects are Amazon S3 objects, but before you can access an archived object, you must first restore a temporary copy of it. The restored object copy is available only for the duration you specify in the restore request. After that, Amazon S3 deletes the temporary copy, and the object remains archived in Amazon Glacier.
Note that object restoration from an archive can take up to five hours.
You can restore an object by using the Amazon S3 console or programmatically by using the AWS SDKs wrapper libraries or the Amazon S3 REST API in your code (see the sketch after this list). For more information, see POST Object restore.
• The transition of objects to the GLACIER storage class is one-way.
You cannot use a lifecycle configuration rule to convert the storage class of an object from GLACIER to Standard or RRS. If you want to change the storage class of an already archived object to either Standard or RRS, you must use the restore operation to make a temporary copy first. Then use the copy operation to overwrite the object as a STANDARD, STANDARD_IA, or REDUCED_REDUNDANCY object.
• GLACIER storage class objects are visible and available only through Amazon S3, not through Amazon Glacier.
Amazon S3 stores the archived objects in Amazon Glacier; however, these are Amazon S3 objects, and you can access them only by using the Amazon S3 console or the API. You cannot access the archived objects through the Amazon Glacier console or the API.
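The following AWS SDK for Java sketch initiates the restore of an archived object, making a temporary copy available for two days. The bucket and key are placeholders, and an AmazonS3 client named s3 is assumed.

    import com.amazonaws.services.s3.model.RestoreObjectRequest;

    // Request a temporary copy of the archived object, available for 2 days.
    s3.restoreObject(new RestoreObjectRequest("examplebucket", "archive/data.bak")
        .withExpirationInDays(2));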
Expiring Objects General Considerations
When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously. There may be a delay between the expiration date and the date at which Amazon S3 removes an object. You are not charged for storage time associated with an object that has expired.
To find when an object is scheduled to expire, you can use the HEAD Object or GET Object APIs. These APIs return response headers that provide object expiration information.
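For example, in the AWS SDK for Java, a HEAD Object request surfaces this information on the returned object metadata, roughly as in the following sketch. The bucket and key are placeholders, and an AmazonS3 client named s3 is assumed.

    import com.amazonaws.services.s3.model.ObjectMetadata;

    // HEAD Object: the expiration fields are populated from the
    // x-amz-expiration response header when a lifecycle rule applies.
    ObjectMetadata metadata = s3.getObjectMetadata("examplebucket", "logs/access.log");
    System.out.println("Scheduled expiration: " + metadata.getExpirationTime());
    System.out.println("Matching lifecycle rule: " + metadata.getExpirationTimeRuleId());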
There are additional cost considerations if you configure a lifecycle policy to expire objects that have been in STANDARD_IA for less than 30 days or in GLACIER for less than 90 days. For more information about cost considerations, see Amazon S3 Pricing.
Lifecycle and Other Bucket Configurations
In addition to lifecycle configuration, your bucket can have other configurations associated with it. This section explains how lifecycle configuration relates to other bucket configurations.
Lifecycle and Versioning
You can add lifecycle configuration to nonversioned buckets and versioning-enabled buckets. For more information, see Object Versioning (p 106). A versioning-enabled bucket maintains one current and zero or more noncurrent object versions, and you can define separate lifecycle rules for current and noncurrent versions.
For more information, see Lifecycle Configuration Elements (p 113). For information about versioning, see Object Versioning (p 106).
Lifecycle and MFA-Enabled Buckets
Lifecycle configuration on MFA-enabled buckets is not supported.
Lifecycle and Logging
If you have logging enabled on your bucket, Amazon S3 reports the results of an expiration action as follows:
• If the lifecycle expiration action results in Amazon S3 permanently removing the object, Amazon S3 reports it as operation S3.EXPIRE.OBJECT in the log record.
• For a versioning-enabled bucket, if the lifecycle expiration action results in a logical deletion of the current version, in which Amazon S3 adds a delete marker, Amazon S3 reports the logical deletion as operation S3.CREATE.DELETEMARKER in the log record. For more information, see Object Versioning (p 106).
• When Amazon S3 transitions an object to the GLACIER storage class, it reports it as operation S3.TRANSITION.OBJECT in the log record to indicate that it has initiated the operation. When an object is transitioned to the STANDARD_IA storage class, it is reported as S3.TRANSITION_SIA.OBJECT.
Related Topics
• Lifecycle Configuration Elements (p 113)
• GLACIER Storage Class Additional Lifecycle Configuration Considerations (p 124)
• Specifying a Lifecycle Configuration (p 125)
Lifecycle Configuration Elements
Topics
• ID Element (p 114)
• Status Element (p 114)
• Prefix Element (p 114)
• Elements to Describe Lifecycle Actions (p 115)
• Examples of Lifecycle Configuration (p 117)
You specify a lifecycle policy configuration as XML. It consists of one or more lifecycle rules. Each rule consists of the following:
• Rule metadata that include a rule ID and status indicating whether the rule is enabled or disabled. If a rule is disabled, Amazon S3 does not perform any actions specified in the rule.
• A prefix identifying objects by the key prefix to which the rule applies.
• One or more transition or expiration actions with a date or a time period in the object's lifetime when you want Amazon S3 to perform the specified action.
The following are two introductory example configurations.
Example 1: Lifecycle configuration
Suppose that you want to transition objects with the key prefix documents/ to the GLACIER storage class one year after you create them, and then permanently remove them 10 years after you created them. You can accomplish this by attaching the following lifecycle configuration to the bucket.
<LifecycleConfiguration>
  <Rule>
    <ID>samplerule</ID>
    <Prefix>documents/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
The lifecycle configuration defines one rule that applies to objects with the key name prefix documents/. The rule specifies two actions (Transition and Expiration). The rule is in effect because the rule status is Enabled.
Example 2: Lifecycle configuration on a versioning-enabled bucket
If your bucket is versioning-enabled, you have one current object version and zero or more noncurrent versions. For more information, see Object Versioning (p 106).
For a versioning-enabled bucket, the lifecycle actions apply as follows:
• Transition and Expiration actions apply to current versions.
• NoncurrentVersionTransition and NoncurrentVersionExpiration actions apply to noncurrent versions.
The following example lifecycle configuration has one rule that applies to objects with the key name prefix logs/. The rule specifies two actions for noncurrent versions:
• The NoncurrentVersionTransition action directs Amazon S3 to transition noncurrent objects to the GLACIER storage class 30 days after the objects become noncurrent.
• The NoncurrentVersionExpiration action directs Amazon S3 to permanently remove the noncurrent objects 180 days after they become noncurrent.
<LifecycleConfiguration>
  <Rule>
    <ID>samplerule</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>180</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
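For reference, a roughly equivalent rule can be written with the AWS SDK for Java, as in the following sketch. It assumes the SDK's lifecycle model classes shown here (including the nested NoncurrentVersionTransition type); the bucket name is a placeholder, and an AmazonS3 client named s3 is assumed.

    import java.util.Arrays;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.NoncurrentVersionTransition;
    import com.amazonaws.services.s3.model.StorageClass;

    // Transition noncurrent versions to GLACIER after 30 days; expire them after 180.
    BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("samplerule")
        .withPrefix("logs/")
        .withStatus(BucketLifecycleConfiguration.ENABLED)
        .withNoncurrentVersionTransition(new NoncurrentVersionTransition()
            .withDays(30).withStorageClass(StorageClass.Glacier))
        .withNoncurrentVersionExpirationInDays(180);

    s3.setBucketLifecycleConfiguration("examplebucket",
        new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));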
The following sections describe these XML elements in a lifecycle configuration.
ID Element
A lifecycle configuration can have up to 1,000 rules. The ID element uniquely identifies a rule.
Status Element
The Status element value can be either Enabled or Disabled. If a rule is disabled, Amazon S3 does not perform any of the actions defined in the rule.
Prefix Element
The Prefix element identifies objects to which the rule applies. If you specify an empty prefix, the rule applies to all objects in the bucket. If you specify a key name prefix, the rule applies only to the objects whose key name begins with the specified string. For more information about object keys, see Object Keys (p 99).
Elements to Describe Lifecycle Actions
You can direct Amazon S3 to perform specific actions in an object's lifetime by specifying one or more of the following predefined actions in a lifecycle rule. The effect of these actions depends on the versioning state of your bucket.
• Transition action element – You specify the Transition action to transition objects from one storage class to another. For more information about transitioning objects, see Supported Transitions (p 110). When a specified date or time period in the object's lifetime is reached, Amazon S3 performs the transition.
For a versioned bucket (versioning-enabled or versioning-suspended), the Transition action applies to the current object version. To manage noncurrent versions, Amazon S3 defines the NoncurrentVersionTransition action (described below).
• Expiration action element – The Expiration action expires objects identified in the rule. Amazon S3 makes all expired objects unavailable. Whether the objects are permanently removed depends on the versioning state of the bucket.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle configuration action that is described later in this section.
  • Nonversioned bucket – The Expiration action results in Amazon S3 permanently removing the object.
  • Versioned bucket – For a versioned bucket, versioning-enabled or versioning-suspended (see Using Versioning (p 423)), there are several considerations that guide how Amazon S3 handles the expiration action. Regardless of the versioning state, the following applies:
    • The expiration action applies only to the current version (it has no impact on noncurrent object versions).
    • Amazon S3 will not take any action if there are one or more object versions and the delete marker is the current version.
    • If the current object version is the only object version and it is also a delete marker (also referred to as the expired object delete marker, where all object versions are deleted and you only have a delete marker remaining), Amazon S3 will remove the expired object delete marker. You can also use the expiration action to direct Amazon S3 to remove any expired object delete markers. For an example, see Example 8: Removing Expired Object Delete Markers (p 121).
Important
Amazon S3 will remove an expired object delete marker no sooner than 48 hours after the object expired.
The additional considerations for how Amazon S3 manages expiration are as follows:
  • Versioning-enabled bucket
  If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique version ID, making the current version noncurrent and the delete marker the current version.
  • Versioning-suspended bucket
  In a versioning-suspended bucket, the expiration action causes Amazon S3 to create a delete marker with null as the version ID. This delete marker replaces any object version with a null version ID in the version hierarchy, which effectively deletes the object.
In addition, Amazon S3 provides the following actions that you can use to manage noncurrent object versions in a versioned bucket (versioning-enabled and versioning-suspended buckets):
• NoncurrentVersionTransition action element – Use this action to specify how long (from the time the objects became noncurrent) you want the objects to remain in the current storage class before Amazon S3 transitions them to the specified storage class. For more information about transitioning objects, see Supported Transitions (p 110).
• NoncurrentVersionExpiration action element – Use this action to specify how long (from the time the objects became noncurrent) you want to retain noncurrent object versions before Amazon S3 permanently removes them. The deleted object cannot be recovered.
This delayed removal of noncurrent objects can be helpful when you need to correct any accidental deletes or overwrites. For example, you can configure an expiration rule to delete noncurrent versions five days after they become noncurrent. Suppose that on 1/1/2014 10:30 AM UTC you create an object called photo.gif (version ID 111111). On 1/2/2014 11:30 AM UTC, you accidentally delete photo.gif (version ID 111111), which creates a delete marker with a new version ID (such as version ID 4857693). You now have five days to recover the original version of photo.gif (version ID 111111) before the deletion is permanent. On 1/8/2014 00:00 UTC, the lifecycle rule for expiration executes and permanently deletes photo.gif (version ID 111111), five days after it became a noncurrent version.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle configuration action that is described later in this section.
In addition to the transition and expiration actions, you can use the following lifecycle configuration action to direct Amazon S3 to abort incomplete multipart uploads:
• AbortIncompleteMultipartUpload action element – Use this element to set a maximum time (in days) that you want to allow multipart uploads to remain in progress. If the applicable multipart uploads (determined by the key name prefix specified in the lifecycle rule) are not successfully completed within the predefined time period, Amazon S3 aborts the incomplete multipart uploads. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p 167).
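The following is a sketch of this action with the AWS SDK for Java, assuming the SDK version in use models the AbortIncompleteMultipartUpload action as shown. The names are placeholders, and an AmazonS3 client named s3 is assumed.

    import java.util.Arrays;
    import com.amazonaws.services.s3.model.AbortIncompleteMultipartUpload;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;

    // Abort any multipart upload still in progress 7 days after initiation.
    BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("Abort incomplete multipart uploads")
        .withPrefix("")  // empty prefix: the rule applies to all objects in the bucket
        .withStatus(BucketLifecycleConfiguration.ENABLED)
        .withAbortIncompleteMultipartUpload(
            new AbortIncompleteMultipartUpload().withDaysAfterInitiation(7));

    s3.setBucketLifecycleConfiguration("examplebucket",
        new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));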
How Amazon S3 Calculates How Long an Object Has Been Noncurrent
In a versioning-enabled bucket, you can have multiple versions of an object; there is always one current version and zero or more noncurrent versions. Each time you upload an object, the current version is retained as a noncurrent version, and the newly added version, the successor, becomes current. To determine the number of days an object is noncurrent, Amazon S3 looks at when its successor was created; it uses the number of days since the successor was created as the number of days the object is noncurrent.
Restoring Previous Versions of an Object When Using Lifecycle Configurations
As explained in detail in the topic Restoring Previous Versions (p 442), there are two methods to retrieve previous versions of an object:
1 By copying a noncurrent version of the object into the same bucket. The copied object becomes the current version of that object, and all object versions are preserved.
2 By permanently deleting the current version of the object. When you delete the current object version, you, in effect, turn the noncurrent version into the current version of that object.
When using lifecycle configuration rules with versioning-enabled buckets, we recommend as a best practice that you use the first method.
Because of Amazon S3's eventual consistency semantics, a current version that you permanently deleted may not disappear until the changes propagate (Amazon S3 may be unaware of this deletion). In the meantime, the lifecycle rule that you configured to expire noncurrent objects may permanently remove noncurrent objects, including the one you want to restore. So copying the old version, as recommended in the first method, is the safer alternative.
Lifecycle Rules Based on the Object Age
You can specify a time period, in number of days from the creation (or modification) of the objects, when Amazon S3 can take the action.
When you specify the number of days in the Transition and Expiration actions in a lifecycle configuration, note the following:
• It is the number of days since object creation when the action will be taken.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the object creation time and rounding the resulting time to the next day midnight UTC. For example, if an object was created at 1/15/2014 10:30 AM UTC and you specify 3 days in a transition rule, then the transition date of the object is calculated as 1/19/2014 00:00 UTC.
Note
Amazon S3 maintains only the last-modified date for each object. For example, the Amazon S3 console shows the Last Modified date in the object Properties pane. When you initially create a new object, this date reflects the date the object is created. If you replace the object, the date changes accordingly. So when we use the term creation date, it is synonymous with the term last-modified date.
When specifying the number of days in the NoncurrentVersionTransition and NoncurrentVersionExpiration actions in a lifecycle configuration, note the following:
• It is the number of days from when the version of the object becomes noncurrent (that is, since the object was overwritten or deleted) that Amazon S3 uses as the time period after which it takes the action on the specified object or objects.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the time when the new successor version of the object is created, and rounding the resulting time to the next day midnight UTC. For example, suppose that in your bucket you have a current version of an object that was created at 1/1/2014 10:30 AM UTC. If the new successor version of the object that replaces the current version is created at 1/15/2014 10:30 AM UTC and you specify 3 days in a transition rule, then the transition date of the object is calculated as 1/19/2014 00:00 UTC.
Lifecycle Rules Based on a Specific Date
When specifying an action in a lifecycle configuration, you can specify a date when you want Amazon S3 to take the action. The date-based rules trigger the action on all objects created on or before this date. For example, a rule to transition to GLACIER on 6/30/2015 will transition all objects created on or before this date. (The rule applies every day after the specified date, not just on the specified date, as long as the rule is in effect.)
Note
You cannot create a date-based rule using the AWS Management Console, but you can view, disable, or delete such rules.
Examples of Lifecycle Configuration
This section provides examples of lifecycle configuration. Each example shows the XML for the scenario it describes.
Example 1: Specify a Lifecycle Rule for a Subset of Objects in a Bucket
The following lifecycle configuration rule applies to the subset of objects with the key name prefix projectdocs/. The rule specifies two actions that request Amazon S3 to do the following:
• Transition objects to the GLACIER storage class 365 days (one year) after creation.
• Delete objects (the Expiration action) 3650 days (10 years) after creation.
<LifecycleConfiguration>
  <Rule>
    <ID>Transition and Expiration Rule</ID>
    <Prefix>projectdocs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Instead of specifying object age in terms of days after creation, you can specify a date for each action; however, you cannot use both Date and Days in the same rule.
Example 2: Specify a Lifecycle Rule that Applies to All Objects in the Bucket
If you specify an empty Prefix in a lifecycle rule, it applies to all objects in the bucket. Suppose that you create a bucket only for archiving objects to GLACIER. You can set a lifecycle configuration requesting Amazon S3 to transition objects to the GLACIER storage class immediately after creation, as shown.
The lifecycle configuration defines one rule with an empty Prefix. The rule specifies a Transition action requesting Amazon S3 to transition objects to the GLACIER storage class 0 days after creation, in which case objects are eligible for archival to Amazon Glacier at midnight UTC following creation.
<LifecycleConfiguration>
  <Rule>
    <ID>Archive all object same-day upon creation</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 3: Disable a Lifecycle Rule
You can temporarily disable a lifecycle rule. The following lifecycle configuration specifies two rules; however, one of them is disabled. Amazon S3 will not perform any action specified in a rule that is disabled.
<LifecycleConfiguration>
  <Rule>
    <ID>30 days log objects expire rule</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>1 year documents expire rule</ID>
    <Prefix>documents/</Prefix>
    <Status>Disabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 4: Tiering Down Storage Class Over Object Lifetime
In this example, you use lifecycle configuration to tier down the storage class of objects over their lifetime. Tiering down can help reduce storage costs. For more information about pricing, see Amazon S3 Pricing.
The following lifecycle configuration specifies a rule that applies to objects with the key name prefix logs/. The rule specifies the following actions:
• Two transition actions:
  • Transition objects to the STANDARD_IA storage class 30 days after creation.
  • Transition objects to the GLACIER storage class 90 days after creation.
• An expiration action directing Amazon S3 to delete objects a year after creation.
<LifecycleConfiguration>
  <Rule>
    <ID>exampleid</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Transition>
      <Days>90</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Note
You can use one rule to describe all lifecycle actions if all actions apply to the same set of objects (identified by the prefix). Otherwise, you can add multiple rules, each specifying a different key name prefix.
Example 5: Specify Multiple Rules
You can specify multiple rules if you want different lifecycle actions for different objects. The following lifecycle configuration has two rules:
• Rule 1 applies to objects with the key name prefix classA/. It directs Amazon S3 to transition objects to the GLACIER storage class one year after creation and expire these objects 10 years after creation.
• Rule 2 applies to objects with the key name prefix classB/. It directs Amazon S3 to transition objects to the STANDARD_IA storage class 90 days after creation and delete them one year after creation.
<LifecycleConfiguration>
  <Rule>
    <ID>ClassADocRule</ID>
    <Prefix>classA/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>ClassBDocRule</ID>
    <Prefix>classB/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>90</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Example 6: Specify Multiple Rules with Overlapping Prefixes
In the following example, you have two rules that specify overlapping prefixes:
• The first rule specifies an empty prefix, indicating all objects in the bucket.
• The second rule specifies a subset of objects in the bucket with the key name prefix logs/.
These overlapping prefixes are fine; there is no conflict. Rule 1 requests Amazon S3 to delete all objects one year after creation, and Rule 2 requests Amazon S3 to transition a subset of objects to the STANDARD_IA storage class 30 days after creation.
<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>30</Days>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 7: Specify a Lifecycle Rule for a Versioning-Enabled Bucket
Suppose that you have a versioning-enabled bucket, which means that for each object you have a current version and zero or more noncurrent versions. You want to maintain one year's worth of history and then delete the noncurrent versions. For more information about versioning, see Object Versioning (p 106).
Also, you want to save storage costs by moving noncurrent versions to GLACIER 30 days after they become noncurrent (assuming cold data for which you will not need real-time access). In addition, you expect the frequency of access of the current versions to diminish 90 days after creation, so you might choose to move these objects to the STANDARD_IA storage class.
<LifecycleConfiguration>
  <Rule>
    <ID>samplerule</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>90</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>365</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>