Amazon Simple Storage Service


    Amazon Simple Storage Service
    Developer Guide
    API Version 2006-03-01
    Copyright © 2016 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.
    Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any
    manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other
    trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to,
    or sponsored by Amazon.
    Table of Contents
    What Is Amazon S3? 1
    How Do I...? 1
    Introduction 2
    Overview of Amazon S3 and This Guide 2
    Advantages to Amazon S3 2
    Amazon S3 Concepts 3
    Buckets 3
    Objects 3
    Keys 4
    Regions 4
    Amazon S3 Data Consistency Model 4
    Features 6
    Reduced Redundancy Storage 6
    Bucket Policies 7
    AWS Identity and Access Management 8
    Access Control Lists 8
    Versioning 8
    Operations 8
    Amazon S3 Application Programming Interfaces (API) 8
    The REST Interface 9
    The SOAP Interface 9
    Paying for Amazon S3 9
    Related Services 9
    Making Requests 11
    About Access Keys 11
    AWS Account Access Keys 11
    IAM User Access Keys 12
    Temporary Security Credentials 12
    Request Endpoints 13
    Making Requests over IPv6 13
    Getting Started with IPv6 13
    Using IPv6 Addresses in IAM Policies 14
    Testing IP Address Compatibility 15
    Using DualStack Endpoints 16
    Making Requests Using the AWS SDKs 19
    Using AWS Account or IAM User Credentials 20
    Using IAM User Temporary Credentials 25
    Using Federated User Temporary Credentials 36
    Making Requests Using the REST API 49
    DualStack Endpoints (REST API) 50
    Virtual Hosting of Buckets 50
    Request Redirection and the REST API 55
    Buckets 58
    Creating a Bucket 59
    About Permissions 60
    Accessing a Bucket 60
    Bucket Configuration Options 61
    Restrictions and Limitations 62
    Rules for Naming 63
    Examples of Creating a Bucket 64
    Using the Amazon S3 Console 65
    Using the AWS SDK for Java 65
    Using the AWS SDK for NET 66
    Using the AWS SDK for Ruby Version 2 67
    Using Other AWS SDKs 67
    Deleting or Emptying a Bucket 67
    Delete a Bucket 68
    Empty a Bucket 71
    Bucket Website Configuration 73
    Using the AWS Management Console 73
    Using the SDK for Java 73
    Using the AWS SDK for NET 76
    Using the SDK for PHP 79
    Using the REST API 81
    Transfer Acceleration 81
    Why Use Transfer Acceleration? 81
    Getting Started 82
    Requirements for Using Amazon S3 Transfer Acceleration 83
    Transfer Acceleration Examples 83
    Requester Pays Buckets 92
    Configure with the Console 93
    Configure with the REST API 93
    DevPay and Requester Pays 96
    Charge Details 96
    Access Control 96
    Billing and Reporting 96
    Cost Allocation Tagging 96
    Objects 98
    Object Key and Metadata 99
    Object Keys 99
    Object Metadata 101
    Storage Classes 103
    Subresources 105
    Versioning 106
    Lifecycle Management 109
    What Is Lifecycle Configuration? 109
    How Do I Configure a Lifecycle? 110
    Transitioning Objects General Considerations 110
    Expiring Objects General Considerations 112
    Lifecycle and Other Bucket Configurations 112
    Lifecycle Configuration Elements 113
    GLACIER Storage Class Additional Considerations 124
    Specifying a Lifecycle Configuration 125
    CrossOrigin Resource Sharing (CORS) 131
    CrossOrigin Resource Sharing Usecase Scenarios 131
    How Do I Configure CORS on My Bucket? 132
    How Does Amazon S3 Evaluate the CORS Configuration On a Bucket? 134
    Enabling CORS 134
    Troubleshooting CORS 142
    Operations on Objects 142
    Getting Objects 143
    Uploading Objects 157
    Copying Objects 212
    Listing Object Keys 229
    Deleting Objects 237
    Restoring Archived Objects 259
    Managing Access 266
    Introduction 266
    Overview 267
    How Amazon S3 Authorizes a Request 272
    Guidelines for Using the Available Access Policy Options 277
    Example Walkthroughs Managing Access 280
    Using Bucket Policies and User Policies 308
    Access Policy Language Overview 308
    Bucket Policy Examples 334
    User Policy Examples 343
    Managing Access with ACLs 364
    Access Control List (ACL) Overview 364
    Managing ACLs 369
    Protecting Data 380
    Data Encryption 380
    ServerSide Encryption 381
    ClientSide Encryption 409
    Reduced Redundancy Storage 420
    Setting the Storage Class of an Object You Upload 421
    Changing the Storage Class of an Object in Amazon S3 421
    Versioning 423
    How to Configure Versioning on a Bucket 424
    MFA Delete 424
    Related Topics 425
    Examples 426
    Managing Objects in a VersioningEnabled Bucket 428
    Managing Objects in a VersioningSuspended Bucket 444
    Hosting a Static Website 449
    Website Endpoints 450
    Key Differences Between the Amazon Website and the REST API Endpoint 451
    Configure a Bucket for Website Hosting 452
    Overview 452
    Syntax for Specifying Routing Rules 454
    Index Document Support 457
    Custom Error Document Support 459
    Configuring a Redirect 460
    Permissions Required for Website Access 462
    Example Walkthroughs 462
    Example Setting Up a Static Website 463
    Example Setting Up a Static Website Using a Custom Domain 464
    Notifications 472
    Overview 472
    How to Enable Event Notifications 473
    Event Notification Types and Destinations 475
    Supported Event Types 475
    Supported Destinations 476
    Configuring Notifications with Object Key Name Filtering 476
    Examples of Valid Notification Configurations with Object Key Name Filtering 477
    Examples of Notification Configurations with Invalid PrefixSuffix Overlapping 479
    Granting Permissions to Publish Event Notification Messages to a Destination 481
    Granting Permissions to Invoke an AWS Lambda Function 481
    Granting Permissions to Publish Messages to an SNS Topic or an SQS Queue 481
    Example Walkthrough 1 483
    Walkthrough Summary 483
    Step 1 Create an Amazon SNS Topic 484
    Step 2 Create an Amazon SQS Queue 484
    Step 3 Add a Notification Configuration to Your Bucket 485
    Step 4 Test the Setup 489
    Example Walkthrough 2 489
    Event Message Structure 489
    CrossRegion Replication 492
    Usecase Scenarios 492
    Requirements 493
    Related Topics 493
    What Is and Is Not Replicated 493
    What Is Replicated 493
    What Is Not Replicated 494
    Related Topics 495
    How to Set Up 495
    Create an IAM Role 495
    Add Replication Configuration 497
    Walkthrough 1 Same AWS Account 500
    Walkthrough 2 Different AWS Accounts 501
    Using the Console 505
    Using the AWS SDK for Java 505
    Using the AWS SDK for NET 507
    Replication Status Information 509
    Related Topics 510
    Troubleshooting 511
    Related Topics 511
    Replication and Other Bucket Configurations 511
    Lifecycle Configuration and Object Replicas 512
    Versioning Configuration and Replication Configuration 512
    Logging Configuration and Replication Configuration 512
    Related Topics 512
    Request Routing 513
    Request Redirection and the REST API 513
    Overview 513
    DNS Routing 514
    Temporary Request Redirection 514
    Permanent Request Redirection 516
    DNS Considerations 516
    Performance Optimization 518
    Request Rate and Performance Considerations 518
    Workloads with a Mix of Request Types 519
    GETIntensive Workloads 521
    TCP Window Scaling 521
    TCP Selective Acknowledgement 522
    Monitoring with Amazon CloudWatch 523
    Amazon S3 CloudWatch Metrics 523
    Amazon S3 CloudWatch Dimensions 524
    Accessing Metrics in Amazon CloudWatch 524
    Related Resources 525
    Logging API Calls with AWS CloudTrail 526
    Amazon S3 Information in CloudTrail 526
    Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs 528
    Understanding Amazon S3 Log File Entries 528
    Related Resources 530
    BitTorrent 531
    How You are Charged for BitTorrent Delivery 531
    Using BitTorrent to Retrieve Objects Stored in Amazon S3 532
    Publishing Content Using Amazon S3 and BitTorrent 533
    Amazon DevPay 534
    Amazon S3 Customer Data Isolation 534
    Example 535
    Amazon DevPay Token Mechanism 535
    Amazon S3 and Amazon DevPay Authentication 535
    Amazon S3 Bucket Limitation 536
    Amazon S3 and Amazon DevPay Process 537
    Additional Information 537
    Error Handling 538
    The REST Error Response 538
    Response Headers 539
    Error Response 539
    The SOAP Error Response 540
    Amazon S3 Error Best Practices 540
    Retry InternalErrors 540
    Tune Application for Repeated SlowDown errors 540
    Isolate Errors 541
    Troubleshooting Amazon S3 542
    General Getting my Amazon S3 request IDs 542
    Using HTTP 542
    Using a Web Browser 543
    Using an AWS SDK 543
    Using the AWS CLI 544
    Using Windows PowerShell 544
    Related Topics 544
    Server Access Logging 546
    Overview 546
    Log Object Key Format 547
    How are Logs Delivered? 547
    Best Effort Server Log Delivery 547
    Bucket Logging Status Changes Take Effect Over Time 548
    Related Topics 548
    Enabling Logging Using the Console 548
    Enabling Logging Programmatically 550
    Enabling logging 550
    Granting the Log Delivery Group WRITE and READ_ACP Permissions 550
    Example AWS SDK for NET 551
    Log Format 553
    Custom Access Log Information 556
    Programming Considerations for Extensible Server Access Log Format 556
    Additional Logging for Copy Operations 556
    Deleting Log Files 559
    AWS SDKs and Explorers 560
    Specifying Signature Version in Request Authentication 561
    Set Up the AWS CLI 562
    Using the AWS SDK for Java 563
    The Java API Organization 564
    Testing the Java Code Examples 564
    Using the AWS SDK for NET 565
    The NET API Organization 565
    Running the Amazon S3 NET Code Examples 566
    Using the AWS SDK for PHP and Running PHP Examples 566
    AWS SDK for PHP Levels 566
    Running PHP Examples 567
    Related Resources 568
    Using the AWS SDK for Ruby Version 2 568
    The Ruby API Organization 568
    Testing the Ruby Script Examples 568
    Using the AWS SDK for Python (Boto) 569
    Appendices 570
    Appendix A Using the SOAP API 570
    Common SOAP API Elements 570
    Authenticating SOAP Requests 571
    Setting Access Policy with SOAP 571
    Appendix B Authenticating Requests (AWS Signature Version 2) 573
    Authenticating Requests Using the REST API 574
    Signing and Authenticating REST Requests 575
    BrowserBased Uploads Using POST 586
    Resources 602
    Document History 604
    AWS Glossary 614
    What Is Amazon S3?
    Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale
    computing easier for developers.
    Amazon S3 has a simple web services interface that you can use to store and retrieve any amount
    of data, at any time, from anywhere on the web. It gives any developer access to the same highly
    scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global
    network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to
    developers.
    This guide explains the core concepts of Amazon S3, such as buckets and objects, and how to work
    with these resources using the Amazon S3 application programming interface (API).
    How Do I...?
    Information | Relevant Sections
    General product overview and pricing | Amazon S3
    Get a quick hands-on introduction to Amazon S3 | Amazon Simple Storage Service Getting Started Guide
    Learn about Amazon S3 key terminology and concepts | Introduction to Amazon S3 (p. 2)
    How do I work with buckets? | Working with Amazon S3 Buckets (p. 58)
    How do I work with objects? | Working with Amazon S3 Objects (p. 98)
    How do I make requests? | Making Requests (p. 11)
    How do I manage access to my resources? | Managing Access Permissions to Your Amazon S3 Resources (p. 266)
    Introduction to Amazon S3
    This introduction to Amazon Simple Storage Service is intended to give you a detailed summary of this
    web service. After reading this section, you should have a good idea of what it offers and how it can fit
    in with your business.
    Topics
    • Overview of Amazon S3 and This Guide (p. 2)
    • Advantages to Amazon S3 (p. 2)
    • Amazon S3 Concepts (p. 3)
    • Features (p. 6)
    • Amazon S3 Application Programming Interfaces (API) (p. 8)
    • Paying for Amazon S3 (p. 9)
    • Related Services (p. 9)
    Overview of Amazon S3 and This Guide
    Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
    data, at any time, from anywhere on the web.
    This guide describes how you send requests to create buckets, store and retrieve your objects,
    and manage permissions on your resources. The guide also describes access control and the
    authentication process. Access control defines who can access objects and buckets within Amazon S3,
    and the type of access (e.g., READ and WRITE). The authentication process verifies the identity of a
    user who is trying to access Amazon Web Services (AWS).
    Advantages to Amazon S3
    Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness.
    Following are some of the advantages of the Amazon S3 service:
    • Create Buckets – Create and name a bucket that stores data. Buckets are the fundamental
    container in Amazon S3 for data storage.
    • Store data in Buckets – Store an infinite amount of data in a bucket. Upload as many objects as
    you like into an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored
    and retrieved using a unique developer-assigned key.
    • Download data – Download your data or enable others to do so. Download your data any time you
    like, or allow others to do the same.
    • Permissions – Grant or deny access to others who want to upload or download data into your
    Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
    mechanisms can help keep data secure from unauthorized access.
    • Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
    Internet-development toolkit.
    Note
    SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon
    S3 features will not be supported for SOAP. We recommend that you use either the REST
    API or the AWS SDKs.
    Amazon S3 Concepts
    Topics
    • Buckets (p. 3)
    • Objects (p. 3)
    • Keys (p. 4)
    • Regions (p. 4)
    • Amazon S3 Data Consistency Model (p. 4)
    This section describes key concepts and terminology you need to understand to use Amazon S3
    effectively. They are presented in the order you will most likely encounter them.
    Buckets
    A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
    example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is
    addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg.
    Buckets serve several purposes: they organize the Amazon S3 namespace at the highest level, they
    identify the account responsible for storage and data transfer charges, they play a role in access
    control, and they serve as the unit of aggregation for usage reporting.
    You can configure buckets so that they are created in a specific region. For more information, see
    Buckets and Regions (p. 60). You can also configure a bucket so that every time an object is added
    to it, Amazon S3 generates a unique version ID and assigns it to the object. For more information, see
    Versioning (p. 423).
    For more information about buckets, see Working with Amazon S3 Buckets (p. 58).
    Objects
    Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and
    metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs
    that describe the object. These include some default metadata, such as the date last modified, and
    standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time
    the object is stored.
    An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
    see Keys (p. 4) and Versioning (p. 423).
    Keys
    A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
    one key. Because the combination of a bucket, key, and version ID uniquely identify each object,
    Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the
    object itself. Every object in Amazon S3 can be uniquely addressed through the combination of
    the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL
    http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
    "2006-03-01/AmazonS3.wsdl" is the key.
    Regions
    You can choose the geographical region where Amazon S3 will store the buckets you create. You
    might choose a region to optimize latency, minimize costs, or address regulatory requirements.
    Amazon S3 currently supports the following regions:
    • US East (N. Virginia) Region – Uses Amazon S3 servers in Northern Virginia
    • US West (N. California) Region – Uses Amazon S3 servers in Northern California
    • US West (Oregon) Region – Uses Amazon S3 servers in Oregon
    • Asia Pacific (Mumbai) Region – Uses Amazon S3 servers in Mumbai
    • Asia Pacific (Seoul) Region – Uses Amazon S3 servers in Seoul
    • Asia Pacific (Singapore) Region – Uses Amazon S3 servers in Singapore
    • Asia Pacific (Sydney) Region – Uses Amazon S3 servers in Sydney
    • Asia Pacific (Tokyo) Region – Uses Amazon S3 servers in Tokyo
    • EU (Frankfurt) Region – Uses Amazon S3 servers in Frankfurt
    • EU (Ireland) Region – Uses Amazon S3 servers in Ireland
    • South America (São Paulo) Region – Uses Amazon S3 servers in São Paulo
    Objects stored in a region never leave the region unless you explicitly transfer them to another region.
    For example, objects stored in the EU (Ireland) region never leave it. For more information about
    Amazon S3 regions and endpoints, go to Regions and Endpoints in the AWS General Reference.
    Amazon S3 Data Consistency Model
    Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all
    regions, with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to
    find if the object exists) before creating the object, Amazon S3 provides eventual consistency for
    read-after-write.
    Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
    Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read
    might return the old data or the updated data, but it will never return corrupted or partial data.
    Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data
    centers. If a PUT request is successful, your data is safely stored. However, information about the
    changes must replicate across Amazon S3, which can take some time, and so you might observe the
    following behaviors:
    • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the
    change is fully propagated, the object might not appear in the list.
    • A process replaces an existing object and immediately attempts to read it. Until the change is fully
    propagated, Amazon S3 might return the prior data.
    • A process deletes an existing object and immediately attempts to read it. Until the deletion is fully
    propagated, Amazon S3 might return the deleted data.
    • A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is
    fully propagated, Amazon S3 might list the deleted object.
    Note
    Amazon S3 does not currently support object locking. If two PUT requests are simultaneously
    made to the same key, the request with the latest time stamp wins. If this is an issue, you will
    need to build an object-locking mechanism into your application.
    Updates are key-based; there is no way to make atomic updates across keys. For example,
    you cannot make the update of one key dependent on the update of another key unless you
    design this functionality into your application.
    The following table describes the characteristics of an eventually consistent read and a consistent read.
    Eventually Consistent Read    | Consistent Read
    Stale reads possible          | No stale reads
    Lowest read latency           | Potentially higher read latency
    Highest read throughput       | Potentially lower read throughput
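    The list-after-write behavior described above can be observed with a simple polling loop. The following Java
    sketch (the bucket name and key are hypothetical) writes an object and then retries a listing until the new key
    shows up; it is only an illustration of eventual consistency, not a recommended production pattern.

    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ObjectListing;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    public class EventualConsistencyExample {
        public static void main(String[] args) throws InterruptedException {
            // Hypothetical bucket and key names, used for illustration only.
            String bucketName = "examplebucket";
            String key = "reports/2016-11-01.txt";

            AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
            s3client.putObject(bucketName, key, "sample report data");

            // A listing issued immediately after the PUT might not include the new key yet.
            for (int attempt = 1; attempt <= 5; attempt++) {
                ObjectListing listing = s3client.listObjects(bucketName, "reports/");
                boolean found = false;
                for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                    if (summary.getKey().equals(key)) {
                        found = true;
                    }
                }
                System.out.println("Attempt " + attempt + ": key visible = " + found);
                if (found) {
                    break;
                }
                Thread.sleep(1000); // wait before listing again
            }
        }
    }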
    Concurrent Applications
    This section provides examples of eventually consistent and consistent read requests when multiple
    clients are writing to the same items.
    In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2
    (read 2). For a consistent read, R1 and R2 both return color = ruby. For an eventually consistent
    read, R1 and R2 might return color = red, color = ruby, or no results, depending on the amount
    of time that has elapsed.
    In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
    ruby or color = garnet for either a consistent read or an eventually consistent read. Also,
    depending on the amount of time that has elapsed, an eventually consistent read might return no
    results.
    For a consistent read, R2 returns color = garnet. For an eventually consistent read, R2 might
    return color = ruby, color = garnet, or no results, depending on the amount of time that has
    elapsed.
    In the last example, Client 2 performs W2 before Amazon S3 returns a success for W1, so the
    outcome of the final value is unknown (color = garnet or color = brick). Any subsequent reads
    (consistent read or eventually consistent) might return either value. Also, depending on the amount of
    time that has elapsed, an eventually consistent read might return no results.
    Features
    Topics
    • Reduced Redundancy Storage (p. 6)
    • Bucket Policies (p. 7)
    • AWS Identity and Access Management (p. 8)
    • Access Control Lists (p. 8)
    • Versioning (p. 8)
    • Operations (p. 8)
    This section describes important Amazon S3 features.
    Reduced Redundancy Storage
    Customers can store their data using the Amazon S3 Reduced Redundancy Storage (RRS) option.
    RRS enables customers to reduce their costs by storing noncritical, reproducible data at lower levels
    of redundancy than Amazon S3 standard storage. RRS provides a cost-effective, highly available
    solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails,
    transcoded media, or other processed data that can be easily reproduced. The RRS option stores
    objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk
    drive, but it does not replicate objects as many times as standard Amazon S3 storage, and thus is even
    more cost effective.
    RRS provides 99.99% durability of objects over a given year. This durability level corresponds to an
    average expected loss of 0.01% of objects annually.
    AWS charges less for using RRS than for standard Amazon S3 storage. For pricing information, see
    Amazon S3 Pricing.
    For more information, see Storage Classes (p. 103).
    Bucket Policies
    Bucket policies provide centralized access control to buckets and objects based on a variety of
    conditions, including Amazon S3 operations, requesters, resources, and aspects of the request
    (e.g., IP address). The policies are expressed in our access policy language and enable centralized
    management of permissions. The permissions attached to a bucket apply to all of the objects in that
    bucket.
    Individuals as well as companies can use bucket policies. When companies register with Amazon S3,
    they create an account. Thereafter, the company becomes synonymous with the account. Accounts
    are financially responsible for the Amazon resources they (and their employees) create. Accounts have
    the power to grant bucket policy permissions and assign employees permissions based on a variety of
    conditions. For example, an account could create a policy that gives a user write access:
    • To a particular S3 bucket
    • From an account's corporate network
    • During business hours
    • From an account's custom application (as identified by a user agent string)
    An account can grant one application limited read and write access, but allow another to create and
    delete buckets as well. An account could allow several field offices to store their daily reports in a
    single bucket, allowing each office to write only to a certain set of names (e.g., "Nevada/*" or "Utah/*")
    and only from the office's IP address range.
    Unlike access control lists (described below), which can add (grant) permissions only on individual
    objects, policies can either add or deny permissions across all (or a subset) of the objects within a
    bucket. With one request, an account can set the permissions of any number of objects in a bucket. An
    account can use wildcards (similar to regular expression operators) on Amazon resource names (ARNs)
    and other values so that an account can control access to groups of objects that begin with a common
    prefix or end with a given extension, such as .html.
    Only the bucket owner is allowed to associate a policy with a bucket. Policies, written in the access
    policy language, allow or deny requests based on:
    • Amazon S3 bucket operations (such as PUT ?acl) and object operations (such as PUT Object
    or GET Object)
    • Requester
    • Conditions specified in the policy
    An account can control access based on specific Amazon S3 operations, such as GetObject,
    GetObjectVersion, DeleteObject, or DeleteBucket.
    The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user
    agents, HTTP referrer, and transports (HTTP and HTTPS).
    For more information, see Using Bucket Policies and User Policies (p. 308).
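    As an illustration of these elements, the following sketch shows what such a bucket policy might look like. The
    bucket name, account ID, user name, and IP range are hypothetical placeholders; see Using Bucket Policies and
    User Policies (p. 308) for the policy language reference and tested examples.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowUploadsFromCorporateNetwork",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::examplebucket/Nevada/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "192.0.2.0/24"}
          }
        }
      ]
    }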
    AWS Identity and Access Management
    You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources. For
    example, you can use IAM with Amazon S3 to control the type of access a user or group of users
    has to specific parts of an Amazon S3 bucket your AWS account owns.
    For more information about IAM, see the following:
    • Identity and Access Management (IAM)
    • Getting Started
    • IAM User Guide
    Access Control Lists
    You can control access to each of your buckets and objects using an access control list (ACL). For more
    information, see Managing Access with ACLs (p. 364).
    Versioning
    Versioning enables you to keep multiple versions of an object in the same bucket. For more information, see
    Object Versioning (p. 106).
    Operations
    Following are the most common operations you'll execute through the API.
    Common Operations
    • Create a Bucket – Create and name your own bucket in which to store your objects.
    • Write an Object – Store data by creating or overwriting an object. When you write an object, you
    specify a unique key in the namespace of your bucket. This is also a good time to specify any access
    control you want on the object.
    • Read an Object – Read data back. You can download the data via HTTP or BitTorrent.
    • Delete an Object – Delete some of your data.
    • List Keys – List the keys contained in one of your buckets. You can filter the key list based on a
    prefix.
    These and all other operations are described in detail later in this guide.
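    The following Java sketch strings these common operations together using the AWS SDK for Java. The bucket and
    key names are hypothetical, and the snippet only shows the general shape of each call; complete, tested examples
    appear later in this guide.

    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ObjectListing;
    import com.amazonaws.services.s3.model.S3Object;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    public class CommonOperationsSketch {
        public static void main(String[] args) throws Exception {
            AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

            String bucketName = "examplebucket";   // hypothetical bucket name
            String key = "notes/hello.txt";        // hypothetical object key

            // Create a bucket.
            s3client.createBucket(bucketName);

            // Write an object.
            s3client.putObject(bucketName, key, "Hello, Amazon S3!");

            // Read the object back.
            S3Object object = s3client.getObject(bucketName, key);
            System.out.println("Content type: " + object.getObjectMetadata().getContentType());
            object.close();

            // List keys that begin with a prefix.
            ObjectListing listing = s3client.listObjects(bucketName, "notes/");
            for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                System.out.println("Found key: " + summary.getKey());
            }

            // Delete the object.
            s3client.deleteObject(bucketName, key);
        }
    }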
    Amazon S3 Application Programming Interfaces
    (API)
    The Amazon S3 architecture is designed to be programming language-neutral, using our supported
    interfaces to store and retrieve objects.
    Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences.
    For example, in the REST interface, metadata is returned in HTTP headers. Because we only support
    HTTP requests of up to 4 KB (not including the body), the amount of metadata you can supply is
    restricted.
    Note
    SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
    features will not be supported for SOAP. We recommend that you use either the REST API or
    the AWS SDKs.
    The REST Interface
    The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to
    create, fetch, and delete buckets and objects.
    You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
    objects, as long as they are anonymously readable.
    The REST API uses the standard HTTP headers and status codes, so that standard browsers and
    toolkits work as expected. In some areas, we have added functionality to HTTP (for example, we
    added headers to support access control). In these cases, we have done our best to add the new
    functionality in a way that matched the style of standard HTTP usage.
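    For example, if the photos/puppy.jpg object from the earlier bucket example were made anonymously readable, any
    HTTP client could fetch it. The following curl command is a minimal sketch of such a request; the bucket and key
    are the hypothetical names used earlier in this guide.

    curl -o puppy.jpg http://johnsmith.s3.amazonaws.com/photos/puppy.jpg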
    The SOAP Interface
    Note
    SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
    features will not be supported for SOAP. We recommend that you use either the REST API or
    the AWS SDKs.
    The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common
    way to use SOAP is to download the WSDL (go to http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
    use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and
    then write code that uses the bindings to call Amazon S3.
    Paying for Amazon S3
    Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your
    application. Most storage providers force you to purchase a predetermined amount of storage and
    network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high
    overage fees. If you do not exceed that capacity, you pay as though you used it all.
    Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
    This gives developers a variable-cost service that can grow with their business while enjoying the cost
    advantages of Amazon's infrastructure.
    Before storing anything in Amazon S3, you need to register with the service and provide a payment
    instrument that will be charged at the end of each month. There are no setup fees to begin using the
    service. At the end of the month, your payment instrument is automatically charged for that month's
    usage.
    For information about paying for Amazon S3 storage, see Amazon S3 Pricing.
    Related Services
    Once you load your data into Amazon S3, you can use it with other services that we provide. The
    following services are the ones you might use most frequently:
    • Amazon Elastic Compute Cloud – This web service provides virtual compute resources in the
    cloud. For more information, go to the Amazon EC2 product details page.
    • Amazon EMR – This web service enables businesses, researchers, data analysts, and developers
    to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework
    running on the web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, go to
    the Amazon EMR product details page.
    • AWS Import/Export – AWS Import/Export enables you to mail a storage device, such as a
    RAID drive, to Amazon so that we can upload your data (typically terabytes) into Amazon S3. For more
    information, go to the AWS Import/Export Developer Guide.
    Making Requests
    Topics
    • About Access Keys (p. 11)
    • Request Endpoints (p. 13)
    • Making Requests to Amazon S3 over IPv6 (p. 13)
    • Making Requests Using the AWS SDKs (p. 19)
    • Making Requests Using the REST API (p. 49)
    Amazon S3 is a REST service. You can send requests to Amazon S3 using the REST API or the AWS
    SDK (see Sample Code and Libraries) wrapper libraries that wrap the underlying Amazon S3 REST
    API, simplifying your programming tasks.
    Every interaction with Amazon S3 is either authenticated or anonymous. Authentication is a process
    of verifying the identity of the requester trying to access an Amazon Web Services (AWS) product.
    Authenticated requests must include a signature value that authenticates the request sender. The
    signature value is, in part, generated from the requester's AWS access keys (access key ID and secret
    access key). For more information about getting access keys, see How Do I Get Security Credentials?
    in the AWS General Reference.
    If you are using the AWS SDK, the libraries compute the signature from the keys you provide.
    However, if you make direct REST API calls in your application, you must write the code to compute
    the signature and add it to the request.
    About Access Keys
    The following sections review the types of access keys that you can use to make authenticated
    requests.
    AWS Account Access Keys
    The account access keys provide full access to the AWS resources owned by the account. The
    following are examples of access keys:
    • Access key ID (a 20-character, alphanumeric string). For example: AKIAIOSFODNN7EXAMPLE
    • Secret access key (a 40-character string). For example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    The access key ID uniquely identifies an AWS account. You can use these access keys to send
    authenticated requests to Amazon S3.
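    As a minimal sketch, the following Java fragment shows how an access key pair could be supplied directly to the
    AWS SDK for Java when constructing a client. The key values are the documentation placeholders shown above, not
    real credentials; in practice, a credentials file or profile (as used in the examples later in this guide)
    avoids embedding keys in source code.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class AccessKeySketch {
        public static void main(String[] args) {
            // Placeholder credentials; substitute your own access key ID and secret access key.
            BasicAWSCredentials credentials = new BasicAWSCredentials(
                    "AKIAIOSFODNN7EXAMPLE",
                    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");

            AmazonS3 s3client = new AmazonS3Client(credentials);

            // The client signs each request with the supplied keys.
            System.out.println("Buckets owned by this account: " + s3client.listBuckets().size());
        }
    }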
    IAM User Access Keys
    You can create one AWS account for your company; however, there may be several employees in
    the organization who need access to your organization's AWS resources. Sharing your AWS account
    access keys reduces security, and creating individual AWS accounts for each employee might not
    be practical. Also, you cannot easily share resources such as buckets and objects because they are
    owned by different accounts. To share resources, you must grant permissions, which is additional
    work.
    In such scenarios, you can use AWS Identity and Access Management (IAM) to create users under
    your AWS account with their own access keys and attach IAM user policies granting appropriate
    resource access permissions to them. To better manage these users, IAM enables you to create
    groups of users and grant group-level permissions that apply to all users in that group.
    These users are referred to as IAM users that you create and manage within AWS. The parent account
    controls a user's ability to access AWS. Any resources an IAM user creates are under the control of,
    and paid for by, the parent AWS account. These IAM users can send authenticated requests to Amazon
    S3 using their own security credentials. For more information about creating and managing users
    under your AWS account, go to the AWS Identity and Access Management product details page.
    Temporary Security Credentials
    In addition to creating IAM users with their own access keys, IAM also enables you to grant temporary
    security credentials (temporary access keys and a security token) to any IAM user to enable them
    to access your AWS services and resources. You can also manage users in your system outside
    AWS. These are referred to as federated users. Additionally, users can be applications that you create to
    access your AWS resources.
    IAM provides the AWS Security Token Service API for you to request temporary security credentials.
    You can use either the AWS STS API or the AWS SDK to request these credentials. The API returns
    temporary security credentials (access key ID and secret access key) and a security token. These
    credentials are valid only for the duration you specify when you request them. You use the access key
    ID and secret key the same way you use them when sending requests using your AWS account or IAM
    user access keys. In addition, you must include the token in each request you send to Amazon S3.
    An IAM user can request these temporary security credentials for their own use or hand them out to
    federated users or applications. When requesting temporary security credentials for federated users,
    you must provide a user name and an IAM policy defining the permissions you want to associate with
    these temporary security credentials. The federated user cannot get more permissions than the parent
    IAM user who requested the temporary credentials.
    You can use these temporary security credentials in making requests to Amazon S3. The API libraries
    compute the necessary signature value using those credentials to authenticate your request. If you
    send requests using expired credentials, Amazon S3 denies the request.
    For information on signing requests using temporary security credentials in your REST API requests,
    see Signing and Authenticating REST Requests (p. 575). For information about sending requests
    using AWS SDKs, see Making Requests Using the AWS SDKs (p. 19).
    For more information about IAM support for temporary security credentials, see Temporary Security
    Credentials in the IAM User Guide.
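    As a rough sketch of this flow with the AWS SDK for Java, the following fragment requests session credentials
    from AWS STS and then uses them (including the session token) to create an Amazon S3 client. The duration value
    is an arbitrary example; complete, tested walkthroughs appear in Making Requests Using IAM User Temporary
    Credentials (p. 25).

    import com.amazonaws.auth.BasicSessionCredentials;
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
    import com.amazonaws.services.securitytoken.model.Credentials;
    import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

    public class TemporaryCredentialsSketch {
        public static void main(String[] args) {
            // Request temporary credentials from AWS STS using long-term IAM user credentials.
            AWSSecurityTokenServiceClient stsClient =
                    new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

            GetSessionTokenRequest request = new GetSessionTokenRequest()
                    .withDurationSeconds(3600); // example duration: one hour

            Credentials sessionCredentials = stsClient.getSessionToken(request).getCredentials();

            // The temporary access key, secret key, and session token are used together.
            BasicSessionCredentials temporaryCredentials = new BasicSessionCredentials(
                    sessionCredentials.getAccessKeyId(),
                    sessionCredentials.getSecretAccessKey(),
                    sessionCredentials.getSessionToken());

            AmazonS3 s3client = new AmazonS3Client(temporaryCredentials);
            System.out.println("Buckets visible with temporary credentials: " + s3client.listBuckets().size());
        }
    }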
    For added security, you can require multi-factor authentication (MFA) when accessing your Amazon S3
    resources by configuring a bucket policy. For information, see Adding a Bucket Policy to Require MFA
    Authentication (p. 339). After you require MFA to access your Amazon S3 resources, the only way
    you can access these resources is by providing temporary credentials that are created with an MFA
    key. For more information, see the AWS Multi-Factor Authentication detail page and Configuring
    MFA-Protected API Access in the IAM User Guide.
    Request Endpoints
    You send REST requests to the service's predefined endpoint. For a list of all AWS services and their
    corresponding endpoints, go to Regions and Endpoints in the AWS General Reference.
    Making Requests to Amazon S3 over IPv6
    Amazon Simple Storage Service (Amazon S3) supports the ability to access S3 buckets using the
    Internet Protocol version 6 (IPv6), in addition to the IPv4 protocol. Amazon S3 dual-stack endpoints
    support requests to S3 buckets over IPv6 and IPv4. There are no additional charges for accessing
    Amazon S3 over IPv6. For more information about pricing, see Amazon S3 Pricing.
    Topics
    • Getting Started Making Requests over IPv6 (p. 13)
    • Using IPv6 Addresses in IAM Policies (p. 14)
    • Testing IP Address Compatibility (p. 15)
    • Using Amazon S3 Dual-Stack Endpoints (p. 16)
    Getting Started Making Requests over IPv6
    To make a request to an S3 bucket over IPv6, you need to use a dual-stack endpoint. The next section
    describes how to make requests over IPv6 by using dual-stack endpoints.
    The following are some things you should know before trying to access a bucket over IPv6:
    • The client and the network accessing the bucket must be enabled to use IPv6.
    • Both virtual hosted-style and path-style requests are supported for IPv6 access. For more
    information, see Amazon S3 Dual-Stack Endpoints (p. 16).
    • If you use source IP address filtering in your AWS Identity and Access Management (IAM) user
    or bucket policies, you need to update the policies to include IPv6 address ranges. For more
    information, see Using IPv6 Addresses in IAM Policies (p. 14).
    • When using IPv6, server access log files output IP addresses in an IPv6 format. You need to update
    existing tools, scripts, and software that you use to parse Amazon S3 log files so that they can
    parse the IPv6-formatted Remote IP addresses. For more information, see Server Access Log
    Format (p. 553) and Server Access Logging (p. 546).
    Note
    If you experience issues related to the presence of IPv6 addresses in log files, contact AWS
    Support.
    Making Requests over IPv6 by Using Dual-Stack Endpoints
    You make requests with Amazon S3 API calls over IPv6 by using dual-stack endpoints. The Amazon
    S3 API operations work the same way whether you're accessing Amazon S3 over IPv6 or over IPv4.
    Performance should be the same, too.
    When using the REST API, you access a dual-stack endpoint directly. For more information, see
    Dual-Stack Endpoints (p. 16).
    When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter
    or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an
    override of the Amazon S3 endpoint in the config file.
    You can use a dual-stack endpoint to access a bucket over IPv6 from any of the following:
    • The AWS CLI, see Using Dual-Stack Endpoints from the AWS CLI (p. 16).
    • The AWS SDKs, see Using Dual-Stack Endpoints from the AWS SDKs (p. 17).
    • The REST API, see Making Requests to Dual-Stack Endpoints by Using the REST API (p. 50).
    Features Not Available over IPv6
    The following features are not currently supported when accessing an S3 bucket over IPv6:
    • Static website hosting from an S3 bucket
    • Amazon S3 Transfer Acceleration
    • BitTorrent
    Amazon S3 IPv6 Access from Amazon EC2
    Amazon EC2 instances currently support IPv4 only; they cannot reach Amazon S3 over IPv6. If you
    use the dual-stack endpoints, normally the OS or applications automatically establish the connection
    over IPv4. Before EC2 (VPC) supports IPv6, we recommend that you continue using the standard
    IPv4-only endpoints from EC2 instances, or conduct sufficient testing before switching to the
    dual-stack endpoints. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General
    Reference.
    Using IPv6 Addresses in IAM Policies
    Before trying to access a bucket using IPv6, you must ensure that any IAM user or S3 bucket policies
    that are used for IP address filtering are updated to include IPv6 address ranges. IP address filtering
    policies that are not updated to handle IPv6 addresses may result in clients incorrectly losing or
    gaining access to the bucket when they start using IPv6. For more information about managing access
    permissions with IAM, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
    IAM policies that filter IP addresses use IP Address Condition Operators. The following bucket policy
    identifies the 54.240.143.* range of allowed IPv4 addresses by using IP address condition operators.
    Any IP addresses outside of this range will be denied access to the bucket (examplebucket). Since
    all IPv6 addresses are outside of the allowed range, this policy prevents IPv6 addresses from being
    able to access examplebucket.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
          }
        }
      ]
    }
    You can modify the bucket policy's Condition element to allow both IPv4 (54.240.143.0/24) and
    IPv6 (2001:DB8:1234:5678::/64) address ranges, as shown in the following example. You can use
    the same type of Condition block shown in the example to update both your IAM user and bucket
    policies.
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": [
          "54.240.143.0/24",
          "2001:DB8:1234:5678::/64"
        ]
      }
    }
    Before using IPv6, you must update all relevant IAM user and bucket policies that use IP address
    filtering to allow IPv6 address ranges. We recommend that you update your IAM policies with your
    organization's IPv6 address ranges in addition to your existing IPv4 address ranges. For an example
    of a bucket policy that allows access over both IPv6 and IPv4, see Restricting Access to Specific IP
    Addresses (p. 336).
    You can review your IAM user policies using the IAM console at https://console.aws.amazon.com/iam/.
    For more information about IAM, see the IAM User Guide. For information about editing S3 bucket
    policies, see Edit Bucket Permissions in the Amazon Simple Storage Service Console User Guide.
    Testing IP Address Compatibility
    If you are using Linux/Unix or Mac OS X, you can test whether you can access a dual-stack
    endpoint over IPv6 by using the curl command, as shown in the following example:
    curl -v https://s3.dualstack.us-west-2.amazonaws.com/
    You get back information similar to the following example. If you are connected over IPv6, the
    connected IP address will be an IPv6 address.
    * About to connect() to s3-us-west-2.amazonaws.com port 80 (#0)
    *   Trying IPv6 address... connected
    * Connected to s3.dualstack.us-west-2.amazonaws.com (IPv6 address) port 80
    (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1
    OpenSSL/1.0.1t zlib/1.2.3
    > Host: s3.dualstack.us-west-2.amazonaws.com
    If you are using Microsoft Windows 7, you can test whether you can access a dual-stack endpoint over
    IPv6 or IPv4 by using the ping command, as shown in the following example:
    ping ipv6.s3.dualstack.us-west-2.amazonaws.com
    Using Amazon S3 Dual-Stack Endpoints
    Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. This section
    describes how to use dual-stack endpoints.
    Topics
    • Amazon S3 Dual-Stack Endpoints (p. 16)
    • Using Dual-Stack Endpoints from the AWS CLI (p. 16)
    • Using Dual-Stack Endpoints from the AWS SDKs (p. 17)
    • Using Dual-Stack Endpoints from the REST API (p. 18)
    Amazon S3 Dual-Stack Endpoints
    When you make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 or an IPv4
    address. For more information about accessing a bucket over IPv6, see Making Requests to Amazon
    S3 over IPv6 (p. 13).
    When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name
    (URI). You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a
    path-style endpoint name. Amazon S3 supports only regional dual-stack endpoint names, which means
    that you must specify the region as part of the name.
    Use the following naming conventions for the dual-stack virtual hosted-style and path-style endpoint
    names:
    • Virtual hosted-style dual-stack endpoint:
    bucketname.s3.dualstack.aws-region.amazonaws.com

    • Path-style dual-stack endpoint:
    s3.dualstack.aws-region.amazonaws.com/bucketname
    For more information about endpoint name style, see Accessing a Bucket (p. 60). For a list of
    Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference.
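    For example, for a hypothetical bucket named examplebucket in the us-west-2 region, the two styles would look
    like the following.

    examplebucket.s3.dualstack.us-west-2.amazonaws.com
    s3.dualstack.us-west-2.amazonaws.com/examplebucket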
    When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter
    or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an
    override of the Amazon S3 endpoint in the config file. The following sections describe how to use
    dual-stack endpoints from the AWS CLI and the AWS SDKs.
    Using Dual-Stack Endpoints from the AWS CLI
    This section provides examples of AWS CLI commands used to make requests to a dual-stack
    endpoint. For instructions on setting up the AWS CLI, see Set Up the AWS CLI (p. 562).
    You set the configuration value use_dualstack_endpoint to true in a profile in your AWS Config
    file to direct all Amazon S3 requests made by the s3 and s3api AWS CLI commands to the dual-stack
    endpoint for the specified region. You specify the region in the config file or in a command using the
    --region option.
    When using dual-stack endpoints with the AWS CLI, both path and virtual addressing styles are
    supported. The addressing style, set in the config file, controls if the bucket name is in the hostname or
    part of the URL. By default, the CLI will attempt to use virtual style where possible, but will fall back to
    path style if necessary. For more information, see AWS CLI Amazon S3 Configuration.
    You can also make configuration changes by using a command, as shown in the following example,
    which sets use_dualstack_endpoint to true and addressing_style to virtual in the default
    profile.
    aws configure set default.s3.use_dualstack_endpoint true
    aws configure set default.s3.addressing_style virtual
    If you want to use a dual-stack endpoint for specified AWS CLI commands only (not all commands),
    you can use either of the following methods:
    • You can use the dual-stack endpoint per command by setting the --endpoint-url parameter
    to https://s3.dualstack.aws-region.amazonaws.com or http://s3.dualstack.aws-region.amazonaws.com
    for any s3 or s3api command.
    aws s3api list-objects --bucket bucketname --endpoint-url https://s3.dualstack.aws-region.amazonaws.com
    • You can set up separate profiles in your AWS Config file. For example, create one profile that sets
    use_dualstack_endpoint to true and a profile that does not set use_dualstack_endpoint.
    When you run a command, specify which profile you want to use, depending upon whether or not
    you want to use the dual-stack endpoint.
    Note
    You currently cannot use transfer acceleration with dual-stack endpoints. For more
    information, see Using Transfer Acceleration from the AWS Command Line Interface (AWS
    CLI) (p. 84).
    Using Dual-Stack Endpoints from the AWS SDKs
    This section provides examples of how to access a dual-stack endpoint by using the AWS SDKs.
    AWS Java SDK Dual-Stack Endpoint Example
    You use the setS3ClientOptions method in the AWS Java SDK to enable the use of a dual-stack
    endpoint when creating an instance of AmazonS3Client, as shown in the following example.
    AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
    s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));
    s3Client.setS3ClientOptions(S3ClientOptions.builder().enableDualstack().build());
    If you are using the AWS Java SDK on Microsoft Windows, you might have to set the following Java
    virtual machine (JVM) property:
    java.net.preferIPv6Addresses=true
    Note
    You currently cannot use transfer acceleration with dual-stack endpoints. The
    Java SDK will throw an exception if you configure both enableDualstack and
    setAccelerateModeEnabled on the config object. For more information, see Using
    Transfer Acceleration from the AWS SDK for Java (p. 85).
    For information about how to create and test a working Java sample, see Testing the Java Code
    Examples (p. 564).
    AWS .NET SDK Dual-Stack Endpoint Example
    When using the AWS SDK for .NET, you use the AmazonS3Config class to enable the use of a
    dual-stack endpoint, as shown in the following example.
    var config = new AmazonS3Config
    {
        UseDualstackEndpoint = true,
        RegionEndpoint = RegionEndpoint.USWest2
    };
    using (var s3Client = new AmazonS3Client(config))
    {
        var request = new ListObjectsRequest
        {
            BucketName = "myBucket"
        };
        var response = s3Client.ListObjects(request);
    }
    For a full .NET sample for listing objects, see Listing Keys Using the AWS SDK for .NET (p. 233).
    Note
    You currently cannot use transfer acceleration with dual-stack endpoints. The .NET
    SDK will throw an exception if you configure both UseAccelerateEndpoint and
    UseDualstackEndpoint on the config object. For more information, see Using Transfer
    Acceleration from the AWS SDK for .NET (p. 88).
    For information about how to create and test a working .NET sample, see Running the Amazon
    S3 .NET Code Examples (p. 566).
    Using Dual-Stack Endpoints from the REST API
    For information about making requests to dual-stack endpoints by using the REST API, see Making
    Requests to Dual-Stack Endpoints by Using the REST API (p. 50).
    Making Requests Using the AWS SDKs
    Topics
    • Making Requests Using AWS Account or IAM User Credentials (p. 20)
    • Making Requests Using IAM User Temporary Credentials (p. 25)
    • Making Requests Using Federated User Temporary Credentials (p. 36)
    You can send authenticated requests to Amazon S3 using either the AWS SDK or by making the
    REST API calls directly in your application. The AWS SDK API uses the credentials that you provide
    to compute the signature for authentication. If you use the REST API directly in your applications, you
    must write the necessary code to compute the signature for authenticating your request. For a list of
    available AWS SDKs, go to Sample Code and Libraries.
    Making Requests Using AWS Account or IAM User
    Credentials
    You can use your AWS account or IAM user security credentials to send authenticated requests to
    Amazon S3. This section provides examples of how you can send authenticated requests using the
    AWS SDK for Java, AWS SDK for .NET, and AWS SDK for PHP. For a list of available AWS SDKs, go
    to Sample Code and Libraries.
    Topics
    • Making Requests Using AWS Account or IAM User Credentials – AWS SDK for Java (p. 20)
    • Making Requests Using AWS Account or IAM User Credentials – AWS SDK for .NET (p. 21)
    • Making Requests Using AWS Account or IAM User Credentials – AWS SDK for PHP (p. 23)
    • Making Requests Using AWS Account or IAM User Credentials – AWS SDK for Ruby (p. 24)
    For more information about setting up your AWS credentials for use with the AWS SDK for Java, see
    Testing the Java Code Examples (p. 564).
    Making Requests Using AWS Account or IAM User Credentials –
    AWS SDK for Java
    The following tasks guide you through using the Java classes to send authenticated requests using
    your AWS account credentials or IAM user credentials.
    Making Requests Using Your AWS Account or IAM User Credentials
    1. Create an instance of the AmazonS3Client class.
    2. Execute one of the AmazonS3Client methods to send requests to Amazon S3. The
    client generates the necessary signature value from your credentials and includes it in the
    request it sends to Amazon S3.
    The following Java code sample demonstrates the preceding tasks.
    AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

    // Send sample request (list objects in a given bucket).
    ObjectListing objectListing = s3client.listObjects(new
            ListObjectsRequest().withBucketName(bucketName));
    Note
    You can create the AmazonS3Client class without providing your security credentials.
    Requests sent using this client are anonymous requests, without a signature. Amazon S3
    returns an error if you send anonymous requests for a resource that is not publicly available.
    To see how to make requests using your AWS credentials within the context of an example of listing
    all the object keys in your bucket, see Listing Keys Using the AWS SDK for Java (p. 231). For
    more examples, see Working with Amazon S3 Objects (p. 98) and Working with Amazon S3
    Buckets (p. 58). You can test these examples using your AWS account or IAM user credentials.
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using AWS Account or IAM User Credentials
AWS SDK for .NET
The following tasks guide you through using the .NET classes to send authenticated requests using
your AWS account or IAM user credentials.
Making Requests Using Your AWS Account or IAM User Credentials
1 Create an instance of the AmazonS3Client class.
2 Execute one of the AmazonS3Client methods to send requests to Amazon S3. The
client generates the necessary signature from your credentials and includes it in the
request it sends to Amazon S3.
The following C# code sample demonstrates the preceding tasks.
For information on running the .NET examples in this guide and for instructions on how to store your
credentials in a configuration file, see Running the Amazon S3 .NET Code Examples (p 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class MakeS3Request
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                Console.WriteLine("Listing objects stored in a bucket");
                ListingObjects();
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void ListingObjects()
        {
            try
            {
                ListObjectsRequest request = new ListObjectsRequest
                {
                    BucketName = bucketName,
                    MaxKeys = 2
                };

                do
                {
                    ListObjectsResponse response = client.ListObjects(request);

                    // Process the response.
                    foreach (S3Object entry in response.S3Objects)
                    {
                        Console.WriteLine("key = {0} size = {1}",
                            entry.Key, entry.Size);
                    }

                    // If the response is truncated, set the marker to get the
                    // next set of keys.
                    if (response.IsTruncated)
                    {
                        request.Marker = response.NextMarker;
                    }
                    else
                    {
                        request = null;
                    }
                } while (request != null);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when listing objects",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Note
You can create the AmazonS3Client client without providing your security credentials.
Requests sent using this client are anonymous requests, without a signature. Amazon S3
returns an error if you send anonymous requests for a resource that is not publicly available.
For working examples, see Working with Amazon S3 Objects (p 98) and Working with Amazon S3
Buckets (p 58). You can test these examples using your AWS account or IAM user credentials.
For example, to list all the object keys in your bucket, see Listing Keys Using the AWS SDK
for .NET (p 233).
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using AWS Account or IAM User Credentials
    AWS SDK for PHP
This topic guides you through using a class from the AWS SDK for PHP to send authenticated
requests using your AWS account or IAM user credentials.
Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
installed.
Making Requests Using Your AWS Account or IAM User Credentials
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
method.
2 Execute one of the Aws\S3\S3Client methods to send requests to Amazon S3. For
example, you can use the Aws\S3\S3Client::listBuckets() method to send a request to list
all the buckets for your account. The client API generates the necessary signature using
your credentials and includes it in the request it sends to Amazon S3.
The following PHP code sample demonstrates the preceding tasks and illustrates how the client makes
a request using your security credentials to list all the buckets for your account.
use Aws\S3\S3Client;

// Instantiate the S3 client with your AWS credentials.
$s3 = S3Client::factory();

$result = $s3->listBuckets();
For working examples, see Working with Amazon S3 Objects (p 98) and Working with Amazon S3
Buckets (p 58). You can test these examples using your AWS account or IAM user credentials.
For an example of listing object keys in a bucket, see Listing Keys Using the AWS SDK for
PHP (p 235).
    Related Resources
    • AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listBuckets() Method
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Making Requests Using AWS Account or IAM User Credentials
    AWS SDK for Ruby
The following tasks guide you through using the AWS SDK for Ruby to send authenticated requests
using your AWS account credentials or IAM user credentials.
Making Requests Using Your AWS Account or IAM User Credentials
1 Create an instance of the AWS::S3 class.
2 Make a request to Amazon S3 by enumerating objects in a bucket using the buckets
method of AWS::S3. The client generates the necessary signature value from your
credentials and includes it in the request it sends to Amazon S3.
The following Ruby code sample demonstrates the preceding tasks.
# Get an instance of the S3 interface using the specified credentials
# configuration.
s3 = AWS::S3.new()

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
puts bucket
Note
You can create the AWS::S3 client without providing your security credentials. Requests sent
using this client are anonymous requests, without a signature. Amazon S3 returns an error if
you send anonymous requests for a resource that is not publicly available.
For working examples, see Working with Amazon S3 Objects (p 98) and Working with Amazon S3
Buckets (p 58). You can test these examples using your AWS account or IAM user credentials.
    Making Requests Using IAM User Temporary
    Credentials
    Topics
    • Making Requests Using IAM User Temporary Credentials AWS SDK for Java (p 25)
• Making Requests Using IAM User Temporary Credentials AWS SDK for .NET (p 28)
    • Making Requests Using AWS Account or IAM User Temporary Credentials AWS SDK for
    PHP (p 31)
    • Making Requests Using IAM User Temporary Credentials AWS SDK for Ruby (p 34)
An AWS account or an IAM user can request temporary security credentials and use them to send
authenticated requests to Amazon S3. This section provides examples of how to use the AWS SDK
for Java, .NET, and PHP to obtain temporary security credentials and use them to authenticate your
requests to Amazon S3.
    Making Requests Using IAM User Temporary Credentials
    AWS SDK for Java
An IAM user or an AWS account can request temporary security credentials (see Making
Requests (p 11)) using the AWS SDK for Java and use them to access Amazon S3. These credentials
expire after the session duration. By default, the session duration is one hour. If you use IAM user
credentials, you can specify the duration, between 1 and 36 hours, when requesting the temporary
security credentials.
Making Requests Using IAM User Temporary Security Credentials
1 Create an instance of the AWS Security Token Service client,
AWSSecurityTokenServiceClient.
2 Start a session by calling the GetSessionToken method of the STS client you
created in the preceding step. You provide session information to this method using a
GetSessionTokenRequest object.
The method returns your temporary security credentials.
3 Package the temporary security credentials in an instance of the
BasicSessionCredentials object so you can provide the credentials to your Amazon
S3 client.
4 Create an instance of the AmazonS3Client class by passing in the temporary security
credentials.
You send the requests to Amazon S3 using this client. If you send requests using
expired credentials, Amazon S3 returns an error.
The following Java code sample demonstrates the preceding tasks.
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AWSSecurityTokenServiceClient stsClient =
        new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

// Manually start a session.
GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
// Following duration can be set only if temporary credentials are requested
// by an IAM user.
getSessionTokenRequest.setDurationSeconds(7200);

GetSessionTokenResult sessionTokenResult =
        stsClient.getSessionToken(getSessionTokenRequest);
Credentials sessionCredentials = sessionTokenResult.getCredentials();

// Package the temporary security credentials as
// a BasicSessionCredentials object for an Amazon S3 client object to use.
BasicSessionCredentials basicSessionCredentials =
        new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                sessionCredentials.getSecretAccessKey(),
                sessionCredentials.getSessionToken());

// The following will be part of your less trusted code. You provide temporary security
// credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the basicSessionCredentials object.
AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

// Test. For example, get object keys in a bucket.
ObjectListing objects = s3.listObjects(bucketName);
    Example
Note
If you obtain temporary security credentials using your AWS account credentials, the
temporary security credentials are valid for only one hour. You can specify the session
duration only if you use IAM user credentials to request a session.
The following Java code example lists the object keys in the specified bucket. For illustration, the code
example obtains temporary security credentials for a default one-hour session and uses them to send
an authenticated request to Amazon S3.
If you want to test the sample using IAM user credentials, you will need to create an IAM user under
your AWS account. For more information about how to create an IAM user, see Creating Your First
IAM User and Administrators Group in the IAM User Guide.
import java.io.IOException;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;
import com.amazonaws.services.securitytoken.model.GetSessionTokenResult;
import com.amazonaws.services.s3.model.ObjectListing;

public class S3Sample {
    private static String bucketName = "*** Provide bucket name ***";

    public static void main(String[] args) throws IOException {
        AWSSecurityTokenServiceClient stsClient =
                new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

        // Start a session.
        GetSessionTokenRequest getSessionTokenRequest =
                new GetSessionTokenRequest();
        GetSessionTokenResult sessionTokenResult =
                stsClient.getSessionToken(getSessionTokenRequest);
        Credentials sessionCredentials = sessionTokenResult.getCredentials();
        System.out.println("Session Credentials: "
                + sessionCredentials.toString());

        // Package the session credentials as a BasicSessionCredentials
        // object for an S3 client object to use.
        BasicSessionCredentials basicSessionCredentials =
                new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                        sessionCredentials.getSecretAccessKey(),
                        sessionCredentials.getSessionToken());
        AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

        // Test. For example, get object keys for a given bucket.
        ObjectListing objects = s3.listObjects(bucketName);
        System.out.println("No. of Objects = " +
                objects.getObjectSummaries().size());
    }
}
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using IAM User Temporary Credentials
AWS SDK for .NET
An IAM user or an AWS account can request temporary security credentials (see Making
Requests (p 11)) using the AWS SDK for .NET and use them to access Amazon S3. These
credentials expire after the session duration. By default, the session duration is one hour. If you
use IAM user credentials, you can specify the duration, between 1 and 36 hours, when requesting the
temporary security credentials.
    Making Requests Using IAM User Temporary Security Credentials
    1 Create an instance of the AWS Security Token Service client
    AmazonSecurityTokenServiceClient For information about providing credentials
    see Using the AWS SDKs CLI and Explorers (p 560)
    2 Start a session by calling the GetSessionToken method of the STS client you
    created in the preceding step You provide session information to this method using a
    GetSessionTokenRequest object
The method returns your temporary security credentials.
    3 Package up the temporary security credentials in an instance of the
    SessionAWSCredentials object You use this object to provide the temporary
    security credentials to your Amazon S3 client
    4 Create an instance of the AmazonS3Client class by passing in the temporary security
    credentials
    You send requests to Amazon S3 using this client If you send requests using expired
    credentials Amazon S3 returns an error
    The following C# code sample demonstrates the preceding tasks
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AmazonSecurityTokenServiceConfig config = new
                     AmazonSecurityTokenServiceConfig();
AmazonSecurityTokenServiceClient stsClient =
    new AmazonSecurityTokenServiceClient(config);

GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
// Following duration can be set only if temporary credentials are requested
// by an IAM user.
getSessionTokenRequest.DurationSeconds = 7200; // seconds

Credentials credentials =
    stsClient.GetSessionToken(getSessionTokenRequest).GetSessionTokenResult.Credentials;

SessionAWSCredentials sessionCredentials =
    new SessionAWSCredentials(credentials.AccessKeyId,
                              credentials.SecretAccessKey,
                              credentials.SessionToken);

// The following will be part of your less trusted code. You provide temporary security
// credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the sessionCredentials object.
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);

// Test. For example, send a request to list object keys in a bucket.
var response = s3Client.ListObjects(bucketName);
    Example
Note
If you obtain temporary security credentials using your AWS account security credentials, the
temporary security credentials are valid for only one hour. You can specify the session
duration only if you use IAM user credentials to request a session.
The following C# code example lists object keys in the specified bucket. For illustration, the code
example obtains temporary security credentials for a default one-hour session and uses them to send
an authenticated request to Amazon S3.
If you want to test the sample using IAM user credentials, you will need to create an IAM user under
your AWS account. For more information about how to create an IAM user, see Creating Your First
IAM User and Administrators Group in the IAM User Guide.
For instructions on how to create and test a working example, see Running the Amazon S3 .NET Code
Examples (p 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.Runtime;
using Amazon.S3.Model;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class TempCredExplicitSessionStart
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            NameValueCollection appConfig = ConfigurationManager.AppSettings;
            string accessKeyID = appConfig["AWSAccessKey"];
            string secretAccessKeyID = appConfig["AWSSecretKey"];

            try
            {
                Console.WriteLine("Listing objects stored in a bucket");
                SessionAWSCredentials tempCredentials =
                     GetTemporaryCredentials(accessKeyID, secretAccessKeyID);

                // Create a client by providing temporary security credentials.
                using (client = new AmazonS3Client(tempCredentials,
                                         Amazon.RegionEndpoint.USEast1))
                {
                    ListObjectsRequest listObjectRequest =
                                            new ListObjectsRequest();
                    listObjectRequest.BucketName = bucketName;

                    // Send request to Amazon S3.
                    ListObjectsResponse response =
                              client.ListObjects(listObjectRequest);
                    List<S3Object> objects = response.S3Objects;
                    Console.WriteLine("Object count = {0}", objects.Count);

                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message,
                                  s3Exception.InnerException);
            }
            catch (AmazonSecurityTokenServiceException stsException)
            {
                Console.WriteLine(stsException.Message,
                                  stsException.InnerException);
            }
        }

        private static SessionAWSCredentials GetTemporaryCredentials(
                         string accessKeyId, string secretAccessKeyId)
        {
            AmazonSecurityTokenServiceClient stsClient =
                new AmazonSecurityTokenServiceClient(accessKeyId,
                                                     secretAccessKeyId);

            GetSessionTokenRequest getSessionTokenRequest =
                                             new GetSessionTokenRequest();
            getSessionTokenRequest.DurationSeconds = 7200; // seconds

            GetSessionTokenResponse sessionTokenResponse =
                          stsClient.GetSessionToken(getSessionTokenRequest);
            Credentials credentials = sessionTokenResponse.Credentials;

            SessionAWSCredentials sessionCredentials =
                new SessionAWSCredentials(credentials.AccessKeyId,
                                          credentials.SecretAccessKey,
                                          credentials.SessionToken);

            return sessionCredentials;
        }
    }
}
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using AWS Account or IAM User Temporary
    Credentials AWS SDK for PHP
    This topic guides you through using classes from the AWS SDK for PHP to request temporary security
    credentials and use them to access Amazon S3
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
    An IAM user or an AWS Account can request temporary security credentials (see Making
    Requests (p 11)) using the AWS SDK for PHP and use them to access Amazon S3 These credentials
    expire when the session duration expires By default the session duration is one hour If you use
    IAM user credentials you can specify the duration between 1 and 36 hours when requesting the
    temporary security credentials For more information about temporary security credentials see
    Temporary Security Credentials in the IAM User Guide
    Making Requests Using AWS Account or IAM User Temporary Security Credentials
    1 Create an instance of an AWS Security Token Service (AWS STS) client by using the
    Aws\Sts\StsClient class factory() method
2 Execute the Aws\Sts\StsClient::getSessionToken() method to start a session.
The method returns your temporary security credentials.
    3 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class
    factory() method with the temporary security credentials you obtained in the preceding
    step
    Any methods in the S3Client class that you call use the temporary security
    credentials to send authenticated requests to Amazon S3
    The following PHP code sample demonstrates how to request temporary security credentials and use
    them to access Amazon S3
use Aws\Sts\StsClient;
use Aws\S3\S3Client;

// In real applications, the following code is part of your trusted code.
// It has your security credentials that you use to obtain temporary
// security credentials.
$sts = StsClient::factory();

$result = $sts->getSessionToken();

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client using temporary security credentials.
$credentials = $result->get('Credentials');

$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

$result = $s3->listBuckets();
    Note
    If you obtain temporary security credentials using your AWS account security credentials
    the temporary security credentials are valid for only one hour You can specify the session
    duration only if you use IAM user credentials to request a session
    Example of Making an Amazon S3 Request Using Temporary Security Credentials
    The following PHP code example lists object keys in the specified bucket using temporary security
    credentials The code example obtains temporary security credentials for a default one hour session
    and uses them to send authenticated request to Amazon S3 For information about running the PHP
    examples in this guide go to Running PHP Examples (p 567)
    If you want to test the example using IAM user credentials you will need to create an IAM user under
    your AWS Account For information about how to create an IAM user see Creating Your First IAM
    User and Administrators Group in the IAM User Guide For an example of setting session duration
    when using IAM user credentials to request a session see Making Requests Using Federated User
    Temporary Credentials AWS SDK for PHP (p 43)
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

$sts = StsClient::factory();

$credentials = $sts->getSessionToken()->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

try {
    $objects = $s3->getIterator('ListObjects', array(
        'Bucket' => $bucket
    ));

    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
    Related Resources
    • AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient Class
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::getSessionToken() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Making Requests Using IAM User Temporary Credentials
    AWS SDK for Ruby
    An IAM user or an AWS Account can request temporary security credentials (see Making
    Requests (p 11)) using AWS SDK for Ruby and use them to access Amazon S3 These credentials
    expire after the session duration By default the session duration is one hour If you use IAM user
    credentials you can specify the duration between 1 and 36 hours when requesting the temporary
    security credentials
    Making Requests Using IAM User Temporary Security Credentials
1 Create an instance of the AWS Security Token Service client, AWS::STS::Session, by
providing your credentials.
    2 Start a session by calling the new_session method of the STS client that you
    created in the preceding step You provide session information to this method using a
    GetSessionTokenRequest object
    The method returns your temporary security credentials
3 Use the temporary credentials in a new instance of the AWS::S3 class by passing in the
temporary security credentials.
    You send the requests to Amazon S3 using this client If you send requests using
    expired credentials Amazon S3 returns an error
    The following Ruby code sample demonstrates the preceding tasks
# Start a session.
# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.
sts = AWS::STS.new()

session = sts.new_session()
puts "Session expires at: #{session.expires_at.to_s}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
    Example
    Note
    If you obtain temporary security credentials using your AWS account security credentials the
    temporary security credentials are valid for only one hour You can specify session duration
    only if you use IAM user credentials to request a session
    The following Ruby code example lists the object keys in the specified bucket For illustration the code
    example obtains temporary security credentials for a default one hour session and uses them to send
    an authenticated request to Amazon S3
    If you want to test the sample using IAM user credentials you will need to create an IAM user under
    your AWS Account For more information about how to create an IAM user see Creating Your First
    IAM User and Administrators Group in the IAM User Guide
require 'rubygems'
require 'aws-sdk'

# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.

bucket_name = '*** Provide bucket name ***'

# Start a session.
sts = AWS::STS.new()
session = sts.new_session()
puts "Session expires at: #{session.expires_at.to_s}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
puts bucket
    Making Requests Using Federated User Temporary
    Credentials
    Topics
    • Making Requests Using Federated User Temporary Credentials AWS SDK for Java (p 36)
• Making Requests Using Federated User Temporary Credentials AWS SDK for .NET (p 40)
    • Making Requests Using Federated User Temporary Credentials AWS SDK for PHP (p 43)
    • Making Requests Using Federated User Temporary Credentials AWS SDK for Ruby (p 47)
    You can request temporary security credentials and provide them to your federated users or
    applications who need to access your AWS resources This section provides examples of how you can
    use the AWS SDK to obtain temporary security credentials for your federated users or applications and
    send authenticated requests to Amazon S3 using those credentials For a list of available AWS SDKs
    go to Sample Code and Libraries
Note
Both the AWS account and an IAM user can request temporary security credentials
for federated users. However, for added security, only an IAM user with the necessary
permissions should request these temporary credentials to ensure that the federated user
gets at most the permissions of the requesting IAM user. In some applications, you might
find it suitable to create an IAM user with specific permissions for the sole purpose of granting
temporary security credentials to your federated users and applications.
    Making Requests Using Federated User Temporary
    Credentials AWS SDK for Java
    You can provide temporary security credentials for your federated users and applications (see Making
    Requests (p 11)) so they can send authenticated requests to access your AWS resources When
    requesting these temporary credentials from the IAM service you must provide a user name and an
    IAM policy describing the resource permissions you want to grant By default the session duration is
    one hour However if you are requesting temporary credentials using IAM user credentials you can
    explicitly set a different duration value when requesting the temporary security credentials for federated
    users and applications
    Note
    To request temporary security credentials for federated users and applications for added
    security you might want to use a dedicated IAM user with only the necessary access
    permissions The temporary user you create can never get more permissions than the IAM
    user who requested the temporary security credentials For more information go to AWS
    Identity and Access Management FAQs
    Making Requests Using Federated User Temporary Security Credentials
    1 Create an instance of the AWS Security Token Service client
    AWSSecurityTokenServiceClient
    2 Start a session by calling the getFederationToken method of the STS client you
    created in the preceding step
    You will need to provide session information including the user name and an IAM policy
    that you want to attach to the temporary credentials
    This method returns your temporary security credentials
    3 Package the temporary security credentials in an instance of the
    BasicSessionCredentials object You use this object to provide the temporary
    security credentials to your Amazon S3 client
    4 Create an instance of the AmazonS3Client class by passing the temporary security
    credentials
    You send requests to Amazon S3 using this client If you send requests using expired
    credentials Amazon S3 returns an error
    The following Java code sample demonstrates the preceding tasks
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AWSSecurityTokenServiceClient stsClient =
        new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

GetFederationTokenRequest getFederationTokenRequest =
        new GetFederationTokenRequest();
getFederationTokenRequest.setDurationSeconds(7200);
getFederationTokenRequest.setName("User1");

// Define the policy and add it to the request.
Policy policy = new Policy();
// Define the policy here.
// Add the policy to the request.
getFederationTokenRequest.setPolicy(policy.toJson());

GetFederationTokenResult federationTokenResult =
        stsClient.getFederationToken(getFederationTokenRequest);
Credentials sessionCredentials = federationTokenResult.getCredentials();

// Package the session credentials as a BasicSessionCredentials object
// for an S3 client object to use.
BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
        sessionCredentials.getAccessKeyId(),
        sessionCredentials.getSecretAccessKey(),
        sessionCredentials.getSessionToken());

// The following will be part of your less trusted code. You provide temporary security
// credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the basicSessionCredentials object.
AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

// Test. For example, list object keys in a bucket.
ObjectListing objects = s3.listObjects(bucketName);
To set a condition in the policy, create a Condition object and associate it with the policy. The
following code sample shows a condition that allows users from a specified IP range to list objects.
Policy policy = new Policy();

// Allow only a specified IP range.
Condition condition = new
        StringCondition(StringCondition.StringComparisonType.StringLike,
                ConditionFactory.SOURCE_IP_CONDITION_KEY, "192.168.143.*");

policy.withStatements(new Statement(Effect.Allow)
        .withActions(S3Actions.ListObjects)
        .withConditions(condition)
        .withResources(new Resource("arn:aws:s3:::" + bucketName)));

getFederationTokenRequest.setPolicy(policy.toJson());
    Example
The following Java code example lists keys in the specified bucket. In the code example, you first
obtain temporary security credentials for a two-hour session for your federated user (User1) and use
them to send authenticated requests to Amazon S3.
When requesting temporary credentials for others, for added security, you use the security credentials
of an IAM user who has permissions to request temporary security credentials. You can also limit the
access permissions of this IAM user to ensure that the IAM user grants only the minimum application-
specific permissions when requesting temporary security credentials. This sample only lists objects in a
specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
The policy allows the IAM user to request temporary security credentials and access permission only to
list your AWS resources. For information about how to create an IAM user, see Creating Your First IAM
User and Administrators Group in the IAM User Guide.
You can now use the IAM user security credentials to test the following example. The example sends an
authenticated request to Amazon S3 using temporary security credentials. The example specifies the
following policy when requesting temporary security credentials for the federated user (User1), which
restricts access to listing objects in a specific bucket (YourBucketName). You must update the policy and
provide your own existing bucket name.
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
You must update the following sample and provide the bucket name that you specified in the preceding
federated user access policy.
import java.io.IOException;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.S3Actions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetFederationTokenRequest;
import com.amazonaws.services.securitytoken.model.GetFederationTokenResult;
import com.amazonaws.services.s3.model.ObjectListing;

public class S3Sample {
    private static String bucketName = "*** Specify bucket name ***";

    public static void main(String[] args) throws IOException {
        AWSSecurityTokenServiceClient stsClient =
                new AWSSecurityTokenServiceClient(new ProfileCredentialsProvider());

        GetFederationTokenRequest getFederationTokenRequest =
                new GetFederationTokenRequest();
        getFederationTokenRequest.setDurationSeconds(7200);
        getFederationTokenRequest.setName("User1");

        // Define the policy and add it to the request.
        Policy policy = new Policy();
        policy.withStatements(new Statement(Effect.Allow)
                .withActions(S3Actions.ListObjects)
                .withResources(new Resource("arn:aws:s3:::ExampleBucket")));
        getFederationTokenRequest.setPolicy(policy.toJson());

        // Get the temporary security credentials.
        GetFederationTokenResult federationTokenResult =
                stsClient.getFederationToken(getFederationTokenRequest);
        Credentials sessionCredentials =
                federationTokenResult.getCredentials();

        // Package the session credentials as a BasicSessionCredentials
        // object for an S3 client object to use.
        BasicSessionCredentials basicSessionCredentials =
                new BasicSessionCredentials(sessionCredentials.getAccessKeyId(),
                        sessionCredentials.getSecretAccessKey(),
                        sessionCredentials.getSessionToken());
        AmazonS3Client s3 = new AmazonS3Client(basicSessionCredentials);

        // Test. For example, send a ListBucket request using the temporary
        // security credentials.
        ObjectListing objects = s3.listObjects(bucketName);
        System.out.println("No. of Objects = " +
                objects.getObjectSummaries().size());
    }
}
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using Federated User Temporary
Credentials AWS SDK for .NET
    You can provide temporary security credentials for your federated users and applications (see Making
    Requests (p 11)) so they can send authenticated requests to access your AWS resources When
    requesting these temporary credentials you must provide a user name and an IAM policy describing
    the resource permissions you want to grant By default the session duration is one hour You can
    explicitly set a different duration value when requesting the temporary security credentials for federated
    users and applications
    Note
    To request temporary security credentials for federated users and applications for added
    security you might want to use a dedicated IAM user with only the necessary access
    permissions The temporary user you create can never get more permissions than the IAM
    user who requested the temporary security credentials For more information go to AWS
    Identity and Access Management FAQs
    Making Requests Using Federated User Temporary Credentials
    1 Create an instance of the AWS Security Token Service client
AmazonSecurityTokenServiceClient class. For information about providing credentials,
see Using the AWS SDK for .NET (p 565).
    2 Start a session by calling the GetFederationToken method of the STS client
    You will need to provide session information including the user name and an IAM
    policy that you want to attach to the temporary credentials You can provide an optional
    session duration
    This method returns your temporary security credentials
    3 Package the temporary security credentials in an instance of the
    SessionAWSCredentials object You use this object to provide the temporary
    security credentials to your Amazon S3 client
    4 Create an instance of the AmazonS3Client class by passing the temporary security
    credentials
    You send requests to Amazon S3 using this client If you send requests using expired
    credentials Amazon S3 returns an error
    The following C# code sample demonstrates the preceding tasks
// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security credentials.
AmazonSecurityTokenServiceConfig config = new
                     AmazonSecurityTokenServiceConfig();
AmazonSecurityTokenServiceClient stsClient =
    new AmazonSecurityTokenServiceClient(config);

GetFederationTokenRequest federationTokenRequest =
                                 new GetFederationTokenRequest();
federationTokenRequest.Name = "User1";
federationTokenRequest.Policy = "*** Specify policy ***";
federationTokenRequest.DurationSeconds = 7200;

GetFederationTokenResponse federationTokenResponse =
                stsClient.GetFederationToken(federationTokenRequest);
GetFederationTokenResult federationTokenResult =
                federationTokenResponse.GetFederationTokenResult;
Credentials credentials = federationTokenResult.Credentials;

SessionAWSCredentials sessionCredentials =
    new SessionAWSCredentials(credentials.AccessKeyId,
                              credentials.SecretAccessKey,
                              credentials.SessionToken);

// The following will be part of your less trusted code. You provide temporary security
// credentials so it can send authenticated requests to Amazon S3.
// Create an Amazon S3 client by passing in the sessionCredentials object.
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);

// Test. For example, list object keys in a bucket.
ListObjectsRequest listObjectRequest = new ListObjectsRequest();
listObjectRequest.BucketName = bucketName;
ListObjectsResponse response = s3Client.ListObjects(listObjectRequest);
    Example
The following C# code example lists keys in the specified bucket. In the code example, you first obtain
temporary security credentials for a two-hour session for your federated user (User1) and use them to
send authenticated requests to Amazon S3.
    When requesting temporary credentials for others for added security you use the security credentials
    of an IAM user who has permissions to request temporary security credentials You can also limit
    the access permissions of this IAM user to ensure that the IAM user grants only the minimum
    applicationspecific permissions to the federated user This sample only lists objects in a specific
    bucket Therefore first create an IAM user with the following policy attached
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
    The policy allows the IAM user to request temporary security credentials and access permission only
    to list your AWS resources For more information about how to create an IAM user see Creating Your
    First IAM User and Administrators Group in the IAM User Guide
    You can now use the IAM user security credentials to test the following example The example sends
    authenticated request to Amazon S3 using temporary security credentials The example specifies the
    following policy when requesting temporary security credentials for the federated user (User1) which
    restricts access to list objects in a specific bucket (YourBucketName) You must update the policy and
    provide your own existing bucket name
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
You must update the following sample and provide the bucket name that you specified in the preceding
federated user access policy. For instructions on how to create and test a working example, see
Running the Amazon S3 .NET Code Examples (p 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.Runtime;
using Amazon.S3.Model;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class TempFederatedCredentials
    {
        static string bucketName = "*** Provide bucket name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            NameValueCollection appConfig = ConfigurationManager.AppSettings;
            string accessKeyID = appConfig["AWSAccessKey"];
            string secretAccessKeyID = appConfig["AWSSecretKey"];

            try
            {
                Console.WriteLine("Listing objects stored in a bucket");
                SessionAWSCredentials tempCredentials =
                    GetTemporaryFederatedCredentials(accessKeyID,
                                                     secretAccessKeyID);

                // Create a client by providing temporary security credentials.
                using (client = new AmazonS3Client(tempCredentials,
                                        Amazon.RegionEndpoint.USEast1))
                {
                    ListObjectsRequest listObjectRequest = new
                                                  ListObjectsRequest();
                    listObjectRequest.BucketName = bucketName;

                    ListObjectsResponse response =
                                  client.ListObjects(listObjectRequest);
                    List<S3Object> objects = response.S3Objects;
                    Console.WriteLine("Object count = {0}", objects.Count);

                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message,
                                  s3Exception.InnerException);
            }
            catch (AmazonSecurityTokenServiceException stsException)
            {
                Console.WriteLine(stsException.Message,
                                  stsException.InnerException);
            }
        }

        private static SessionAWSCredentials GetTemporaryFederatedCredentials(
            string accessKeyId, string secretAccessKeyId)
        {
            AmazonSecurityTokenServiceConfig config = new
                                     AmazonSecurityTokenServiceConfig();
            AmazonSecurityTokenServiceClient stsClient =
                new AmazonSecurityTokenServiceClient(
                                     accessKeyId, secretAccessKeyId, config);

            GetFederationTokenRequest federationTokenRequest =
                                     new GetFederationTokenRequest();
            federationTokenRequest.DurationSeconds = 7200;
            federationTokenRequest.Name = "User1";
            federationTokenRequest.Policy = @"{
               ""Statement"":
               [
                 {
                   ""Sid"":""Stmt1311212314284"",
                   ""Action"":[""s3:ListBucket""],
                   ""Effect"":""Allow"",
                   ""Resource"":""arn:aws:s3:::YourBucketName""
                  }
               ]
             }
            ";

            GetFederationTokenResponse federationTokenResponse =
                        stsClient.GetFederationToken(federationTokenRequest);
            Credentials credentials = federationTokenResponse.Credentials;

            SessionAWSCredentials sessionCredentials =
                new SessionAWSCredentials(credentials.AccessKeyId,
                                          credentials.SecretAccessKey,
                                          credentials.SessionToken);
            return sessionCredentials;
        }
    }
}
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Making Requests Using Federated User Temporary
    Credentials AWS SDK for PHP
    This topic guides you through using classes from the AWS SDK for PHP to request temporary security
    credentials for federated users and applications and use them to access Amazon S3
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
    You can provide temporary security credentials to your federated users and applications (see Making
    Requests (p 11)) so they can send authenticated requests to access your AWS resources When
    requesting these temporary credentials you must provide a user name and an IAM policy describing
    the resource permissions you want to grant These credentials expire when the session duration
    expires By default the session duration is one hour You can explicitly set a different duration value
    when requesting the temporary security credentials for federated users and applications For more
    information about temporary security credentials see Temporary Security Credentials in the IAM User
    Guide
    To request temporary security credentials for federated users and applications for added security
    you might want to use a dedicated IAM user with only the necessary access permissions The
    temporary user you create can never get more permissions than the IAM user who requested the
    temporary security credentials For information about identity federation go to AWS Identity and
    Access Management FAQs
    Making Requests Using Federated User Temporary Credentials
    1 Create an instance of an AWS Security Token Service (AWS STS) client by using the
    Aws\Sts\StsClient class factory() method
2 Execute the Aws\Sts\StsClient::getFederationToken() method by providing the name
of the federated user in the array parameter's required Name key. You can also add
the optional array parameter's Policy and DurationSeconds keys.
    The method returns temporary security credentials that you can provide to your
    federated users
    3 Any federated user who has the temporary security credentials can send requests to
    Amazon S3 by creating an instance of an Amazon S3 client by using Aws\S3\S3Client
    class factory method with the temporary security credentials
    Any methods in the S3Client class that you call use the temporary security
    credentials to send authenticated requests to Amazon S3
    The following PHP code sample demonstrates obtaining temporary security credentials for a federated
    user and using the credentials to access Amazon S3
use Aws\Sts\StsClient;
use Aws\S3\S3Client;

// In real applications, the following code is part of your trusted code. It has
// your security credentials that you use to obtain temporary security
// credentials.
$sts = StsClient::factory();

// Fetch the federated credentials.
$result = $sts->getFederationToken(array(
    'Name'            => 'User1',
    'DurationSeconds' => 3600,
    'Policy'          => json_encode(array(
        'Statement' => array(
            array(
                'Sid'      => 'randomstatementid' . time(),
                'Action'   => array('s3:ListBucket'),
                'Effect'   => 'Allow',
                'Resource' => 'arn:aws:s3:::YourBucketName'
            )
        )
    ))
));

// The following will be part of your less trusted code. You provide temporary
// security credentials so it can send authenticated requests to Amazon S3.
$credentials = $result->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

$result = $s3->listObjects();
    Example of a Federated User Making an Amazon S3 Request Using Temporary Security
    Credentials
    The following PHP code example lists keys in the specified bucket In the code example you first
    obtain temporary security credentials for an hour session for your federated user (User1) and use them
    to send authenticated requests to Amazon S3 For information about running the PHP examples in this
    guide go to Running PHP Examples (p 567)
    When requesting temporary credentials for others for added security you use the security credentials
    of an IAM user who has permissions to request temporary security credentials You can also limit the
    access permissions of this IAM user to ensure that the IAM user grants only the minimum application
    specific permissions to the federated user This example only lists objects in a specific bucket
    Therefore first create an IAM user with the following policy attached
{
  "Statement":[{
      "Action":["s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
    The policy allows the IAM user to request temporary security credentials and access permission only
    to list your AWS resources For more information about how to create an IAM user see Creating Your
    First IAM User and Administrators Group in the IAM User Guide
    You can now use the IAM user security credentials to test the following example The example sends
    an authenticated request to Amazon S3 using temporary security credentials The example specifies
    the following policy when requesting temporary security credentials for the federated user (User1)
    which restricts access to list objects in a specific bucket You must update the policy with your own
    existing bucket name
{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}
    In the following example you must replace YourBucketName with your own existing bucket name when
    specifying the policy resource
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

$bucket = '*** Your Bucket Name ***';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

// Instantiate the client.
$sts = StsClient::factory();

$result = $sts->getFederationToken(array(
    'Name'            => 'User1',
    'DurationSeconds' => 3600,
    'Policy'          => json_encode(array(
        'Statement' => array(
            array(
                'Sid'      => 'randomstatementid' . time(),
                'Action'   => array('s3:ListBucket'),
                'Effect'   => 'Allow',
                'Resource' => 'arn:aws:s3:::YourBucketName'
            )
        )
    ))
));

$credentials = $result->get('Credentials');
$s3 = S3Client::factory(array(
    'key'    => $credentials['AccessKeyId'],
    'secret' => $credentials['SecretAccessKey'],
    'token'  => $credentials['SessionToken']
));

try {
    $objects = $s3->getIterator('ListObjects', array(
        'Bucket' => $bucket
    ));

    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
    Related Resources
    • AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient Class
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\Sts\StsClient::getSessionToken() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Making Requests Using Federated User Temporary
    Credentials AWS SDK for Ruby
You can provide temporary security credentials for your federated users and applications (see Making Requests (p 11)) so that they can send authenticated requests to access your AWS resources. When requesting these temporary credentials from the IAM service, you must provide a user name and an IAM policy describing the resource permissions you want to grant. By default, the session duration is one hour. However, if you are requesting temporary credentials using IAM user credentials, you can explicitly set a different duration value when requesting the temporary security credentials for federated users and applications.
Note
To request temporary security credentials for federated users and applications, for added security, you might want to use a dedicated IAM user with only the necessary access permissions. The temporary user you create can never get more permissions than the IAM user who requested the temporary security credentials. For more information, go to AWS Identity and Access Management FAQs.
Making Requests Using Federated User Temporary Security Credentials
1. Create an instance of the AWS Security Token Service client, AWS::STS::Session.
2. Start a session by calling the new_federated_session method of the STS client you created in the preceding step.
   You will need to provide session information, including the user name and an IAM policy that you want to attach to the temporary credentials.
   This method returns your temporary security credentials.
3. Create an instance of the AWS::S3 class by passing the temporary security credentials.
   You send requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3 returns an error.
    The following Ruby code sample demonstrates the preceding tasks
# Start a session with restricted permissions.
sts = AWS::STS.new()
policy = AWS::STS::Policy.new
policy.allow(
  :actions => ["s3:ListBucket"],
  :resources => "arn:aws:s3:::#{bucket_name}")

session = sts.new_federated_session(
  'User1',
  :policy => policy,
  :duration => 2*60*60)

puts "Policy: #{policy.to_json}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)
Example
The following Ruby code example lists keys in the specified bucket. In the code example, you first obtain temporary security credentials for a two-hour session for your federated user (User1) and use them to send authenticated requests to Amazon S3.
When requesting temporary credentials for others, for added security, you use the security credentials of an IAM user who has permissions to request temporary security credentials. You can also limit the access permissions of this IAM user to ensure that the IAM user grants only the minimum application-specific permissions when requesting temporary security credentials. This sample only lists objects in a specific bucket. Therefore, first create an IAM user with the following policy attached.
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "sts:GetFederationToken*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

The policy allows the IAM user to request temporary security credentials and grants access permission only to list your AWS resources. For more information about how to create an IAM user, see Creating Your First IAM User and Administrators Group in the IAM User Guide.
You can now use the IAM user security credentials to test the following example. The example sends an authenticated request to Amazon S3 using temporary security credentials. The example specifies the following policy when requesting temporary security credentials for the federated user (User1), which restricts access to listing objects in a specific bucket (YourBucketName). To use this example in your code, update the policy and provide your own bucket name.
{
  "Statement": [
    {
      "Sid": "1",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::YourBucketName"
    }
  ]
}

To use this example in your code, provide your access key ID and secret key and the bucket name that you specified in the preceding federated user access policy.

require 'rubygems'
require 'aws-sdk'

# In real applications, the following code is part of your trusted code. It has
# your security credentials that you use to obtain temporary security credentials.

bucket_name = '*** Provide bucket name ***'

# Start a session with restricted permissions.
sts = AWS::STS.new()
policy = AWS::STS::Policy.new
policy.allow(
  :actions => ["s3:ListBucket"],
  :resources => "arn:aws:s3:::#{bucket_name}")

session = sts.new_federated_session(
  'User1',
  :policy => policy,
  :duration => 2*60*60)

puts "Policy: #{policy.to_json}"

# Get an instance of the S3 interface using the session credentials.
s3 = AWS::S3.new(session.credentials)

# Get a list of all object keys in a bucket.
bucket = s3.buckets[bucket_name].objects.collect(&:key)

puts "No. of Objects: #{bucket.count.to_s}"
puts bucket
    Making Requests Using the REST API
This section contains information on how to make requests to Amazon S3 endpoints by using the REST API. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference.
Topics
• Making Requests to Dual-Stack Endpoints by Using the REST API (p 50)
• Virtual Hosting of Buckets (p 50)
• Request Redirection and the REST API (p 55)
When making requests by using the REST API, you can use virtual hosted–style or path-style URIs for the Amazon S3 endpoints. For more information, see Working with Amazon S3 Buckets (p 58).
Example Virtual Hosted–Style Request
Following is an example of a virtual hosted–style request to delete the puppy.jpg file from the bucket named examplebucket.

DELETE /puppy.jpg HTTP/1.1
Host: examplebucket.s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string
Example Path-Style Request
Following is an example of a path-style version of the same request.

DELETE /examplebucket/puppy.jpg HTTP/1.1
Host: s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg.
You will receive an HTTP response code 307 Temporary Redirect error and a message indicating the correct URI for your resource if you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that uses either of the following:
• http://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you use http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (N. California) region.
Making Requests to Dual-Stack Endpoints by Using the REST API
When using the REST API, you can directly access a dual-stack endpoint by using a virtual hosted–style or a path-style endpoint name (URI). All Amazon S3 dual-stack endpoint names include the region in the name. Unlike the standard IPv4-only endpoints, both virtual hosted–style and path-style endpoints use region-specific endpoint names.
Example Virtual Hosted–Style Dual-Stack Endpoint Request
You can use a virtual hosted–style endpoint in your REST request as shown in the following example, which retrieves the puppy.jpg object from the bucket named examplebucket.

GET /puppy.jpg HTTP/1.1
Host: examplebucket.s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string
Example Path-Style Dual-Stack Endpoint Request
Or you can use a path-style endpoint in your request, as shown in the following example.

GET /examplebucket/puppy.jpg HTTP/1.1
Host: s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string
For more information about dual-stack endpoints, see Using Amazon S3 Dual-Stack Endpoints (p 16).
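If you use an AWS SDK instead of constructing REST requests by hand, you can point the client at the dual-stack endpoint directly. The following minimal sketch uses the AWS SDK for Python (Boto3), which is not otherwise covered in this section; the bucket name, key, and region are the ones used in the examples above.

import boto3

# Point the client at the region's dual-stack endpoint so requests can use IPv6 or IPv4.
s3 = boto3.client(
    's3',
    region_name='us-west-2',
    endpoint_url='https://s3.dualstack.us-west-2.amazonaws.com')

# Same object retrieval as the REST examples above.
s3.get_object(Bucket='examplebucket', Key='puppy.jpg')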
    Virtual Hosting of Buckets
    Topics
    • HTTP Host Header Bucket Specification (p 51)
    • Examples (p 51)
    • Customizing Amazon S3 URLs with CNAMEs (p 53)
    • Limitations (p 54)
    • Backward Compatibility (p 55)
In general, virtual hosting is the practice of serving multiple web sites from a single web server. One way to differentiate sites is by using the apparent host name of the request instead of just the path name part of the URI. An ordinary Amazon S3 REST request specifies a bucket by using the first slash-delimited component of the Request-URI path. Alternatively, you can use Amazon S3 virtual hosting to address a bucket in a REST API call by using the HTTP Host header. In practice, Amazon S3 interprets Host as meaning that most buckets are automatically accessible (for limited types of requests) at http://bucketname.s3.amazonaws.com. Furthermore, by naming your bucket after your registered domain name and by making that name a DNS alias for Amazon S3, you can completely customize the URL of your Amazon S3 resources, for example, http://my.bucketname.com/.
Besides the attractiveness of customized URLs, a second benefit of virtual hosting is the ability to publish to the root directory of your bucket's virtual server. This ability can be important because
many existing applications search for files in this standard location. For example, favicon.ico, robots.txt, and crossdomain.xml are all expected to be found at the root.
Important
Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg.
You will receive an HTTP response code 307 Temporary Redirect error and a message indicating the correct URI for your resource if you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that uses either of the following:
• http://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you use http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (N. California) region.
Note
Amazon S3 routes any virtual hosted–style requests to the US East (N. Virginia) region by default if you use the US East (N. Virginia) endpoint (s3.amazonaws.com) instead of the region-specific endpoint (for example, s3-eu-west-1.amazonaws.com). When you create a bucket in any region, Amazon S3 updates DNS to reroute the request to the correct location, which might take time. In the meantime, the default rule applies and your virtual hosted–style request goes to the US East (N. Virginia) region, and Amazon S3 redirects it with an HTTP 307 redirect to the correct region. For more information, see Request Redirection and the REST API (p 513).
When using virtual hosted–style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic.
HTTP Host Header Bucket Specification
As long as your GET request does not use the SSL endpoint, you can specify the bucket for the request by using the HTTP Host header. The Host header in a REST request is interpreted as follows:
• If the Host header is omitted or its value is 's3.amazonaws.com', the bucket for the request will be the first slash-delimited component of the Request-URI, and the key for the request will be the rest of the Request-URI. This is the ordinary method, as illustrated by the first and second examples in this section. Omitting the Host header is valid only for HTTP 1.0 requests.
• Otherwise, if the value of the Host header ends in '.s3.amazonaws.com', the bucket name is the leading component of the Host header's value up to '.s3.amazonaws.com'. The key for the request is the Request-URI. This interpretation exposes buckets as subdomains of s3.amazonaws.com, as illustrated by the third and fourth examples in this section.
• Otherwise, the bucket for the request is the lowercase value of the Host header, and the key for the request is the Request-URI. This interpretation is useful when you have registered the same DNS name as your bucket name and have configured that name to be a CNAME alias for Amazon S3. The procedure for registering domain names and configuring DNS is beyond the scope of this guide, but the result is illustrated by the final example in this section.
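The three rules above amount to a small dispatch on the Host header. The following sketch expresses them in Python; it is an illustration of the interpretation described here, not code from the Amazon S3 service or an AWS SDK, and the helper name is made up for this example.

def resolve_bucket_and_key(host, request_uri):
    """Apply the Host header rules to find the (bucket, key) a request refers to."""
    suffix = '.s3.amazonaws.com'
    path = request_uri.lstrip('/')
    if not host or host == 's3.amazonaws.com':
        # Ordinary path-style request: the first path component is the bucket.
        bucket, _, key = path.partition('/')
        return bucket, key
    if host.endswith(suffix):
        # Virtual hosted-style request: the bucket is a subdomain of s3.amazonaws.com.
        return host[:-len(suffix)], path
    # CNAME-style request: the (lowercased) host name is the bucket name.
    return host.lower(), path

# Examples matching the requests shown below:
print(resolve_bucket_and_key('s3.amazonaws.com', '/johnsmith.net/homepage.html'))
print(resolve_bucket_and_key('johnsmith.net.s3.amazonaws.com', '/homepage.html'))
print(resolve_bucket_and_key('www.johnsmith.net', '/homepage.html'))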
    Examples
    This section provides example URLs and requests
Example Path-Style Method
This example uses johnsmith.net as the bucket name and homepage.html as the key name.
The URL is as follows:

http://s3.amazonaws.com/johnsmith.net/homepage.html

The request is as follows:

GET /johnsmith.net/homepage.html HTTP/1.1
Host: s3.amazonaws.com

The request with HTTP 1.0 and omitting the Host header is as follows:

GET /johnsmith.net/homepage.html HTTP/1.0

For information about DNS-compatible names, see Limitations (p 54). For more information about keys, see Keys (p 4).
Example Virtual Hosted–Style Method
This example uses johnsmith.net as the bucket name and homepage.html as the key name.
The URL is as follows:

http://johnsmith.net.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.net.s3.amazonaws.com

The virtual hosted–style method requires the bucket name to be DNS-compliant.
Example Virtual Hosted–Style Method for a Bucket in a Region Other Than the US East (N. Virginia) Region
This example uses johnsmith.eu as the name for a bucket in the EU (Ireland) region and homepage.html as the key name.
The URL is as follows:

http://johnsmith.eu.s3-eu-west-1.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3-eu-west-1.amazonaws.com

Note that instead of using the region-specific endpoint, you can also use the US East (N. Virginia) region endpoint no matter what region the bucket resides in.

http://johnsmith.eu.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3.amazonaws.com
Example CNAME Method
This example uses www.johnsmith.net as the bucket name and homepage.html as the key name. To use this method, you must configure your DNS name as a CNAME alias for bucketname.s3.amazonaws.com.
The URL is as follows:

http://www.johnsmith.net/homepage.html

The example is as follows:

GET /homepage.html HTTP/1.1
Host: www.johnsmith.net
Customizing Amazon S3 URLs with CNAMEs
Depending on your needs, you might not want s3.amazonaws.com to appear on your website or service. For example, if you host your website images on Amazon S3, you might prefer http://images.johnsmith.net/ instead of http://johnsmith-images.s3.amazonaws.com/.
The bucket name must be the same as the CNAME. So http://images.johnsmith.net/filename would be the same as http://images.johnsmith.net.s3.amazonaws.com/filename if a CNAME were created to map images.johnsmith.net to images.johnsmith.net.s3.amazonaws.com.
Any bucket with a DNS-compatible name can be referenced as follows: http://[BucketName].s3.amazonaws.com/[Filename], for example, http://images.johnsmith.net.s3.amazonaws.com/mydog.jpg. By using CNAME, you can map images.johnsmith.net to an Amazon S3 host name so that the previous URL could become http://images.johnsmith.net/mydog.jpg.
The CNAME DNS record should alias your domain name to the appropriate virtual hosted–style host name. For example, if your bucket name and domain name are images.johnsmith.net, the CNAME record should alias to images.johnsmith.net.s3.amazonaws.com.

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com.

Setting the alias target to s3.amazonaws.com also works, but it may result in extra HTTP redirects.
Amazon S3 uses the host name to determine the bucket name. For example, suppose that you have configured www.example.com as a CNAME for www.example.com.s3.amazonaws.com. When you access http://www.example.com, Amazon S3 receives a request similar to the following:

GET / HTTP/1.1
Host: www.example.com
Date: date
Authorization: signatureValue

Because Amazon S3 sees only the original host name www.example.com and is unaware of the CNAME mapping used to resolve the request, the CNAME and the bucket name must be the same.
Any Amazon S3 endpoint can be used in a CNAME. For example, s3-ap-southeast-1.amazonaws.com can be used in CNAMEs. For more information about endpoints, see Request Endpoints (p 13).
To associate a host name with an Amazon S3 bucket using CNAMEs
1. Select a host name that belongs to a domain you control. This example uses the images subdomain of the johnsmith.net domain.
2. Create a bucket that matches the host name. In this example, the host and bucket names are images.johnsmith.net.
Note
The bucket name must exactly match the host name.
3. Create a CNAME record that defines the host name as an alias for the Amazon S3 bucket. For example:

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com

Important
For request routing reasons, the CNAME record must be defined exactly as shown in the preceding example. Otherwise, it might appear to operate correctly, but it will eventually result in unpredictable behavior.
Note
The procedure for configuring DNS depends on your DNS server or DNS provider. For specific information, see your server documentation or contact your provider.
Limitations
Specifying the bucket for the request by using the HTTP Host header is supported for non-SSL requests and when using the REST API. You cannot specify the bucket in SOAP by using a different endpoint.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
Backward Compatibility
Early versions of Amazon S3 incorrectly ignored the HTTP Host header. Applications that depend on this undocumented behavior must be updated to set the Host header correctly. Because Amazon S3 determines the bucket name from Host when it is present, the most likely symptom of this problem is to receive an unexpected NoSuchBucket error result code.
    Request Redirection and the REST API
Topics
• Redirects and HTTP User-Agents (p 55)
• Redirects and 100-Continue (p 55)
• Redirect Example (p 56)
This section describes how to handle HTTP redirects by using the Amazon S3 REST API. For general information about Amazon S3 redirects, see Request Redirection and the REST API (p 513) in the Amazon Simple Storage Service API Reference.
Redirects and HTTP User-Agents
Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
• Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect), including HTTP standards such as Authorization and Date.
• Verify non-GET redirects, such as PUT and DELETE, work correctly.
• Verify large PUT requests follow redirects correctly.
• Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain and the effect of the redirected request will be the same as that of the original request.
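If your HTTP library does not meet these requirements, you can follow the temporary redirects yourself at the application layer. The following Python sketch shows the general shape of that loop using only the standard library; it is an illustration, not AWS-provided code, and a real request would also need the Date and Authorization headers described elsewhere in this guide.

import http.client
from urllib.parse import urlparse

def request_following_redirects(method, host, path, body, headers, max_redirects=3):
    """Send a request and re-issue it, with the same headers and body, when Amazon S3
    answers with a 307 Temporary Redirect pointing at a temporary endpoint."""
    for _ in range(max_redirects + 1):
        conn = http.client.HTTPConnection(host)
        conn.request(method, path, body=body, headers=headers)
        response = conn.getresponse()
        if response.status != 307:
            return response                      # final answer (success or error)
        location = urlparse(response.getheader("Location"))
        response.read()
        conn.close()
        host = location.netloc                   # temporary endpoint from the redirect
        path = location.path + ("?" + location.query if location.query else "")
    raise RuntimeError("Too many redirects")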
Redirects and 100-Continue
To simplify redirect handling, improve efficiencies, and avoid the costs associated with sending a redirected request body twice, configure your application to use 100-continue for PUT operations. When your application uses 100-continue, it does not send the request body until it receives an acknowledgement. If the message is rejected based on the headers, the body of the message is not sent. For more information about 100-continue, go to RFC 2616, Section 8.2.3.
Note
According to RFC 2616, when using Expect: 100-continue with an unknown HTTP server, you should not wait an indefinite period before sending the request body. This is because some HTTP servers do not recognize 100-continue. However, Amazon S3 does recognize if your request contains an Expect: 100-continue and will respond with a provisional 100-continue status or a final status code. Additionally, no redirect error will occur after receiving the provisional 100-continue go-ahead. This will help you avoid receiving a redirect response while you are still writing the request body.
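At the HTTP layer, using 100-continue means sending the headers first, waiting for the server's interim response, and transmitting the body only after a 100 Continue arrives. The following Python sketch makes that sequence explicit with a raw socket; it is only an illustration of the handshake (no TLS and no request signing), and the helper name is made up for this example.

import socket

def put_with_expect_continue(host, path, body):
    """Send PUT headers with Expect: 100-continue and hold back the body (bytes) until
    the server answers with a provisional 100 Continue (or a final status such as 307)."""
    with socket.create_connection((host, 80)) as sock:
        headers = (
            f"PUT {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"Expect: 100-continue\r\n"
            "\r\n"
        )
        sock.sendall(headers.encode("ascii"))
        interim = sock.recv(4096).decode("ascii", errors="replace")
        if interim.startswith("HTTP/1.1 100"):
            sock.sendall(body)            # headers accepted; now send the payload
            return sock.recv(4096).decode("ascii", errors="replace")
        return interim                    # rejected or redirected before any body was sent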
Redirect Example
This section provides an example of client-server interaction using HTTP redirects and 100-continue.
Following is a sample PUT to the quotes.s3.amazonaws.com bucket.

PUT /nelson.txt HTTP/1.1
Host: quotes.s3.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue
Amazon S3 returns the following:

HTTP/1.1 307 Temporary Redirect
Location: http://quotes.s3-4c25d83b.amazonaws.com/nelson.txt?rk=8d47490b
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Mon, 15 Oct 2007 22:18:46 GMT
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>TemporaryRedirect</Code>
  <Message>Please re-send this request to the specified temporary endpoint.
  Continue to use the original request endpoint for future requests.</Message>
  <Endpoint>quotes.s3-4c25d83b.amazonaws.com</Endpoint>
  <Bucket>quotes</Bucket>
</Error>
The client follows the redirect response and issues a new request to the quotes.s3-4c25d83b.amazonaws.com temporary endpoint.

PUT /nelson.txt?rk=8d47490b HTTP/1.1
Host: quotes.s3-4c25d83b.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue
Amazon S3 returns a 100-continue indicating the client should proceed with sending the request body.

HTTP/1.1 100 Continue

The client sends the request body.
ha ha\n

Amazon S3 returns the final response.

HTTP/1.1 200 OK
Date: Mon, 15 Oct 2007 22:18:48 GMT
ETag: "a2c8d6b872054293afd41061e93bc289"
Content-Length: 0
Server: AmazonS3
    Working with Amazon S3 Buckets
Amazon S3 is cloud storage for the Internet. To upload your data (photos, videos, documents, etc.), you first create a bucket in one of the AWS Regions. You can then upload any number of objects to the bucket.
In terms of implementation, buckets and objects are resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console internally uses the Amazon S3 APIs to send requests to Amazon S3.
In this section, we explain working with buckets. For information about working with objects, see Working with Amazon S3 Objects (p 98).
Amazon S3 bucket names are globally unique, regardless of the AWS Region in which you create the bucket. You specify the name at the time you create the bucket. For bucket naming guidelines, see Bucket Restrictions and Limitations (p 62).
Amazon S3 creates buckets in the region you specify. You can choose any AWS Region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or EU (Frankfurt) regions. For a list of Amazon S3 regions, go to Regions and Endpoints in the AWS General Reference.
Note
Objects belonging to a bucket that you create in a specific AWS Region never leave that region unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it.
    Topics
    • Creating a Bucket (p 59)
    • Accessing a Bucket (p 60)
    • Bucket Configuration Options (p 61)
    • Bucket Restrictions and Limitations (p 62)
    • Examples of Creating a Bucket (p 64)
    • Deleting or Emptying a Bucket (p 67)
    • Managing Bucket Website Configuration (p 73)
    • Amazon S3 Transfer Acceleration (p 81)
    • Requester Pays Buckets (p 92)
    • Buckets and Access Control (p 96)
    • Billing and Reporting of Buckets (p 96)
Creating a Bucket
Amazon S3 provides APIs for you to create and manage buckets. By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your bucket limit by submitting a service limit increase. To learn more about submitting a bucket limit increase, go to AWS Service Limits in the AWS General Reference.
When you create a bucket, you provide a name and the AWS Region where you want the bucket created. For information about naming buckets, see Rules for Bucket Naming (p 63).
Within each bucket, you can store any number of objects. You can create a bucket using any of the following methods:
    • Create the bucket using the console
    • Create the bucket programmatically using the AWS SDKs
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests. For more information, go to PUT Bucket in the Amazon Simple Storage Service API Reference.
When using the AWS SDKs, you first create a client and then send a request to create a bucket using the client. You can specify an AWS Region when you create the client; US East (N. Virginia) is the default region. You can also specify a region in your create bucket request (a short SDK example follows this list). Note the following:
• If you create a client by specifying the US East (N. Virginia) Region, it uses the following endpoint to communicate with Amazon S3:
s3.amazonaws.com
You can use this client to create a bucket in any AWS Region. In your create bucket request:
• If you don't specify a region, Amazon S3 creates the bucket in the US East (N. Virginia) Region.
• If you specify an AWS Region, Amazon S3 creates the bucket in the specified region.
• If you create a client by specifying any other AWS Region, each of these regions maps to the region-specific endpoint:
s3-<region>.amazonaws.com
For example, if you create a client by specifying the eu-west-1 region, it maps to the following region-specific endpoint:
s3-eu-west-1.amazonaws.com
In this case, you can use the client to create a bucket only in the eu-west-1 region. Amazon S3 returns an error if you specify any other region in your create bucket request.
• If you create a client to access a dual-stack endpoint, you must specify an AWS Region. For more information, see Dual-Stack Endpoints (p 16).
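The examples later in this chapter show this pattern in Java, .NET, and Ruby. As an additional illustration only (this guide's examples do not otherwise use it), the same two requests look like this with the AWS SDK for Python (Boto3); the bucket name is a placeholder.

import boto3

# Create a client for a specific region; the client then talks to that
# region-specific endpoint.
s3 = boto3.client('s3', region_name='eu-west-1')

# Create the bucket in that same region, then read back its location subresource.
s3.create_bucket(
    Bucket='examplebucket',
    CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'})
print(s3.get_bucket_location(Bucket='examplebucket'))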
    For a list of available AWS Regions go to Regions and Endpoints in the AWS General Reference
    For examples see Examples of Creating a Bucket (p 64)
About Permissions
You can use your AWS account root credentials to create a bucket and perform any other Amazon S3 operation. However, AWS recommends not using the root credentials of your AWS account to make requests such as create a bucket. Instead, create an IAM user and grant that user full access (users by default have no permissions). We refer to these users as administrator users. You can use the administrator user credentials, instead of the root credentials of your account, to interact with AWS and perform tasks such as create a bucket, create users, and grant them permissions.
For more information, go to Root Account Credentials vs. IAM User Credentials in the AWS General Reference and IAM Best Practices in the IAM User Guide.
The AWS account that creates a resource owns that resource. For example, if you create an IAM user in your AWS account and grant the user permission to create a bucket, the user can create a bucket. But the user does not own the bucket; the AWS account to which the user belongs owns the bucket. The user will need additional permission from the resource owner to perform any other bucket operations. For more information about managing permissions for your Amazon S3 resources, see Managing Access Permissions to Your Amazon S3 Resources (p 266).
Accessing a Bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost all bucket operations without having to write any code.
If you access a bucket programmatically, note that Amazon S3 supports a RESTful architecture in which your buckets and objects are resources, each with a resource URI that uniquely identifies the resource.
Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket.
• In a virtual-hosted–style URL, the bucket name is part of the domain name in the URL. For example:
• http://bucket.s3.amazonaws.com
• http://bucket.s3-aws-region.amazonaws.com
In a virtual-hosted–style URL, you can use either of these endpoints. If you make a request to the http://bucket.s3.amazonaws.com endpoint, the DNS has sufficient information to route your request directly to the region where your bucket resides.
For more information, see Virtual Hosting of Buckets (p 50).
• In a path-style URL, the bucket name is not part of the domain (unless you use a region-specific endpoint). For example:
• US East (N. Virginia) region endpoint, http://s3.amazonaws.com/bucket
• Region-specific endpoint, http://s3-aws-region.amazonaws.com/bucket
In a path-style URL, the endpoint you use must match the region in which the bucket resides. For example, if your bucket is in the South America (São Paulo) region, you must use the http://s3-sa-east-1.amazonaws.com/bucket endpoint. If your bucket is in the US East (N. Virginia) region, you must use the http://s3.amazonaws.com/bucket endpoint. A short sketch that builds both URL styles follows this list.
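The following Python sketch simply assembles the two URL styles described above for a given bucket, key, and region-specific endpoint; it is an illustration of the addressing rules, not part of any AWS SDK, and the helper name is made up for this example.

def bucket_urls(bucket, key, region_endpoint='s3-sa-east-1.amazonaws.com'):
    """Return the virtual-hosted-style and path-style URLs for the same object."""
    virtual_hosted = f"http://{bucket}.{region_endpoint}/{key}"
    path_style = f"http://{region_endpoint}/{bucket}/{key}"
    return virtual_hosted, path_style

# For a bucket in the South America (Sao Paulo) region:
print(bucket_urls('examplebucket', 'puppy.jpg'))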
Important
Because buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you create buckets with DNS-compliant bucket names. For more information, see Bucket Restrictions and Limitations (p 62).
Accessing an S3 Bucket over IPv6
Amazon S3 has a set of dual-stack endpoints, which support requests to S3 buckets over both Internet Protocol version 6 (IPv6) and IPv4. For more information, see Making Requests over IPv6 (p 13).
Bucket Configuration Options
Amazon S3 supports various options for you to configure your bucket. For example, you can configure your bucket for website hosting, add configuration to manage the lifecycle of objects in the bucket, and configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to store and manage the bucket configuration information. That is, using the Amazon S3 API, you can create and manage these subresources. You can also use the console or the AWS SDKs.
Note
There are also object-level configurations. For example, you can configure object-level permissions by configuring an access control list (ACL) specific to that object.
These are referred to as subresources because they exist in the context of a specific bucket or object.
The following list describes the subresources that enable you to manage bucket-specific configurations.

location: When you create a bucket, you specify the AWS Region where you want Amazon S3 to create the bucket. Amazon S3 stores this information in the location subresource and provides an API for you to retrieve this information.

policy and ACL (access control list): All your resources (such as buckets and objects) are private by default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant and manage bucket-level permissions. Amazon S3 stores the permission information in the policy and acl subresources. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p 266).

cors (cross-origin resource sharing): You can configure your bucket to allow cross-origin requests. For more information, see Enabling Cross-Origin Resource Sharing.

website: You can configure your bucket for static website hosting. Amazon S3 stores this configuration by creating a website subresource. For more information, see Hosting a Static Website on Amazon S3.

logging: Logging enables you to track requests for access to your bucket. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill. For more information, see Server Access Logging (p 546).

event notification: You can enable your bucket to send you notifications of specified bucket events. For more information, see Configuring Amazon S3 Event Notifications (p 472).

versioning: Versioning helps you recover accidental overwrites and deletes. We recommend versioning as a best practice to recover objects from being deleted or overwritten by mistake. For more information, see Using Versioning (p 423).

lifecycle: You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle. For example, you can define a rule to archive objects one year after creation, or delete an object 10 years after creation. For more information, see Object Lifecycle Management.

cross-region replication: Cross-region replication is the automatic, asynchronous copying of objects across buckets in different AWS Regions. For more information, see Cross-Region Replication (p 492).

tagging: You can add cost allocation tags to your bucket to categorize and track your AWS costs. Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your tags. For more information, see Billing and Reporting of Buckets (p 96).

requestPayment: By default, the AWS account that creates the bucket (the bucket owner) pays for downloads from the bucket. Using this subresource, the bucket owner can specify that the person requesting the download will be charged for the download. Amazon S3 provides an API for you to manage this subresource. For more information, see Requester Pays Buckets (p 92).

transfer acceleration: Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. For more information, see Amazon S3 Transfer Acceleration (p 81).
Bucket Restrictions and Limitations
A bucket is owned by the AWS account that created it. By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your bucket limit by submitting a service limit increase. For information about how to increase your bucket limit, go to AWS Service Limits in the AWS General Reference.
Bucket ownership is not transferable; however, if a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused. So, if you want to use the same bucket name, don't delete the bucket.
There is no limit to the number of objects that can be stored in a bucket and no difference in performance whether you use many buckets or just a few. You can store all of your objects in a single bucket, or you can organize them across several buckets.
You cannot create a bucket within another bucket.
The high-availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because bucket operations work against a centralized, global resource space, it is not appropriate to create or delete buckets on the high-availability code path of your application. It is better to create or delete buckets in a separate initialization or setup routine that you run less often.
Note
If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket name is already taken.
Rules for Bucket Naming
We recommend that all bucket names comply with DNS naming conventions. These conventions are enforced in all regions except for the US East (N. Virginia) region.
Note
If you use the AWS Management Console, bucket names must be DNS-compliant in all regions.
DNS-compliant bucket names allow customers to benefit from new features and operational improvements, as well as providing support for virtual-host-style access to buckets. While the US East (N. Virginia) region currently allows non-DNS-compliant bucket naming, we are moving to the same DNS-compliant bucket naming convention for the US East (N. Virginia) region in the coming months. This will ensure a single, consistent naming approach for Amazon S3 buckets. The rules for DNS-compliant bucket names are:
    compliant bucket names are
    • Bucket names must be at least 3 and no more than 63 characters long
    • Bucket names must be a series of one or more labels Adjacent labels are separated by a single
    period () Bucket names can contain lowercase letters numbers and hyphens Each label must
    start and end with a lowercase letter or a number
    • Bucket names must not be formatted as an IP address (eg 19216854)
    • When using virtual hosted–style buckets with SSL the SSL wildcard certificate only matches buckets
    that do not contain periods To work around this use HTTP or write your own certificate verification
    logic We recommend that you do not use periods () in bucket names
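The following Python sketch encodes the naming rules above as a simple check. It is an illustration only; Amazon S3 itself validates bucket names at creation time, and the helper name is made up for this example.

import re

LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"   # each label starts/ends with a lowercase letter or digit
NAME_RE = re.compile(rf"^{LABEL}(?:\.{LABEL})*$")
IP_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

def is_dns_compliant_bucket_name(name):
    """Return True if the name satisfies the DNS-compliant bucket naming rules."""
    return (3 <= len(name) <= 63
            and NAME_RE.match(name) is not None
            and IP_RE.match(name) is None)

for candidate in ("myawsbucket", "my.aws.bucket", ".myawsbucket", "my..examplebucket", "192.168.5.4"):
    print(candidate, is_dns_compliant_bucket_name(candidate))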
The following examples are valid bucket names:
• myawsbucket
• my.aws.bucket
• myawsbucket.1
The following examples are invalid bucket names:
Invalid Bucket Name: Comment
.myawsbucket: Bucket name cannot start with a period (.).
myawsbucket.: Bucket name cannot end with a period (.).
my..examplebucket: There can be only one period between labels.
Challenges with Non-DNS-Compliant Bucket Names
The US East (N. Virginia) region currently allows more relaxed standards for bucket naming, which can result in a bucket name that is not DNS-compliant. For example, MyAWSBucket is a valid bucket
name, even though it contains uppercase letters. If you try to access this bucket by using a virtual hosted–style request (http://MyAWSBucket.s3.amazonaws.com/yourobject), the URL resolves to the bucket myawsbucket and not the bucket MyAWSBucket. In response, Amazon S3 will return a bucket not found error.
To avoid this problem, we recommend as a best practice that you always use DNS-compliant bucket names, regardless of the region in which you create the bucket. For more information about virtual hosted–style access to your buckets, see Virtual Hosting of Buckets (p 50).
The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must not contain periods (.). For more information about Transfer Acceleration, see Amazon S3 Transfer Acceleration (p 81).
The rules for bucket names in the US East (N. Virginia) region allow bucket names to be as long as 255 characters, and bucket names can contain any combination of uppercase letters, lowercase letters, numbers, periods (.), hyphens (-), and underscores (_).
    Examples of Creating a Bucket
    Topics
    • Using the Amazon S3 Console (p 65)
    • Using the AWS SDK for Java (p 65)
    • Using the AWS SDK for NET (p 66)
    • Using the AWS SDK for Ruby Version 2 (p 67)
    • Using Other AWS SDKs (p 67)
This section provides code examples of creating a bucket programmatically using the AWS SDKs for Java, .NET, and Ruby. The code examples perform the following tasks:
• Create a bucket if it does not exist — The examples create a bucket as follows:
• Create a client by explicitly specifying an AWS Region (the example uses the s3-eu-west-1 region). Accordingly, the client communicates with Amazon S3 using the s3-eu-west-1.amazonaws.com endpoint. You can specify any other AWS Region. For a list of available AWS Regions, see Regions and Endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name. The create bucket request does not specify another AWS Region; therefore, the client sends a request to Amazon S3 to create the bucket in the region you specified when creating the client.
Note
If you specify a region in your create bucket request that conflicts with the region you specify when you create the client, you might get an error. For more information, see Creating a Bucket (p 59).
The SDK libraries send the PUT Bucket request to Amazon S3 (see PUT Bucket) to create the bucket.
• Retrieve bucket location information — Amazon S3 stores bucket location information in the location subresource associated with the bucket. The SDK libraries send the GET Bucket location request (see GET Bucket location) to retrieve this information.
Using the Amazon S3 Console
To create a bucket using the Amazon S3 console, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
    Using the AWS SDK for Java
    For instructions on how to create and test a working sample see Testing the Java Code
    Examples (p 564)
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;

public class CreateBucket {
    private static String bucketName = "*** bucket name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3client.setRegion(Region.getRegion(Regions.US_WEST_1));

        try {
            if (!(s3client.doesBucketExist(bucketName))) {
                // Note that CreateBucketRequest does not specify region. So
                // the bucket is created in the region specified in the client.
                s3client.createBucket(new CreateBucketRequest(bucketName));
            }
            // Get location.
            String bucketLocation = s3client.getBucketLocation(new GetBucketLocationRequest(bucketName));
            System.out.println("bucket location = " + bucketLocation);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Using the AWS SDK for NET
    For information about how to create and test a working sample see Running the Amazon S3 NET
    Code Examples (p 566)
using System;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class CreateBucket
    {
        static string bucketName = "*** bucket name ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest1))
            {
                if (!(AmazonS3Util.DoesS3BucketExist(client, bucketName)))
                {
                    CreateABucket(client);
                }
                // Retrieve bucket location.
                string bucketLocation = FindBucketLocation(client);
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static string FindBucketLocation(IAmazonS3 client)
        {
            string bucketLocation;
            GetBucketLocationRequest request = new GetBucketLocationRequest()
            {
                BucketName = bucketName
            };
            GetBucketLocationResponse response = client.GetBucketLocation(request);
            bucketLocation = response.Location.ToString();
            return bucketLocation;
        }

        static void CreateABucket(IAmazonS3 client)
        {
            try
            {
                PutBucketRequest putRequest1 = new PutBucketRequest
                {
                    BucketName = bucketName,
                    UseClientRegion = true
                };

                PutBucketResponse response1 = client.PutBucket(putRequest1);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
    Using the AWS SDK for Ruby Version 2
    For information about how to create and test a working sample see Using the AWS SDK for Ruby
    Version 2 (p 568)
require 'aws-sdk'

s3 = Aws::S3::Client.new(region: 'us-west-1')
s3.create_bucket(bucket: 'bucket-name')
    Using Other AWS SDKs
    For information about using other AWS SDKs go to Sample Code and Libraries
    Deleting or Emptying a Bucket
    Topics
    • Delete a Bucket (p 68)
    • Empty a Bucket (p 71)
It is easy to delete an empty bucket; however, in some situations you may need to delete or empty a bucket that contains objects. In this section, we'll explain how to delete objects in an unversioned bucket (the default) and how to delete object versions and delete markers in a bucket that has versioning enabled. For more information about versioning, see Using Versioning (p 423). In some situations, you may choose to empty a bucket instead of deleting it. This section explains various options you can use to delete or empty a bucket that contains objects.
Delete a Bucket
You can delete a bucket and its content programmatically using the AWS SDKs. You can also use lifecycle configuration on a bucket to empty its content and then delete the bucket. There are additional options, such as using the Amazon S3 console and the AWS CLI, but there are limitations on these methods based on the number of objects in your bucket and the bucket's versioning status.
    Topics
    • Delete a Bucket Using the Amazon S3 Console (p 68)
    • Delete a Bucket Using the AWS CLI (p 68)
    • Delete a Bucket Using Lifecycle Configuration (p 68)
    • Delete a Bucket Using the AWS SDKs (p 69)
Delete a Bucket Using the Amazon S3 Console
The Amazon S3 console supports deleting a bucket that may or may not be empty. If the bucket is not empty, the Amazon S3 console supports deleting a bucket containing up to 100,000 objects. If your bucket contains more than 100,000 objects, you can use other options, such as the AWS CLI, bucket lifecycle configuration, or the AWS SDKs.
In the Amazon S3 console, open the context (right-click) menu on the bucket and choose Delete Bucket or Empty Bucket.
Delete a Bucket Using the AWS CLI
You can delete a bucket that contains objects using the AWS CLI only if the bucket does not have versioning enabled. If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command with the --force parameter to remove a non-empty bucket. This command deletes all objects first and then deletes the bucket.

aws s3 rb s3://bucket-name --force

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the AWS Command Line Interface User Guide.
To delete a non-empty bucket that does not have versioning enabled, you have the following options:
• Delete the bucket programmatically using the AWS SDK.
• First, delete all of the objects using the bucket's lifecycle configuration, and then delete the empty bucket using the Amazon S3 console.
Delete a Bucket Using Lifecycle Configuration
You can configure lifecycle on your bucket to expire objects; Amazon S3 then deletes expired objects. You can add lifecycle configuration rules to expire all or a subset of objects with a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.
After Amazon S3 deletes all of the objects in your bucket, you can delete the bucket or keep it.
Important
If you just want to empty the bucket and not delete it, make sure you remove the lifecycle configuration rule you added to empty the bucket so that any new objects you create in the bucket will remain in the bucket.
For more information, see Object Lifecycle Management (p 109) and Expiring Objects: General Considerations (p 112).
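You can add such an expiration rule with any of the interfaces that manage the lifecycle subresource. As an illustration only (the lifecycle examples in this guide use other SDKs), the following sketch uses the AWS SDK for Python (Boto3) to expire every object, and every noncurrent version, one day after creation; the bucket name and rule ID are placeholders.

import boto3

s3 = boto3.client('s3')

# Expire all current objects one day after creation; in a versioned bucket,
# also expire noncurrent versions one day after they become noncurrent.
s3.put_bucket_lifecycle_configuration(
    Bucket='examplebucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'empty-bucket-rule',
            'Filter': {'Prefix': ''},          # applies to every key in the bucket
            'Status': 'Enabled',
            'Expiration': {'Days': 1},
            'NoncurrentVersionExpiration': {'NoncurrentDays': 1},
        }]
    })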
    Delete a Bucket Using the AWS SDKs
    You can use the AWS SDKs to delete a bucket The following sections provide examples of how to
    delete a bucket using the AWS SDK for NET and Java First the code deletes objects in the bucket
    and then it deletes the bucket For information about other AWS SDKs see Tools for Amazon Web
    Services
    Delete a Bucket Using the AWS SDK for Java
    The following Java example deletes a nonempty bucket First the code deletes all objects and then it
    deletes the bucket The code example also works for buckets with versioning enabled
    For instructions on how to create and test a working sample see Testing the Java Code
    Examples (p 564)
import java.io.IOException;
import java.util.Iterator;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class DeleteBucketAndContent {
    private static String bucketName = "*** bucket name to delete ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        // Placeholder: specify the region where the bucket resides, for example Regions.US_WEST_1.
        s3client.setRegion(Region.getRegion(Regions.AWS_Region_Where_Bucket_Resides));

        try {
            System.out.println("Deleting S3 bucket: " + bucketName);
            ObjectListing objectListing = s3client.listObjects(bucketName);

            while (true) {
                // Delete all objects (the current versions, if versioning is enabled).
                for (Iterator<?> iterator = objectListing.getObjectSummaries().iterator(); iterator.hasNext(); ) {
                    S3ObjectSummary objectSummary = (S3ObjectSummary) iterator.next();
                    s3client.deleteObject(bucketName, objectSummary.getKey());
                }

                if (objectListing.isTruncated()) {
                    objectListing = s3client.listNextBatchOfObjects(objectListing);
                } else {
                    break;
                }
            }

            // Delete all object versions and delete markers (for versioned buckets).
            VersionListing list = s3client.listVersions(new ListVersionsRequest().withBucketName(bucketName));
            for (Iterator<?> iterator = list.getVersionSummaries().iterator(); iterator.hasNext(); ) {
                S3VersionSummary s = (S3VersionSummary) iterator.next();
                s3client.deleteVersion(bucketName, s.getKey(), s.getVersionId());
            }

            s3client.deleteBucket(bucketName);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Delete a Bucket Using the AWS SDK for NET
    The following NET example deletes a nonempty bucket First the code deletes all objects and then it
    deletes the bucket The code example also works for buckets with versioning enabled
    For instructions on how to create and test a working sample see Running the Amazon S3 NET Code
    Examples (p 566)
using System;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class DeleteBucketAndContent
    {
        static string bucketName = "*** bucket name to delete ***";

        public static void Main(string[] args)
        {
            try
            {
                // Placeholder: specify the region where the bucket resides,
                // for example Amazon.RegionEndpoint.USWest1.
                using (var client = new AmazonS3Client(Amazon.RegionEndpoint.AWSRegionWhereBucketResides))
                {
                    AmazonS3Util.DeleteS3BucketWithObjects(client, bucketName);
                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Empty a Bucket
You can empty a bucket's content (that is, delete all content, but keep the bucket) programmatically using the AWS SDK. You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete them. There are additional options, such as using the Amazon S3 console and the AWS CLI, but there are limitations on these methods based on the number of objects in your bucket and the bucket's versioning status.
    Topics
    • Empty a Bucket Using the Amazon S3 console (p 72)
    • Empty a Bucket Using the AWS CLI (p 72)
    • Empty a Bucket Using Lifecycle Configuration (p 72)
    • Empty a Bucket Using the AWS SDKs (p 73)
Empty a Bucket Using the Amazon S3 console
The Amazon S3 console supports emptying your bucket, provided that the bucket contains fewer than 100,000 objects. The Amazon S3 console returns an error if you attempt to empty a bucket that contains more than 100,000 objects. For example, if your bucket has versioning enabled, you can have one object with 101,000 object versions, and you will not be able to empty this bucket using the Amazon S3 console.
In the Amazon S3 console, open the context (right-click) menu on the bucket and choose Empty Bucket.
Empty a Bucket Using the AWS CLI
You can empty a bucket using the AWS CLI only if the bucket does not have versioning enabled. If your bucket does not have versioning enabled, you can use the rm (remove) AWS CLI command with the --recursive parameter to empty a bucket (or remove a subset of objects with a specific key name prefix).
The following rm command removes objects with the key name prefix doc, for example, doc/doc1 and doc/doc2.

aws s3 rm s3://bucket-name/doc --recursive

Use the following command to remove all objects without specifying a prefix.

aws s3 rm s3://bucket-name --recursive

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the AWS Command Line Interface User Guide.
Note
You cannot remove objects from a bucket with versioning enabled. Amazon S3 adds a delete marker when you delete an object, which is what this command will do. For more information about versioning, see Using Versioning (p 423).
To empty a bucket with versioning enabled, you have the following options:
• Delete the bucket programmatically using the AWS SDK.
• Use the bucket's lifecycle configuration to request that Amazon S3 delete the objects.
• Use the Amazon S3 console (you can use this option only if your bucket contains fewer than 100,000 items, including both object versions and delete markers).
Empty a Bucket Using Lifecycle Configuration
You can configure lifecycle on your bucket to expire objects and request that Amazon S3 delete the expired objects. You can add lifecycle configuration rules to expire all objects or a subset of objects with a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.
Caution
After your objects expire, Amazon S3 deletes the expired objects. If you just want to empty the bucket and not delete it, make sure you remove the lifecycle configuration rule you added to empty the bucket so that any new objects you create in the bucket will remain in the bucket.
For more information, see Object Lifecycle Management (p 109) and Expiring Objects: General Considerations (p 112).
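If you prefer to set such a rule programmatically rather than through the console, the AWS SDK for Java exposes the bucket lifecycle configuration API. The following is a minimal sketch under that approach, not one of this guide's samples; the bucket name is a placeholder and credentials are assumed to come from the default profile.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;

public class EmptyBucketWithLifecycleRule {
    public static void main(String[] args) {
        String bucketName = "*** bucket name ***"; // placeholder

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Rule that expires every object one day after creation; for a
        // versioning-enabled bucket it also expires noncurrent versions.
        BucketLifecycleConfiguration.Rule emptyBucketRule = new BucketLifecycleConfiguration.Rule()
                .withId("empty-bucket-rule")
                .withPrefix("")                            // apply the rule to all keys
                .withExpirationInDays(1)
                .withNoncurrentVersionExpirationInDays(1)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3Client.setBucketLifecycleConfiguration(bucketName,
                new BucketLifecycleConfiguration().withRules(emptyBucketRule));
    }
}

As the Caution above notes, remember to remove this rule after the bucket is empty so that new objects are not expired.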
Empty a Bucket Using the AWS SDKs
You can use the AWS SDKs to empty a bucket or remove a subset of objects with a specific key name prefix.
For an example of how to empty a bucket using the AWS SDK for Java, see Delete a Bucket Using the AWS SDK for Java (p 69). The code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the bucket. To just empty the bucket, make sure you remove the statement that deletes the bucket.
For more information about using other AWS SDKs, see Tools for Amazon Web Services.
Managing Bucket Website Configuration
Topics
• Managing Websites with the AWS Management Console (p 73)
• Managing Websites with the AWS SDK for Java (p 73)
• Managing Websites with the AWS SDK for .NET (p 76)
• Managing Websites with the AWS SDK for PHP (p 79)
• Managing Websites with the REST API (p 81)
You can host static websites in Amazon S3 by configuring your bucket for website hosting. For more information, see Hosting a Static Website on Amazon S3 (p 449). There are several ways you can manage your bucket's website configuration. You can use the AWS Management Console to manage configuration without writing any code. You can programmatically create, update, and delete the website configuration by using the AWS SDKs. The SDKs provide wrapper classes around the Amazon S3 REST API. If your application requires it, you can send REST API requests directly from your application.
Managing Websites with the AWS Management Console
For more information, see Configure a Bucket for Website Hosting (p 452).
Managing Websites with the AWS SDK for Java
The following tasks guide you through using the Java classes to manage the website configuration of your bucket. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p 449).
Managing Website Configuration
1. Create an instance of the AmazonS3 class.
2. To add website configuration to a bucket, execute the AmazonS3.setBucketWebsiteConfiguration method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. You must provide the index document, but the error document is optional. You provide website configuration information by creating a BucketWebsiteConfiguration object.
   To retrieve website configuration, execute the AmazonS3.getBucketWebsiteConfiguration method by providing the bucket name.
   To delete your bucket website configuration, execute the AmazonS3.deleteBucketWebsiteConfiguration method by providing the bucket name. After you remove the website configuration, the bucket is no longer available from the website endpoint. For more information, see Website Endpoints (p 450).
The following Java code sample demonstrates the preceding tasks.
    The following Java code sample demonstrates the preceding tasks
    AmazonS3 s3client new AmazonS3Client(new ProfileCredentialsProvider())
    Add website configuration
    s3ClientsetBucketWebsiteConfiguration(bucketName
    new BucketWebsiteConfiguration(indexDoc errorDoc))

    Get website configuration
    BucketWebsiteConfiguration bucketWebsiteConfiguration
    s3ClientgetBucketWebsiteConfiguration(bucketName)

    Delete website configuration
    s3ClientdeleteBucketWebsiteConfiguration(bucketName)
Example
The following Java code example adds a website configuration to the specified bucket, retrieves it, and then deletes the website configuration. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p 564).
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;

public class WebsiteConfiguration {
    private static String bucketName = "*** bucket name ***";
    private static String indexDoc = "*** index document name ***";
    private static String errorDoc = "*** error document name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        try {
            // Get existing website configuration, if any.
            getWebsiteConfig(s3Client);

            // Set new website configuration.
            s3Client.setBucketWebsiteConfiguration(bucketName,
                    new BucketWebsiteConfiguration(indexDoc, errorDoc));

            // Verify (get website configuration again).
            getWebsiteConfig(s3Client);

            // Delete.
            s3Client.deleteBucketWebsiteConfiguration(bucketName);
            // Verify (get website configuration again).
            getWebsiteConfig(s3Client);

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means " +
                    "the client encountered " +
                    "a serious internal problem while trying to " +
                    "communicate with Amazon S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    private static BucketWebsiteConfiguration getWebsiteConfig(AmazonS3 s3Client) {
        System.out.println("Get website config");

        // 1. Get website config.
        BucketWebsiteConfiguration bucketWebsiteConfiguration =
                s3Client.getBucketWebsiteConfiguration(bucketName);
        if (bucketWebsiteConfiguration == null) {
            System.out.println("No website config.");
        } else {
            System.out.println("Index doc: " +
                    bucketWebsiteConfiguration.getIndexDocumentSuffix());
            System.out.println("Error doc: " +
                    bucketWebsiteConfiguration.getErrorDocument());
        }
        return bucketWebsiteConfiguration;
    }
}
Managing Websites with the AWS SDK for .NET
The following tasks guide you through using the .NET classes to manage the website configuration on your bucket. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p 449).
Managing Bucket Website Configuration
1. Create an instance of the AmazonS3Client class.
2. To add website configuration to a bucket, execute the PutBucketWebsite method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. You must provide the index document, but the error document is optional. You provide this information by creating a PutBucketWebsiteRequest object.
   To retrieve website configuration, execute the GetBucketWebsite method by providing the bucket name.
   To delete your bucket website configuration, execute the DeleteBucketWebsite method by providing the bucket name. After you remove the website configuration, the bucket is no longer available from the website endpoint. For more information, see Website Endpoints (p 450).
The following C# code sample demonstrates the preceding tasks.
static IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

// Add website configuration.
PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
{
    BucketName = bucketName,
    WebsiteConfiguration = new WebsiteConfiguration()
    {
        IndexDocumentSuffix = indexDocumentSuffix,
        ErrorDocument = errorDocument
    }
};
client.PutBucketWebsite(putRequest);

// Get bucket website configuration.
GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
{
    BucketName = bucketName
};
GetBucketWebsiteResponse getResponse = client.GetBucketWebsite(getRequest);

// Print configuration data.
Console.WriteLine("Index document: {0}",
    getResponse.WebsiteConfiguration.IndexDocumentSuffix);
Console.WriteLine("Error document: {0}",
    getResponse.WebsiteConfiguration.ErrorDocument);

// Delete website configuration.
DeleteBucketWebsiteRequest deleteRequest = new DeleteBucketWebsiteRequest()
{
    BucketName = bucketName
};
client.DeleteBucketWebsite(deleteRequest);
Example
The following C# code example adds a website configuration to the specified bucket. The configuration specifies both the index document and the error document names. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p 566).
using System;
using System.Configuration;
using System.Collections.Specialized;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class AddWebsiteConfig
    {
        static string bucketName          = "*** Provide existing bucket name ***";
        static string indexDocumentSuffix = "*** Provide index document name ***";
        static string errorDocument       = "*** Provide error document name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
            {
                Console.WriteLine("Adding website configuration");
                AddWebsiteConfiguration(bucketName, indexDocumentSuffix, errorDocument);

                // Get bucket website configuration.
                GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
                {
                    BucketName = bucketName
                };
                GetBucketWebsiteResponse getResponse = client.GetBucketWebsite(getRequest);

                // Print configuration data.
                Console.WriteLine("Index document: {0}",
                    getResponse.WebsiteConfiguration.IndexDocumentSuffix);
                Console.WriteLine("Error document: {0}",
                    getResponse.WebsiteConfiguration.ErrorDocument);
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void AddWebsiteConfiguration(string bucketName,
                                            string indexDocumentSuffix,
                                            string errorDocument)
        {
            try
            {
                PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
                {
                    BucketName = bucketName,
                    WebsiteConfiguration = new WebsiteConfiguration()
                    {
                        IndexDocumentSuffix = indexDocumentSuffix,
                        ErrorDocument = errorDocument
                    }
                };
                client.PutBucketWebsite(putRequest);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                     amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("Sign up for the service at http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error:{0} occurred when adding website configuration. Message:'{1}'",
                        amazonS3Exception.ErrorCode,
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Managing Websites with the AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to configure and manage an Amazon S3 bucket for website hosting. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p 449).
Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly installed.
The following tasks guide you through using the PHP SDK classes to configure and manage an Amazon S3 bucket for website hosting.
Configuring a Bucket for Website Hosting
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2. To configure a bucket as a website, execute the Aws\S3\S3Client::putBucketWebsite() method. You need to provide the bucket name and the website configuration information, including the index document and the error document names. If you don't provide these document names, this method adds the index.html and error.html default names to the website configuration. You must verify that these documents are present in the bucket.
3. To retrieve existing bucket website configuration, execute the Aws\S3\S3Client::getBucketWebsite() method.
4. To delete website configuration from a bucket, execute the Aws\S3\S3Client::deleteBucketWebsite() method, passing the bucket name as a parameter. If you remove the website configuration, the bucket is no longer accessible from the website endpoints.
The following PHP code sample demonstrates the preceding tasks.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

// 1. Instantiate the client.
$s3 = S3Client::factory();

// 2. Add website configuration.
$result = $s3->putBucketWebsite(array(
    'Bucket'        => $bucket,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));

// 3. Retrieve website configuration.
$result = $s3->getBucketWebsite(array(
    'Bucket' => $bucket,
));
echo $result->getPath('IndexDocument/Suffix');

// 4. Delete website configuration.
$result = $s3->deleteBucketWebsite(array(
    'Bucket' => $bucket,
));
Example of Configuring an Amazon S3 Bucket for Website Hosting
The following PHP code example first adds a website configuration to the specified bucket, explicitly providing the index document and error document names. The sample also retrieves the website configuration and prints the response. For more information about the Amazon S3 website feature, see Hosting a Static Website on Amazon S3 (p 449).
For instructions on how to create and test a working sample, see Using the AWS SDK for PHP and Running PHP Examples (p 566).
<?php

// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// 1) Add website configuration.
$result = $s3->putBucketWebsite(array(
    'Bucket'        => $bucket,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));

// 2) Retrieve website configuration.
$result = $s3->getBucketWebsite(array(
    'Bucket' => $bucket,
));
echo $result->getPath('IndexDocument/Suffix');

// 3) Delete website configuration.
$result = $s3->deleteBucketWebsite(array(
    'Bucket' => $bucket,
));
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::deleteBucketWebsite() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::getBucketWebsite() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putBucketWebsite() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Managing Websites with the REST API
You can use the AWS Management Console or the AWS SDK to configure a bucket as a website. However, if your application requires it, you can send REST requests directly. For more information, see the following sections in the Amazon Simple Storage Service API Reference:
• PUT Bucket website
• GET Bucket website
• DELETE Bucket website
Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
When using Transfer Acceleration, additional data transfer charges may apply. For more information about pricing, see Amazon S3 Pricing.
    Topics
    • Why Use Amazon S3 Transfer Acceleration (p 81)
    • Getting Started with Amazon S3 Transfer Acceleration (p 82)
    • Requirements for Using Amazon S3 Transfer Acceleration (p 83)
    • Amazon S3 Transfer Acceleration Examples (p 83)
Why Use Amazon S3 Transfer Acceleration
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
• You have customers that upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You underutilize the available bandwidth over the Internet when uploading to Amazon S3.
For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.
Using the Amazon S3 Transfer Acceleration Speed Comparison Tool
You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and non-accelerated upload speeds across Amazon S3 regions. The Speed Comparison tool uses multipart uploads to transfer a file from your browser to various Amazon S3 regions with and without using Transfer Acceleration.
You can access the Speed Comparison tool using either of the following methods:
• Copy the following URL into your browser window, replacing region with the region that you are using (for example, us-west-2) and yourBucketName with the name of the bucket that you want to evaluate:

http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=yourBucketName

For a list of the regions supported by Amazon S3, see Regions and Endpoints in the Amazon Web Services General Reference.
• Use the Amazon S3 console. For details, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
Getting Started with Amazon S3 Transfer Acceleration
To get started using Amazon S3 Transfer Acceleration, perform the following steps:
1. Enable Transfer Acceleration on a bucket – For your bucket to work with Transfer Acceleration, the bucket name must conform to DNS naming requirements and must not contain periods (".").

You can enable Transfer Acceleration on a bucket in any of the following ways:
• Use the Amazon S3 console. For more information, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Using the AWS SDKs, CLI, and Explorers (p 560).

2. Transfer data to the acceleration-enabled bucket using the bucketname.s3-accelerate.amazonaws.com endpoint – When uploading to or downloading from the Transfer Acceleration enabled bucket, you must use the bucket endpoint domain name bucketname.s3-accelerate.amazonaws.com to get accelerated data transfers. You can find the unique Transfer Acceleration endpoint name for your bucket in the Amazon S3 management console.
Note
You can continue to use the regular endpoint in addition to the accelerate endpoint.
For example, let's say you currently have a REST API application using PUT Object that uses the host name mybucket.s3.amazonaws.com in the PUT request. To accelerate the PUT, you simply change the host name in your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard upload speed, simply change the name back to mybucket.s3.amazonaws.com.

You can use the new accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of how to use an accelerate endpoint client configuration flag, see Amazon S3 Transfer Acceleration Examples (p 83).
You can use all of the Amazon S3 operations through the Transfer Acceleration endpoint, except for the following operations: GET Service (list buckets), PUT Bucket (create bucket), and DELETE Bucket. Also, Amazon S3 Transfer Acceleration does not support cross-region copies using PUT Object - Copy.
Requirements for Using Amazon S3 Transfer Acceleration
The following are the requirements for using Transfer Acceleration on an S3 bucket:
• Transfer Acceleration is only supported on virtual style requests. For more information about virtual style requests, see Making Requests Using the REST API (p 49).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (".").
• Transfer Acceleration must be enabled on the bucket. After enabling Transfer Acceleration on a bucket, it might take up to thirty minutes before the data transfer speed to the bucket increases.
• You must use the endpoint bucketname.s3-accelerate.amazonaws.com to access the enabled bucket.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can assign permissions to other users to allow them to set the acceleration state on a bucket. The s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more information about these permissions, see Permissions Related to Bucket Subresource Operations (p 314) and Managing Access Permissions to Your Amazon S3 Resources (p 266).
• Transfer Acceleration is not Health Insurance Portability and Accountability Act (HIPAA) compliant.
Important
Transfer Acceleration uses AWS Edge infrastructure (edge locations), which are not Health Insurance Portability and Accountability Act (HIPAA) compliant. If your organization has personal health information (PHI) workloads covered under the HIPAA Business Associate Agreement (BAA), you can't use Transfer Acceleration. For more information, contact AWS Support at Contact Us.
    Related Topics
    • GET Bucket accelerate
    • PUT Bucket accelerate
Amazon S3 Transfer Acceleration Examples
This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use the acceleration endpoint for the enabled bucket. Some of the AWS SDK supported languages (for example, Java and .NET) use an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For more information about Transfer Acceleration, see Amazon S3 Transfer Acceleration (p 81).
    Topics
    • Using the Amazon S3 Console (p 84)
    • Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI) (p 84)
    • Using Transfer Acceleration from the AWS SDK for Java (p 85)
• Using Transfer Acceleration from the AWS SDK for .NET (p 88)
    • Using Other AWS SDKs (p 92)
Using the Amazon S3 Console
For information about enabling Transfer Acceleration on a bucket using the Amazon S3 console, see Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.
Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI)
This section provides examples of AWS CLI commands used for Transfer Acceleration. For instructions on setting up the AWS CLI, see Set Up the AWS CLI (p 562).
Enabling Transfer Acceleration on a Bucket Using the AWS CLI
Use the AWS CLI put-bucket-accelerate-configuration command to enable or suspend Transfer Acceleration on a bucket. The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use Status=Suspended to suspend Transfer Acceleration.

aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
Using Transfer Acceleration from the AWS CLI
Setting the configuration value use_accelerate_endpoint to true in a profile in your AWS Config File directs all Amazon S3 requests made by the s3 and s3api AWS CLI commands to the accelerate endpoint: s3-accelerate.amazonaws.com. Transfer Acceleration must be enabled on your bucket to use the accelerate endpoint.
All requests are sent using the virtual style of bucket addressing: mybucket.s3-accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests will not be sent to the accelerate endpoint, because the endpoint does not support those operations. For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration.
The following example sets use_accelerate_endpoint to true in the default profile.

aws configure set default.s3.use_accelerate_endpoint true

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use either of the following two methods:
• You can use the accelerate endpoint per command by setting the --endpoint-url parameter to https://s3-accelerate.amazonaws.com or http://s3-accelerate.amazonaws.com for any s3 or s3api command.
• You can set up separate profiles in your AWS Config File. For example, create one profile that sets use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint. When you execute a command, specify which profile you want to use, depending upon whether or not you want to use the accelerate endpoint (see the example after this list).
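For example, the following commands sketch that approach; the profile name accelerate is only an illustration and not part of the AWS CLI defaults.

aws configure set profile.accelerate.s3.use_accelerate_endpoint true
aws s3 cp file.txt s3://bucketname/keyname --region region --profile accelerate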
AWS CLI Examples of Uploading an Object to a Transfer Acceleration Enabled Bucket
The following example uploads a file to a Transfer Acceleration enabled bucket by using the default profile that has been configured to use the accelerate endpoint.

aws s3 cp file.txt s3://bucketname/keyname --region region

The following example uploads a file to a Transfer Acceleration enabled bucket by using the --endpoint-url parameter to specify the accelerate endpoint.

aws configure set s3.addressing_style virtual
aws s3 cp file.txt s3://bucketname/keyname --region region --endpoint-url http://s3-accelerate.amazonaws.com
Using Transfer Acceleration from the AWS SDK for Java
This section provides examples of using the AWS SDK for Java for Transfer Acceleration. For information about how to create and test a working Java sample, see Testing the Java Code Examples (p 564).
Java Example 1: Enable Amazon S3 Transfer Acceleration on a Bucket
The following Java example shows how to enable Transfer Acceleration on a bucket.
import java.io.IOException;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class BucketAccelerationConfiguration {

    public static String bucketName = "*** Provide bucket name ***";
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {

        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));

        // 1. Enable bucket for Amazon S3 Transfer Acceleration.
        s3Client.setBucketAccelerateConfiguration(new SetBucketAccelerateConfigurationRequest(bucketName,
                new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));

        // 2. Get the acceleration status of the bucket.
        String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
                new GetBucketAccelerateConfigurationRequest(bucketName)).getStatus();

        System.out.println("Acceleration status = " + accelerateStatus);
    }
}
Java Example 2: Uploading a Single Object to a Transfer Acceleration Enabled Bucket
The following Java example shows how to use the accelerate endpoint to upload a single object.
import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class AcceleratedUploadSingleObject {

    private static String bucketName = "*** Provide bucket name ***";
    private static String keyName = "*** Provide key name ***";
    private static String uploadFileName = "*** Provide file name with full path ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_WEST_2));

        // Use the Amazon S3 Transfer Acceleration endpoint.
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setAccelerateModeEnabled(true).build());

        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);
            s3Client.putObject(new PutObjectRequest(bucketName, keyName, file));
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Java Example 3: Multipart Upload to a Transfer Acceleration Enabled Bucket
The following Java example shows how to use the accelerate endpoint for a multipart upload.
import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class AccelerateMultipartUploadUsingHighLevelAPI {

    private static String EXISTING_BUCKET_NAME = "*** Provide bucket name ***";
    private static String KEY_NAME = "*** Provide key name ***";
    private static String FILE_PATH = "*** Provide file name with full path ***";

    public static void main(String[] args) throws Exception {

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.configureRegion(Regions.US_WEST_2);

        // Use the Amazon S3 Transfer Acceleration endpoint.
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setAccelerateModeEnabled(true).build());

        TransferManager tm = new TransferManager(s3Client);
        System.out.println("TransferManager");
        // TransferManager processes all transfers asynchronously,
        // so this call will return immediately.
        Upload upload = tm.upload(EXISTING_BUCKET_NAME, KEY_NAME, new File(FILE_PATH));
        System.out.println("Upload");

        try {
            // Or you can block and wait for the upload to finish.
            upload.waitForCompletion();
            System.out.println("Upload complete");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
Using Transfer Acceleration from the AWS SDK for .NET
This section provides examples of using the AWS SDK for .NET for Transfer Acceleration. For information about how to create and test a working .NET sample, see Running the Amazon S3 .NET Code Examples (p 566).
.NET Example 1: Enable Transfer Acceleration on a Bucket
The following .NET example shows how to enable Transfer Acceleration on a bucket.
using System;
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    class SetTransferAccelerateState
    {
        private static string bucketName = "Provide bucket name";

        public static void Main(string[] args)
        {
            using (var s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
            {
                try
                {
                    EnableTransferAcclerationOnBucket(s3Client);
                    BucketAccelerateStatus bucketAcclerationStatus = GetBucketAccelerateState(s3Client);

                    Console.WriteLine("Acceleration state = '{0}' ", bucketAcclerationStatus);
                }
                catch (AmazonS3Exception amazonS3Exception)
                {
                    if (amazonS3Exception.ErrorCode != null &&
                        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                    {
                        Console.WriteLine("Check the provided AWS Credentials.");
                        Console.WriteLine("To sign up for the service, go to http://aws.amazon.com/s3");
                    }
                    else
                    {
                        Console.WriteLine(
                            "Error occurred. Message:'{0}' when setting transfer acceleration",
                            amazonS3Exception.Message);
                    }
                }
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableTransferAcclerationOnBucket(IAmazonS3 s3Client)
        {
            PutBucketAccelerateConfigurationRequest request = new PutBucketAccelerateConfigurationRequest
            {
                BucketName = bucketName,
                AccelerateConfiguration = new AccelerateConfiguration
                {
                    Status = BucketAccelerateStatus.Enabled
                }
            };

            PutBucketAccelerateConfigurationResponse response = s3Client.PutBucketAccelerateConfiguration(request);
        }

        static BucketAccelerateStatus GetBucketAccelerateState(IAmazonS3 s3Client)
        {
            GetBucketAccelerateConfigurationRequest request = new GetBucketAccelerateConfigurationRequest
            {
                BucketName = bucketName
            };

            GetBucketAccelerateConfigurationResponse response = s3Client.GetBucketAccelerateConfiguration(request);
            return response.Status;
        }
    }
}
.NET Example 2: Uploading a Single Object to a Transfer Acceleration Enabled Bucket
The following .NET example shows how to use the accelerate endpoint to upload a single object.
using System;
using System.Collections.Generic;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;

namespace s3.amazon.com.docsamples
{
    public class UploadtoAcceleratedBucket
    {
        private static RegionEndpoint TestRegionEndpoint = RegionEndpoint.USWest2;
        private static string bucketName = "Provide bucket name";
        static string keyName = "*** Provide key name ***";
        static string filePath = "*** Provide filename of file to upload with the full path ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(new AmazonS3Config
            {
                RegionEndpoint = TestRegionEndpoint,
                UseAccelerateEndpoint = true
            }))
            {
                WriteObject(client);
                Console.WriteLine("Press any key to continue...");
                Console.ReadKey();
            }
        }

        static void WriteObject(IAmazonS3 client)
        {
            try
            {
                PutObjectRequest putRequest = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    FilePath = filePath
                };
                client.PutObject(putRequest);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                     amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
.NET Example 3: Multipart Upload to a Transfer Acceleration Enabled Bucket
The following .NET example shows how to use the accelerate endpoint for a multipart upload.
using System;
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class AcceleratedUploadFileMPUHAPI
    {
        private static RegionEndpoint TestRegionEndpoint = RegionEndpoint.USWest2;
        private static string existingBucketName = "Provide bucket name";
        private static string keyName = "*** Provide your object key ***";
        private static string filePath = "*** Provide file name with full path ***";

        static void Main(string[] args)
        {
            try
            {
                var client = new AmazonS3Client(new AmazonS3Config
                {
                    RegionEndpoint = TestRegionEndpoint,
                    UseAccelerateEndpoint = true
                });
                using (TransferUtility fileTransferUtility = new TransferUtility(client))
                {
                    // 1. Upload a file; the file name is used as the object key name.
                    fileTransferUtility.Upload(filePath, existingBucketName);
                    Console.WriteLine("Upload 1 completed");

                    // 2. Specify the object key name explicitly.
                    fileTransferUtility.Upload(filePath, existingBucketName, keyName);
                    Console.WriteLine("Upload 2 completed");

                    // 3. Upload data from a type of System.IO.Stream.
                    using (FileStream fileToUpload =
                        new FileStream(filePath, FileMode.Open, FileAccess.Read))
                    {
                        fileTransferUtility.Upload(fileToUpload,
                                                   existingBucketName,
                                                   keyName);
                    }
                    Console.WriteLine("Upload 3 completed");

                    // 4. Specify advanced settings/options.
                    TransferUtilityUploadRequest fileTransferUtilityRequest =
                        new TransferUtilityUploadRequest
                        {
                            BucketName = existingBucketName,
                            FilePath = filePath,
                            StorageClass = S3StorageClass.ReducedRedundancy,
                            PartSize = 6291456, // 6 MB
                            Key = keyName,
                            CannedACL = S3CannedACL.PublicRead
                        };
                    fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                    fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
                    fileTransferUtility.Upload(fileTransferUtilityRequest);
                    Console.WriteLine("Upload 4 completed");
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine("{0} {1}", s3Exception.Message,
                                  s3Exception.InnerException);
            }
        }
    }
}
Using Other AWS SDKs
For information about using other AWS SDKs, see Sample Code and Libraries.
Requester Pays Buckets
Topics
• Configure Requester Pays by Using the Amazon S3 Console (p 93)
• Configure Requester Pays with the REST API (p 93)
• DevPay and Requester Pays (p 96)
• Charge Details (p 96)
In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester, instead of the bucket owner, pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
Typically, you configure buckets to be Requester Pays when you want to share data but not incur charges associated with others accessing the data. You might, for example, use Requester Pays buckets when making available large data sets, such as zip code directories, reference data, geospatial information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.
When the requester assumes an AWS Identity and Access Management (IAM) role prior to making their request, the account to which the role belongs is charged for the request. For more information about IAM roles, see IAM Roles in the IAM User Guide.
After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests, either in the header for POST, GET, and HEAD requests, or as a parameter in a REST request, to show that they understand that they will be charged for the request and the data download.
Requester Pays buckets do not support the following:
• Anonymous requests
• BitTorrent
• SOAP requests
• You cannot use a Requester Pays bucket as the target bucket for end user logging, or vice versa; however, you can turn on end user logging on a Requester Pays bucket where the target bucket is not a Requester Pays bucket.
Configure Requester Pays by Using the Amazon S3 Console
You can configure a bucket for Requester Pays by using the Amazon S3 console.
To configure a bucket for Requester Pays
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Buckets list, click the details icon on the left of the bucket name, and then click Properties to display bucket properties.
3. In the Properties pane, click Requester Pays.
4. Select the Enabled check box.
    Configure Requester Pays with the REST API
    Topics
    • Setting the requestPayment Bucket Configuration (p 94)
    • Retrieving the requestPayment Configuration (p 94)
    • Downloading Objects in Requester Pays Buckets (p 95)
Setting the requestPayment Bucket Configuration
Only the bucket owner can set the RequestPaymentConfiguration.payer configuration value of a bucket to BucketOwner (the default) or Requester. Setting the requestPayment resource is optional. By default, the bucket is not a Requester Pays bucket.
To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the value to Requester before publishing the objects in the bucket.
To set requestPayment
• Use a PUT request to set the Payer value to Requester on a specified bucket.

PUT ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Content-Length: 173
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>

If the request succeeds, Amazon S3 returns a response similar to the following:

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester

You can set Requester Pays only at the bucket level; you cannot set Requester Pays for specific objects within the bucket.
You can configure a bucket to be BucketOwner or Requester at any time. Realize, however, that there might be a small delay, on the order of minutes, before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should think twice before configuring a bucket to be Requester Pays, especially if the URL has a very long lifetime. The bucket owner is charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
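If you use the AWS SDK for Java instead of sending the REST request yourself, the client exposes requester pays helpers that wrap the same requestPayment subresource. The following is a minimal sketch, not one of this guide's samples; the bucket name is a placeholder.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class RequesterPaysConfiguration {
    public static void main(String[] args) {
        String bucketName = "*** bucket name ***"; // placeholder

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Equivalent to setting the requestPayment Payer value to Requester.
        s3Client.enableRequesterPays(bucketName);

        // Returns true when the Payer value is Requester.
        System.out.println("Requester Pays enabled: " + s3Client.isRequesterPaysEnabled(bucketName));

        // To revert the bucket to the BucketOwner (default) setting:
        // s3Client.disableRequesterPays(bucketName);
    }
}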
Retrieving the requestPayment Configuration
You can determine the Payer value that is set on a bucket by requesting the resource requestPayment.
To return the requestPayment resource
• Use a GET request to obtain the requestPayment resource, as shown in the following request.

GET ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following:

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>

This response shows that the payer value is set to Requester.
Downloading Objects in Requester Pays Buckets
Because requesters are charged for downloading data from Requester Pays buckets, the requests must contain a special parameter, x-amz-request-payer, which confirms that the requester knows he or she will be charged for the download. To access objects in Requester Pays buckets, requests must include one of the following:
• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header.
• For signed URLs, include x-amz-request-payer=requester in the request.
If the request succeeds and the requester is charged, the response includes the header x-amz-request-charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature calculation. For more information, see Constructing the CanonicalizedAmzHeaders Element (p 579).
To download objects from a Requester Pays bucket
• Use a GET request to download an object from a Requester Pays bucket, as shown in the following request.

GET /[destinationObject] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer : requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the GET request succeeds and the requester is charged, the response includes x-amz-request-charged:requester.
Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester Pays bucket. For more information, go to Error Responses.
DevPay and Requester Pays
You can use Amazon DevPay to sell content that is stored in your Requester Pays bucket. For more information, go to Using Amazon S3 Requester Pays with DevPay.
Charge Details
The charge for successful Requester Pays requests is straightforward: the requester pays for the data transfer and the request; the bucket owner pays for the data storage. However, the bucket owner is charged for the request under the following conditions:
• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.
Buckets and Access Control
Each bucket has an associated access control policy. This policy governs the creation, deletion, and enumeration of objects within the bucket. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p 266).
Billing and Reporting of Buckets
Fees for object storage and network data transfer are always billed to the owner of the bucket that contains the object, unless the bucket was created as a Requester Pays bucket.
The reporting tools available at the AWS developer portal organize your Amazon S3 usage reports by bucket. For more information about cost considerations, see Amazon S3 Pricing.
Cost Allocation Tagging
You can use cost allocation tagging to label Amazon S3 buckets so that you can more easily track their cost against projects or other criteria.
Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging in About AWS Billing and Cost Management.
A cost allocation tag is a name-value pair that you define and associate with an Amazon S3 bucket. We recommend that you use a consistent set of tag keys to make it easier to track costs associated with your Amazon S3 buckets.
Each Amazon S3 bucket has a tag set, which contains all the tags that are assigned to that bucket. A tag set can contain as many as ten tags, or it can be empty.
If you add a tag that has the same key as an existing tag on a bucket, the new value overwrites the old value.
AWS does not apply any semantic meaning to your tags; tags are interpreted strictly as character strings. AWS does not automatically set any tags on buckets.
You can use the Amazon S3 console, the CLI, or the Amazon S3 API to add, list, edit, or delete tags. For more information about creating tags in the console, go to Managing Cost Allocation Tagging in the Amazon Simple Storage Service Console User Guide.
The following list describes the characteristics of a cost allocation tag:
• The tag key is the required name of the tag. The string value can contain 1 to 128 Unicode characters. It cannot be prefixed with "aws:". The string can contain only the set of Unicode letters, digits, whitespace, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
• The tag value is a required string value of the tag. The string value can contain from 1 to 256 Unicode characters. It cannot be prefixed with "aws:". The string can contain only the set of Unicode letters, digits, whitespace, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
Values do not have to be unique in a tag set, and they can be null. For example, you can have the same key-value pair in tag sets named project/Trinity and cost-center/Trinity.
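If you manage tags through the AWS SDK for Java rather than the console or the CLI, the bucket tagging API can be used along the following lines. This is a minimal sketch, not one of this guide's samples; the bucket name and the tag keys and values are placeholders.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketTaggingConfiguration;
import com.amazonaws.services.s3.model.TagSet;

public class BucketCostAllocationTagging {
    public static void main(String[] args) {
        String bucketName = "*** bucket name ***"; // placeholder

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Example tag keys and values; use a consistent set of keys across buckets.
        Map<String, String> tags = new HashMap<String, String>();
        tags.put("project", "Trinity");
        tags.put("cost-center", "Trinity");

        // Replace the bucket's tag set with the new one.
        s3Client.setBucketTaggingConfiguration(bucketName,
                new BucketTaggingConfiguration(Arrays.asList(new TagSet(tags))));
    }
}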
    Working with Amazon S3 Objects
Amazon S3 is a simple key, value store designed to store as many objects as you want. You store these objects in one or more buckets. An object consists of the following:
• Key – The name that you assign to an object. You use the object key to retrieve the object.
For more information, see Object Key and Metadata (p 99).
• Version ID – Within a bucket, a key and version ID uniquely identify an object.
The version ID is a string that Amazon S3 generates when you add an object to a bucket. For more information, see Object Versioning (p 106).
• Value – The content that you are storing.
An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more information, see Uploading Objects (p 157).
• Metadata – A set of name-value pairs with which you can store information regarding the object.
You can assign metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also assigns system metadata to these objects, which it uses for managing objects. For more information, see Object Key and Metadata (p 99).
• Subresources – Amazon S3 uses the subresource mechanism to store object-specific additional information.
Because subresources are subordinates to objects, they are always associated with some other entity such as an object or a bucket. For more information, see Object Subresources (p 105).
• Access Control Information – You can control access to the objects you store in Amazon S3. Amazon S3 supports both resource-based access control, such as an Access Control List (ACL) and bucket policies, and user-based access control. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p 266).
For more information about working with objects, see the following sections. Note that your Amazon S3 resources (for example, buckets and objects) are private by default. You will need to explicitly grant permission for others to access these resources. For example, you might want to share a video or a photo stored in your Amazon S3 bucket on your website. That will work only if you either make the object public or use a presigned URL on your website. For more information about sharing objects, see Share an Object with Others (p 152).
    Topics
    • Object Key and Metadata (p 99)
    • Storage Classes (p 103)
    • Object Subresources (p 105)
    • Object Versioning (p 106)
    • Object Lifecycle Management (p 109)
    • CrossOrigin Resource Sharing (CORS) (p 131)
    • Operations on Objects (p 142)
    Object Key and Metadata
    Topics
    • Object Keys (p 99)
    • Object Metadata (p 101)
Each Amazon S3 object has data, a key, and metadata. The object key (or key name) uniquely identifies the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata.
    Object Keys
When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.
Note
If you anticipate that your workload against Amazon S3 will exceed 100 requests per second, follow the Amazon S3 key naming guidelines for best performance. For information, see Request Rate and Performance Considerations (p 518).
Object Key Naming Guidelines
Although you can use any UTF-8 characters in an object key name, the following key naming best practices help ensure maximum compatibility with other applications. Each application may parse special characters differently. The following guidelines help you maximize compliance with DNS, web-safe characters, XML parsers, and other APIs.
Safe Characters
The following character sets are generally safe for use in key names:
• Alphanumeric characters [0-9a-zA-Z]
• Special characters !, -, _, ., *, ', (, and )
The following are examples of valid object key names:
• 4my-organization
• my.great_photos/2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv
Note that the Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose your bucket (companybucket) has four objects with the following object keys:
Development/Projects1.xls
Finance/statement1.pdf
Private/taxdocument.pdf
s3-dg.pdf
The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/') to present a folder structure.
The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket. If you open the Development/ folder, you will see the Projects1.xls object in it.
Note
Amazon S3 supports buckets and objects; there is no hierarchy in Amazon S3. However, the prefixes and delimiters in an object key name enable the Amazon S3 console and the AWS SDKs to infer hierarchy and introduce the concept of folders.
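As a rough illustration of this inferred hierarchy, the following AWS CLI command lists the hypothetical bucket used above with '/' as the delimiter. In a sketch like this, the keys under Development/, Finance/, and Private/ are returned as common prefixes (the "folders"), while s3-dg.pdf is returned as an ordinary key at the root of the listing.

aws s3api list-objects --bucket companybucket --delimiter /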
Characters That Might Require Special Handling
The following characters in a key name may require additional code handling and will likely need to be URL encoded or referenced as HEX. Some of these are nonprintable characters and your browser may not handle them, which will also require special handling:
• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces may be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")
Characters to Avoid
You should avoid the following characters in a key name because of significant special handling for consistency across all applications:
• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")
Object Metadata
There are two kinds of metadata: system metadata and user-defined metadata.
System-Defined Metadata
For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
There are two categories of system metadata:
• Metadata such as object creation date is system controlled, where only Amazon S3 can modify the value.
• Other system metadata, such as the storage class configured for the object and whether the object has server-side encryption enabled, are examples of system metadata whose values you control. If you have your bucket configured as a website, sometimes you might want to redirect a page request to another page or an external URL. In this case, a web page is an object in your bucket. Amazon S3 stores the page redirect value as system metadata whose value you control.
When you create objects, you can configure values of these system metadata items or update the values when you need to. For more information about storage class, see Storage Classes (p 103). For more information about server-side encryption, see Protecting Data Using Encryption (p 380).
The following list describes the system-defined metadata and whether you can update it.
• Date – Current date and time. Can user modify the value: No.
• Content-Length – Object size in bytes. Can user modify the value: No.
• Last-Modified – Object creation date or the last modified date, whichever is the latest. Can user modify the value: No.
• Content-MD5 – The base64-encoded 128-bit MD5 digest of the object. Can user modify the value: No.
• x-amz-server-side-encryption – Indicates whether server-side encryption is enabled for the object, and whether that encryption is from the AWS Key Management Service (SSE-KMS) or from AWS-Managed Encryption (SSE-S3). For more information, see Protecting Data Using Server-Side Encryption (p 381). Can user modify the value: Yes.
• x-amz-version-id – Object version. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects added to the bucket. For more information, see Using Versioning (p 423). Can user modify the value: No.
• x-amz-delete-marker – In a bucket that has versioning enabled, this Boolean marker indicates whether the object is a delete marker. Can user modify the value: No.
• x-amz-storage-class – Storage class used for storing the object. For more information, see Storage Classes (p 103). Can user modify the value: Yes.
• x-amz-website-redirect-location – Redirects requests for the associated object to another object in the same bucket or an external URL. For more information, see Configuring a Web Page Redirect (p 460). Can user modify the value: Yes.
• x-amz-server-side-encryption-aws-kms-key-id – If x-amz-server-side-encryption is present and has the value of aws:kms, this indicates the ID of the Key Management Service (KMS) master encryption key that was used for the object. Can user modify the value: Yes.
• x-amz-server-side-encryption-customer-algorithm – Indicates whether server-side encryption with customer-provided encryption keys (SSE-C) is enabled. For more information, see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) (p 395). Can user modify the value: Yes.
User-Defined Metadata
When uploading an object, you can also assign metadata to the object. You provide this optional information as a name-value (key-value) pair when you send a PUT or POST request to create the object. When uploading objects using the REST API, the optional user-defined metadata names must begin with "x-amz-meta-" to distinguish them from other HTTP headers. When you retrieve the object using the REST API, this prefix is returned. When uploading objects using the SOAP API, the prefix is not required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API you used to upload the object.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters, it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number of the unprintable metadata entries.
User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in lowercase. Each key-value pair must conform to US-ASCII when using REST and to UTF-8 when using SOAP or browser-based uploads via POST.
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
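As a rough sketch of supplying user-defined metadata at upload time, the following AWS CLI command sends the metadata map as x-amz-meta-* headers with the PUT request. The bucket name, key, and metadata entries here are placeholders rather than part of this guide's examples.

aws s3api put-object --bucket examplebucket --key photos/myvacation.jpg \
    --body myvacation.jpg --metadata location=Sydney,reviewed=true

When the object is retrieved (for example, with head-object), these entries come back as x-amz-meta-location and x-amz-meta-reviewed.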
    Storage Classes
Each object in Amazon S3 has a storage class associated with it. For example, if you list all objects in the bucket, the console shows the storage class for all the objects in the list.
Amazon S3 offers the following storage classes for the objects that you store. You choose one depending on your use case scenario and performance access requirements. All of these storage classes offer high durability.
• STANDARD – This storage class is ideal for performance-sensitive use cases and frequently accessed data.
STANDARD is the default storage class. If you don't specify a storage class at the time that you upload an object, Amazon S3 assumes the STANDARD storage class.
• STANDARD_IA – This storage class (IA, for infrequent access) is optimized for long-lived and less frequently accessed data, for example backups and older data where frequency of access has diminished, but the use case still demands high performance.
Note
There is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. For pricing information, see Amazon S3 Pricing.
For example, initially you might upload objects using the STANDARD storage class, and then use a bucket lifecycle configuration rule to transition objects (see Object Lifecycle Management (p 109)) to the STANDARD_IA (or GLACIER) storage class at some point in the object's lifetime. For more information about lifecycle management, see Object Lifecycle Management (p 109).
The STANDARD_IA objects are available for real-time access. The summary at the end of this section highlights some of the differences in these storage classes.
The STANDARD_IA storage class is suitable for larger objects greater than 128 Kilobytes that you want to keep for at least 30 days. For example, bucket lifecycle configuration has a minimum object size limit for Amazon S3 to transition objects. For more information, see Supported Transitions (p 110).
• GLACIER – The GLACIER storage class is suitable for archiving data where data access is infrequent and a retrieval time of several hours is acceptable. (Archived objects are not available for real-time access. You must first restore the objects before you can access them.)
The GLACIER storage class uses the very low-cost Amazon Glacier storage service, but you still manage objects in this storage class through Amazon S3. Note the following about the GLACIER storage class:
• You cannot specify GLACIER as the storage class at the time that you create an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management. For more information, see Object Lifecycle Management (p 109).
• You must first restore the GLACIER objects before you can access them (STANDARD, RRS, and STANDARD_IA objects are available for anytime access). For more information, see GLACIER Storage Class Additional Lifecycle Configuration Considerations (p 124).
To learn more about the Amazon Glacier service, see the Amazon Glacier Developer Guide.
All the preceding storage classes are designed to sustain the concurrent loss of data in two facilities (for details, see the durability and availability summary at the end of this section).
In addition to the performance requirements of your application scenario, there are also price/performance considerations. For the Amazon S3 storage classes and pricing, see Amazon S3 Pricing.
Amazon S3 also offers the following storage class that enables you to save costs by maintaining fewer redundant copies of your data:
• REDUCED_REDUNDANCY – The Reduced Redundancy Storage (RRS) storage class is designed for noncritical, reproducible data stored at lower levels of redundancy than the STANDARD storage class, which reduces storage costs. For example, if you upload an image and use the STANDARD storage class for it, you might compute a thumbnail and save it as an object of the RRS storage class.
The durability level (see the following summary) corresponds to an average annual expected loss of 0.01% of objects. For example, if you store 10,000 objects using the RRS option, you can, on average, expect to incur an annual loss of a single object per year (0.01% of 10,000 objects).
Note
This annual loss represents an expected average and does not guarantee the loss of less than 0.01% of objects in a given year.
RRS provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced.
If an RRS object is lost, Amazon S3 returns a 405 error on requests made to that object.
Amazon S3 can send an event notification to alert a user or start a workflow when it detects that an RRS object is lost. To receive notifications, you need to add a notification configuration to your bucket. For more information, see Configuring Amazon S3 Event Notifications (p 472).
The following summarizes the durability and availability offered by each of the storage classes.
• STANDARD – Durability (designed for): 99.999999999%. Availability (designed for): 99.99%. Other considerations: None.
• STANDARD_IA – Durability (designed for): 99.999999999%. Availability (designed for): 99.9%. Other considerations: There is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. For pricing information, see Amazon S3 Pricing.
• GLACIER – Durability (designed for): 99.999999999%. Availability (designed for): 99.99% (after you restore objects). Other considerations: GLACIER objects are not available for real-time access. You must first restore archived objects before you can access them, and restoring objects can take 3-4 hours. For more information, see Restoring Archived Objects (p 125).
• RRS – Durability (designed for): 99.99%. Availability (designed for): 99.99%. Other considerations: None.
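If you want to set a storage class explicitly when you upload an object, one way to do so is the --storage-class option of the AWS CLI, shown here only as an illustrative sketch; the bucket and file names are placeholders, and REDUCED_REDUNDANCY is also an accepted value.

aws s3 cp thumbnail.jpg s3://examplebucket/thumbs/thumbnail.jpg --storage-class STANDARD_IA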
    Object Subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are subordinates to objects; that is, subresources do not exist on their own, they are always associated with some other entity, such as an object or a bucket.
The following list describes the subresources associated with Amazon S3 objects.
• acl – Contains a list of grants identifying the grantees and the permissions granted. When you create an object, the acl identifies the object owner as having full control over the object. You can retrieve an object ACL or replace it with an updated list of grants. Any update to an ACL requires you to replace the existing ACL. For more information about ACLs, see Managing Access with ACLs (p 364).
• torrent – Amazon S3 supports the BitTorrent protocol. Amazon S3 uses the torrent subresource to return the torrent file associated with the specific object. To retrieve a torrent file, you specify the torrent subresource in your GET request. Amazon S3 creates a torrent file and returns it. You can only retrieve the torrent subresource; you cannot create, update, or delete the torrent subresource. For more information, see Using BitTorrent with Amazon S3 (p 531).
    Object Versioning
Versioning enables you to keep multiple versions of an object in one bucket, for example, my-image.jpg (version 111111) and my-image.jpg (version 222222). You might want to enable versioning to protect yourself from unintended overwrites and deletions, or to archive objects so that you can retrieve previous versions of them.
Note
The SOAP API does not support versioning. SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP.
Object versioning can be used in combination with Object Lifecycle Management (p 109), allowing you to customize your data retention needs while controlling your related storage costs. For more information about adding lifecycle configuration to versioning-enabled buckets using the AWS Management Console, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.
Important
If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.)
You must explicitly enable versioning on your bucket. By default, versioning is disabled. Regardless of whether you have enabled versioning, each object in your bucket has a version ID. If you have not enabled versioning, then Amazon S3 sets the version ID value to null. If you have enabled versioning, Amazon S3 assigns a unique version ID value for the object. When you enable versioning on a bucket, existing objects, if any, in the bucket are unchanged: the version IDs (null), contents, and permissions remain the same.
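As a quick illustration (the bucket name is a placeholder), versioning can be enabled on a bucket with the AWS CLI as follows; suspending it later uses Status=Suspended instead.

aws s3api put-bucket-versioning --bucket examplebucket \
    --versioning-configuration Status=Enabled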
Enabling and suspending versioning is done at the bucket level. When you enable versioning for a bucket, all objects added to it will have a unique version ID. Unique version IDs are randomly generated, Unicode, UTF-8 encoded, URL-ready, opaque strings that are at most 1024 bytes long. An example version ID is 3L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only Amazon S3 generates version IDs. They cannot be edited.
Note
For simplicity, we will use much shorter IDs in all our examples.
When you PUT an object in a versioning-enabled bucket, the noncurrent version is not overwritten. The following figure shows that when a new version of photo.gif is PUT into a bucket that already contains an object with the same name, the original object (ID = 111111) remains in the bucket, Amazon S3 generates a new version ID (121212), and adds the newer version to the bucket.
This functionality prevents you from accidentally overwriting or deleting objects and affords you the opportunity to retrieve a previous version of an object.
When you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker, as shown in the following figure.
The delete marker becomes the current version of the object. By default, GET requests retrieve the most recently stored version. Performing a simple GET Object request when the current version is a delete marker returns a 404 Not Found error, as shown in the following figure.
You can, however, GET a noncurrent version of an object by specifying its version ID. In the following figure, we GET a specific object version, 111111. Amazon S3 returns that object version even though it's not the current version.
You can permanently delete an object by specifying the version you want to delete. Only the owner of an Amazon S3 bucket can permanently delete a version. The following figure shows how DELETE versionId permanently deletes an object from a bucket and that Amazon S3 doesn't insert a delete marker.
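In AWS CLI terms, the two operations just described might look like the following sketch; the bucket, key, and short version ID are placeholders that follow this guide's simplified examples.

# GET a specific, possibly noncurrent, version of an object
aws s3api get-object --bucket examplebucket --key photo.gif --version-id 111111 photo.gif

# Permanently delete that version (no delete marker is inserted)
aws s3api delete-object --bucket examplebucket --key photo.gif --version-id 111111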
You can add additional security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete. When you do, the bucket owner must include two forms of authentication in any request to delete a version or change the versioning state of the bucket. For more information, see MFA Delete (p 424).
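A rough sketch of enabling MFA Delete with the AWS CLI follows; the bucket name, the MFA device ARN, and the six-digit code are placeholders that you would replace with your own values.

aws s3api put-bucket-versioning --bucket examplebucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"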
For more information, see Using Versioning (p 423).
    Object Lifecycle Management
This section provides an overview of the Amazon S3 lifecycle feature that you can use to manage the lifecycle of objects in your bucket.
What Is Lifecycle Configuration
You manage an object's lifecycle by using a lifecycle configuration, which defines how Amazon S3 manages objects during their lifetime. Lifecycle configuration enables you to simplify the lifecycle management of your objects, such as automated transition of less frequently accessed objects to low-cost storage alternatives and scheduled deletions. You can configure as many as 1,000 lifecycle rules per bucket.
You can define lifecycle configuration rules for objects that have a well-defined lifecycle. You can use lifecycle configurations for objects you want to switch to different storage classes or delete during their lifecycle, for example:
• If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them.
• Some documents are frequently accessed for a limited period of time. After that, these documents are less frequently accessed. Over time, you might not need real-time access to these objects, but your organization or regulations might require you to archive them for a longer period, and then optionally delete them later.
• You might also upload some types of data to Amazon S3 primarily for archival purposes, for example, digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.
    How Do I Configure a Lifecycle
You can specify a lifecycle configuration as XML. A lifecycle configuration comprises a set of rules with predefined actions that you want Amazon S3 to perform on objects during their lifetime. These actions include:
• Transition actions, in which you define when objects transition to another Amazon S3 storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions, in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information about lifecycle rules, see Lifecycle Configuration Elements (p 113).
Amazon S3 stores the configuration as a lifecycle subresource attached to your bucket. Using the Amazon S3 API, you can PUT, GET, or DELETE a lifecycle configuration. For more information, see PUT Bucket lifecycle, GET Bucket lifecycle, or DELETE Bucket lifecycle. You can also configure the lifecycle by using the Amazon S3 console or programmatically by using the AWS SDK wrapper libraries, and, if you need to, you can also make the REST API calls directly. Then Amazon S3 applies the lifecycle rules to all or specific objects identified in the rule.
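For example, once a lifecycle configuration has been attached to a bucket, the following AWS CLI commands (shown only as a sketch; the bucket name is a placeholder) retrieve and delete it through the lifecycle subresource.

aws s3api get-bucket-lifecycle --bucket examplebucket
aws s3api delete-bucket-lifecycle --bucket examplebucket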
    Transitioning Objects General Considerations
You can add rules in a lifecycle configuration to transition objects to another Amazon S3 storage class. For example, you might transition objects to the STANDARD_IA storage class when you know those objects are infrequently accessed. You might also want to archive objects that don't need real-time access to the GLACIER storage class. The following sections describe transitioning-related considerations and constraints.
Supported Transitions
In a lifecycle configuration, you can define rules to transition objects from one storage class to another. The following are supported transitions:
• From the STANDARD or REDUCED_REDUNDANCY storage classes to STANDARD_IA. The following constraints apply:
• Amazon S3 does not transition objects less than 128 Kilobytes in size to the STANDARD_IA storage class. Cost benefits of transitioning to STANDARD_IA can be realized for larger objects. For smaller objects it is not cost effective, and Amazon S3 will not transition them.
• Objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA. For example, you cannot create a lifecycle rule to transition objects to the STANDARD_IA storage class one day after creation.
Transitions before the first 30 days are not supported because often younger objects are accessed more frequently or deleted sooner than is suitable for STANDARD_IA.
• If you are transitioning noncurrent objects (versioned bucket scenario), you can transition to STANDARD_IA only objects that are at least 30 days noncurrent.
• From any storage class to GLACIER.
For more information, see GLACIER Storage Class Additional Lifecycle Configuration Considerations (p 124).
• You can combine these rules to manage an object's complete lifecycle, including a first transition to STANDARD_IA, a second transition to GLACIER for archival, and an expiration.
Note
When configuring lifecycle, the API will not allow you to create a lifecycle policy in which you specify both of these transitions but the GLACIER transition occurs less than 30 days after the STANDARD_IA transition. This is because such a lifecycle policy may increase costs because of the minimum 30-day storage charge associated with the STANDARD_IA storage class. For more information about cost considerations, see Amazon S3 Pricing.
For example, suppose the objects you create have a well-defined lifecycle. Initially, the objects are frequently accessed for a period of 30 days. After the initial period, the frequency of access diminishes, where objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed. You may choose to archive or delete them. You can use a lifecycle configuration to define the transition and expiration of objects that matches this example scenario (transition to STANDARD_IA 30 days after creation, transition to GLACIER 90 days after creation, and perhaps expire them after a certain number of days). As you tier down the object's storage class in the transition, you can benefit from the storage cost savings. For more information about cost considerations, see Amazon S3 Pricing.
You can think of lifecycle transitions as supporting storage class tiers (see Storage Classes (p 103)), which offer different costs and benefits. You may choose to transition an object to another storage class in the object's lifetime for cost saving considerations, and lifecycle configuration enables you to do that. For example, to manage storage costs, you might configure lifecycle to change an object's storage class from STANDARD, which is the most available and durable storage class, to STANDARD_IA (IA, for infrequent access), and then to the GLACIER storage class (where the objects are archived and only available after you restore them). These transitions can lower your storage costs.
The following are not supported transitions:
• You cannot transition from STANDARD_IA to STANDARD or REDUCED_REDUNDANCY.
• You cannot transition from GLACIER to any other storage class.
• You cannot transition from any storage class to REDUCED_REDUNDANCY.
Transitioning to the GLACIER storage class (Object Archival)
Using lifecycle configuration, you can transition objects to the GLACIER storage class, that is, archive data to Amazon Glacier, a lower-cost storage solution. Before you archive objects, note the following:
• Objects in the GLACIER storage class are not available in real time.
Archived objects are Amazon S3 objects, but before you can access an archived object, you must first restore a temporary copy of it. The restored object copy is available only for the duration you specify in the restore request. After that, Amazon S3 deletes the temporary copy, and the object remains archived in Amazon Glacier.
Note that object restoration from an archive can take up to five hours.
You can restore an object by using the Amazon S3 console or programmatically by using the AWS SDKs wrapper libraries or the Amazon S3 REST API in your code. For more information, see POST Object restore; a brief CLI sketch follows this list.
• The transition of objects to the GLACIER storage class is one-way.
You cannot use a lifecycle configuration rule to convert the storage class of an object from GLACIER to Standard or RRS. If you want to change the storage class of an already archived object to either Standard or RRS, you must use the restore operation to make a temporary copy first. Then use the copy operation to overwrite the object as a STANDARD, STANDARD_IA, or REDUCED_REDUNDANCY object.
• The GLACIER storage class objects are visible and available only through Amazon S3, not through Amazon Glacier.
Amazon S3 stores the archived objects in Amazon Glacier; however, these are Amazon S3 objects, and you can access them only by using the Amazon S3 console or the API. You cannot access the archived objects through the Amazon Glacier console or the API.
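As referenced in the restore bullet above, the following is a sketch of initiating a restore with the AWS CLI (the POST Object restore operation); the bucket name, key, and the seven-day duration of the temporary copy are placeholders.

aws s3api restore-object --bucket examplebucket --key archive/report.pdf \
    --restore-request Days=7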
    Expiring Objects General Considerations
When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously. There may be a delay between the expiration date and the date at which Amazon S3 removes an object. You are not charged for storage time associated with an object that has expired.
To find when an object is scheduled to expire, you can use the HEAD Object or the GET Object APIs. These APIs return response headers that provide object expiration information.
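For example, a HEAD request made through the AWS CLI (the bucket and key below are placeholders) returns the expiration information when a lifecycle rule applies to the object; in REST terms, this appears in the x-amz-expiration response header.

aws s3api head-object --bucket examplebucket --key logs/2014/jan/log1.txt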
There are additional cost considerations if you put a lifecycle policy to expire objects that have been in STANDARD_IA for less than 30 days, or GLACIER for less than 90 days. For more information about cost considerations, see Amazon S3 Pricing.
Lifecycle and Other Bucket Configurations
In addition to lifecycle configuration, your bucket can have other configurations associated with it. This section explains how lifecycle configuration relates to other bucket configurations.
Lifecycle and Versioning
You can add lifecycle configuration to nonversioned buckets and versioning-enabled buckets. For more information, see Object Versioning (p 106). A versioning-enabled bucket maintains one current and zero or more noncurrent object versions. You can define separate lifecycle rules for current and noncurrent versions.
For more information, see Lifecycle Configuration Elements (p 113). For information about versioning, see Object Versioning (p 106).
Lifecycle and MFA Enabled Buckets
Lifecycle configuration on MFA-enabled buckets is not supported.
Lifecycle and Logging
If you have logging enabled on your bucket, Amazon S3 reports the results of expiration actions as follows:
• If the lifecycle expiration action results in Amazon S3 permanently removing the object, Amazon S3 reports it as operation S3.EXPIRE.OBJECT in the log record.
• For a versioning-enabled bucket, if the lifecycle expiration action results in a logical deletion of the current version, in which Amazon S3 adds a delete marker, Amazon S3 reports the logical deletion as operation S3.CREATE.DELETEMARKER in the log record. For more information, see Object Versioning (p 106).
• When Amazon S3 transitions an object to the GLACIER storage class, it reports it as operation S3.TRANSITION.OBJECT in the log record to indicate it has initiated the operation. When the transition is to the STANDARD_IA storage class, it is reported as S3.TRANSITION_SIA.OBJECT.
    Related Topics
    • Lifecycle Configuration Elements (p 113)
    • GLACIER Storage Class Additional Lifecycle Configuration Considerations (p 124)
    • Specifying a Lifecycle Configuration (p 125)
    Lifecycle Configuration Elements
    Topics
    • ID Element (p 114)
    • Status Element (p 114)
    • Prefix Element (p 114)
    • Elements to Describe Lifecycle Actions (p 115)
    • Examples of Lifecycle Configuration (p 117)
You specify a lifecycle policy configuration as XML. It consists of one or more lifecycle rules. Each rule consists of the following:
• Rule metadata that includes a rule ID and status indicating whether the rule is enabled or disabled. If a rule is disabled, Amazon S3 will not perform any actions specified in the rule.
• Prefix identifying objects by the key prefix to which the rule applies.
• One or more transition/expiration actions with a date or a time period in the object's lifetime when you want Amazon S3 to perform the specified action.
The following are two introductory example configurations.
Example 1: Lifecycle configuration
Suppose you want to transition objects with the key prefix documents/ to the GLACIER storage class one year after you create them, and then permanently remove them 10 years after you created them. You can accomplish this by attaching the following lifecycle configuration to the bucket.


<LifecycleConfiguration>
  <Rule>
    <ID>sample-rule</ID>
    <Prefix>documents/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
The lifecycle configuration defines one rule that applies to objects with the key name prefix documents/. The rule specifies two actions (Transition and Expiration). The rule is in effect because the rule status is Enabled.
Example 2: Lifecycle configuration on a versioning-enabled bucket
If your bucket is versioning-enabled, you have one current object version and zero or more noncurrent versions. For more information, see Object Versioning (p 106).
For a versioning-enabled bucket, the lifecycle actions apply as follows:
• Transition and Expiration actions apply to current versions.
• NoncurrentVersionTransition and NoncurrentVersionExpiration actions apply to noncurrent versions.
The following example lifecycle configuration has one rule that applies to objects with the key name prefix logs/. The rule specifies two actions for noncurrent versions:
• The NoncurrentVersionTransition action directs Amazon S3 to transition noncurrent objects to the GLACIER storage class 30 days after the objects become noncurrent.
• The NoncurrentVersionExpiration action directs Amazon S3 to permanently remove the noncurrent objects 180 days after they become noncurrent.


<LifecycleConfiguration>
  <Rule>
    <ID>sample-rule</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>180</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
The following sections describe these XML elements in a lifecycle configuration.
ID Element
A lifecycle configuration can have up to 1,000 rules. The ID element uniquely identifies a rule.
Status Element
The Status element value can be either Enabled or Disabled. If a rule is disabled, Amazon S3 will not perform any of the actions defined in the rule.
Prefix Element
The Prefix element identifies objects to which the rule applies. If you specify an empty prefix, the rule applies to all objects in the bucket. If you specify a key name prefix, the rule applies only to the objects whose key name begins with the specified string. For more information about object keys, see Object Keys (p 99).
Elements to Describe Lifecycle Actions
You can direct Amazon S3 to perform specific actions in an object's lifetime by specifying one or more of the following predefined actions in a lifecycle rule. The effect of these actions depends on the versioning state of your bucket.
• Transition action element – You specify the Transition action to transition objects from one storage class to another. For more information about transitioning objects, see Supported Transitions (p 110). When a specified date or time period in the object's lifetime is reached, Amazon S3 performs the transition.
For a versioned bucket (versioning-enabled or versioning-suspended bucket), the Transition action applies to the current object version. To manage noncurrent versions, Amazon S3 defines the NoncurrentVersionTransition action (described below).
• Expiration action element – The Expiration action expires objects identified in the rule. Amazon S3 makes all expired objects unavailable. Whether the objects are permanently removed depends on the versioning state of the bucket.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle configuration action that is described later in this section.
• Non-versioned bucket – The Expiration action results in Amazon S3 permanently removing the object.
• Versioned bucket – For a versioned bucket, versioning-enabled or versioning-suspended (see Using Versioning (p 423)), there are several considerations that guide how Amazon S3 handles the expiration action. Regardless of the versioning state, the following applies:
• The expiration action applies only to the current version (it has no impact on noncurrent object versions).
• Amazon S3 will not take any action if there are one or more object versions and the delete marker is the current version.
• If the current object version is the only object version and it is also a delete marker (also referred to as the expired object delete marker, where all object versions are deleted and you only have a delete marker remaining), Amazon S3 will remove the expired object delete marker. You can also use the expiration action to direct Amazon S3 to remove any expired object delete markers. For an example, see Example 8: Removing Expired Object Delete Markers (p 121).
Important
Amazon S3 will remove an expired object delete marker no sooner than 48 hours after the object expired.
The additional considerations for Amazon S3 to manage expiration are as follows:
• Versioning-enabled bucket
If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique version ID, making the current version noncurrent and the delete marker the current version.
• Versioning-suspended bucket
In a versioning-suspended bucket, the expiration action causes Amazon S3 to create a delete marker with null as the version ID. This delete marker will replace any object version with a null version ID in the version hierarchy, which effectively deletes the object.
In addition, Amazon S3 provides the following actions that you can use to manage noncurrent object versions in a versioned bucket (versioning-enabled and versioning-suspended buckets):
• NoncurrentVersionTransition action element – Use this action to specify how long (from the time the objects became noncurrent) you want the objects to remain in the current storage class before Amazon S3 transitions them to the specified storage class. For more information about transitioning objects, see Supported Transitions (p 110).
• NoncurrentVersionExpiration action element – Use this action to specify how long (from the time the objects became noncurrent) you want to retain noncurrent object versions before Amazon S3 permanently removes them. The deleted object cannot be recovered.
This delayed removal of noncurrent objects can be helpful when you need to correct any accidental deletes or overwrites. For example, you can configure an expiration rule to delete noncurrent versions five days after they become noncurrent. For example, suppose on 1/1/2014 10:30 AM UTC you create an object called photo.gif (version ID 111111). On 1/2/2014 11:30 AM UTC you accidentally delete photo.gif (version ID 111111), which creates a delete marker with a new version ID (such as version ID 4857693). You now have five days to recover the original version of photo.gif (version ID 111111) before the deletion is permanent. On 1/8/2014 00:00 UTC, the lifecycle rule for expiration executes and permanently deletes photo.gif (version ID 111111), five days after it became a noncurrent version.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle configuration action that is described later in this section.
In addition to the transition and expiration actions, you can use the following lifecycle configuration action to direct Amazon S3 to abort incomplete multipart uploads:
• AbortIncompleteMultipartUpload action element – Use this element to set a maximum time (in days) that you want to allow multipart uploads to remain in progress. If the applicable multipart uploads (determined by the key name prefix specified in the lifecycle rule) are not successfully completed within the predefined time period, Amazon S3 will abort the incomplete multipart uploads. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p 167).
How Amazon S3 Calculates How Long an Object Has Been Noncurrent
In a versioning-enabled bucket, you can have multiple versions of an object; there is always one current version and zero or more noncurrent versions. Each time you upload an object, the current version is retained as a noncurrent version and the newly added version, the successor, becomes current. To determine the number of days an object is noncurrent, Amazon S3 looks at when its successor was created. Amazon S3 uses the number of days since its successor was created as the number of days an object is noncurrent.
Restoring Previous Versions of an Object When Using Lifecycle Configurations
As explained in detail in the topic Restoring Previous Versions (p 442), there are two methods to retrieve previous versions of an object:
1. By copying a noncurrent version of the object into the same bucket. The copied object becomes the current version of that object, and all object versions are preserved.
2. By permanently deleting the current version of the object. When you delete the current object version, you, in effect, turn the noncurrent version into the current version of that object.
When using lifecycle configuration rules with versioning-enabled buckets, we recommend as a best practice that you use the first method.
Because of Amazon S3's eventual consistency semantics, a current version that you permanently deleted may not disappear until the changes propagate (Amazon S3 may be unaware of this deletion). And, in the meantime, the lifecycle you configured to expire noncurrent objects may permanently remove noncurrent objects, including the one you want to restore. So copying the old version, as recommended in the first method, is the safer alternative.
Lifecycle Rules Based on the Object Age
You can specify a time period, in number of days from the creation (or modification) of the objects, when Amazon S3 can take the action.
When you specify the number of days in the Transition and Expiration actions in a lifecycle configuration, note the following:
• It is the number of days since object creation when the action will be taken.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the object creation time and rounding the resulting time to the next day midnight UTC. For example, if an object was created at 1/15/2014 10:30 AM UTC and you specify 3 days in a transition rule, then the transition date of the object would be calculated as 1/19/2014 00:00 UTC.
Note
Amazon S3 maintains only the last modified date for each object. For example, the Amazon S3 console shows the Last Modified date in the object Properties pane. When you initially create a new object, this date reflects the date the object is created. If you replace the object, the date will change accordingly. So when we use the term creation date, it is synonymous with the term last modified date.
When specifying the number of days in the NoncurrentVersionTransition and NoncurrentVersionExpiration actions in a lifecycle configuration, note the following:
• It is the number of days from when the version of the object becomes noncurrent (that is, since the object was overwritten or deleted) that Amazon S3 uses as the time period for when it will take the action on the specified object or objects.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the time when the new successor version of the object is created, and rounding the resulting time to the next day midnight UTC. For example, if in your bucket you have a current version of an object that was created at 1/1/2014 10:30 AM UTC, and the new successor version of the object that replaces the current version is created at 1/15/2014 10:30 AM UTC, and you specify 3 days in a transition rule, then the transition date of the object would be calculated as 1/19/2014 00:00 UTC.
Lifecycle Rules Based on a Specific Date
When specifying an action in a lifecycle configuration, you can specify a date when you want Amazon S3 to take the action. The date-based rules trigger action on all objects created on or before this date. For example, a rule to transition to GLACIER on 6/30/2015 will transition all objects created on or before this date (note that the rule applies every day after the specified date, and not just on the specified date, as long as the rule is in effect).
Note
You cannot create a date-based rule using the AWS Management Console, but you can view, disable, or delete such rules.
Examples of Lifecycle Configuration
This section provides examples of lifecycle configuration. Each example shows how you can specify the XML in each of the example scenarios.
Example 1: Specify a Lifecycle Rule for a Subset of Objects in a Bucket
The following lifecycle configuration rule is applied to a subset of objects with the key name prefix projectdocs/. The rule specifies two actions requesting Amazon S3 to do the following:
• Transition objects to the GLACIER storage class 365 days (one year) after creation.
• Delete objects (the Expiration action) 3650 days (10 years) after creation.


<LifecycleConfiguration>
  <Rule>
    <ID>Transition and Expiration Rule</ID>
    <Prefix>projectdocs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Instead of specifying object age in terms of days after creation, you can specify a date for each action; however, you cannot use both Date and Days in the same rule.
Example 2: Specify a Lifecycle Rule that Applies to All Objects in the Bucket
If you specify an empty Prefix in a lifecycle rule, it applies to all objects in the bucket. Suppose you create a bucket only for archiving objects to GLACIER. You can set a lifecycle configuration requesting Amazon S3 to transition objects to the GLACIER storage class immediately after creation, as shown. The lifecycle configuration defines one rule with an empty Prefix. The rule specifies a Transition action requesting Amazon S3 to transition objects to the GLACIER storage class 0 days after creation, in which case objects are eligible for archival to Amazon Glacier at midnight UTC following creation.


<LifecycleConfiguration>
  <Rule>
    <ID>Archive all object same-day upon creation</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 3: Disable a Lifecycle Rule
You can temporarily disable a lifecycle rule. The following lifecycle configuration specifies two rules; however, one of them is disabled. Amazon S3 will not perform any action specified in a rule that is disabled.
<LifecycleConfiguration>
  <Rule>
    <ID>30 days log objects expire rule</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>1 year documents expire rule</ID>
    <Prefix>documents/</Prefix>
    <Status>Disabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 4: Tiering Down Storage Class Over Object Lifetime
In this example, you leverage lifecycle configuration to tier down the storage class of objects over their lifetime. This tiering down can help reduce storage costs. For more information about pricing, see Amazon S3 Pricing.
The following lifecycle configuration specifies a rule that applies to objects with the key name prefix logs/. The rule specifies the following actions:
• Two transition actions:
• Transition objects to the STANDARD_IA storage class 30 days after creation.
• Transition objects to the GLACIER storage class 90 days after creation.
• An expiration action directing Amazon S3 to delete objects a year after creation.


<LifecycleConfiguration>
  <Rule>
    <ID>example-id</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Transition>
      <Days>90</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Note
You can use one rule to describe all lifecycle actions if all actions apply to the same set of objects (identified by the prefix). Otherwise, you can add multiple rules, each specifying a different key name prefix.
Example 5: Specify Multiple Rules
You can specify multiple rules if you want different lifecycle actions for different objects. The following lifecycle configuration has two rules:
• Rule 1 applies to objects with the key name prefix classA/. It directs Amazon S3 to transition objects to the GLACIER storage class one year after creation and expire these objects 10 years after creation.
• Rule 2 applies to objects with the key name prefix classB/. It directs Amazon S3 to transition objects to the STANDARD_IA storage class 90 days after creation and delete them one year after creation.


<LifecycleConfiguration>
  <Rule>
    <ID>ClassADocRule</ID>
    <Prefix>classA/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>ClassBDocRule</ID>
    <Prefix>classB/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>90</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Example 6: Specify Multiple Rules with Overlapping Prefixes
In the following example, you have two rules that specify overlapping prefixes:
• The first rule specifies an empty prefix, indicating all objects in the bucket.
• The second rule specifies a subset of objects in the bucket with the key name prefix logs/.
These overlapping prefixes are fine; there is no conflict. Rule 1 requests Amazon S3 to delete all objects one year after creation, and Rule 2 requests Amazon S3 to transition a subset of objects to the STANDARD_IA storage class 30 days after creation.


<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>30</Days>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Example 7: Specify a Lifecycle Rule for a Versioning-Enabled Bucket
Suppose you have a versioning-enabled bucket, which means that for each object you have a current version and zero or more noncurrent versions. You want to maintain one year's worth of history and then delete the noncurrent versions. For more information about versioning, see Object Versioning (p 106).
Also, you want to save storage costs by moving noncurrent versions to GLACIER 30 days after they become noncurrent (assuming cold data for which you will not need real-time access). In addition, you also expect the frequency of access of the current versions to diminish 90 days after creation, so you might choose to move these objects to the STANDARD_IA storage class.


<LifecycleConfiguration>
  <Rule>
    <ID>sample-rule</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>90</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>365</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
Example 8: Removing Expired Object Delete Markers
A versioning-enabled bucket has one current version and one or more noncurrent versions for each
object. When you delete an object, note that:
• If you don't specify a version ID in your delete request, Amazon S3 adds a delete marker instead of
deleting the object. The current object version becomes noncurrent, and the delete marker becomes
the current version.
• If you specify a version ID in your delete request, Amazon S3 deletes the object version permanently
(a delete marker is not created).
• A delete marker with zero noncurrent versions is referred to as the expired object delete marker.
    This example shows a scenario that can create expired object delete markers in your bucket and how
    you can use lifecycle configuration to direct Amazon S3 to remove the expired object delete markers
    Suppose you write a lifecycle policy that specifies the NoncurrentVersionExpiration action to
    remove the noncurrent versions 30 days after they become noncurrent as shown




<LifecycleConfiguration>
  <Rule>
    ...
    <NoncurrentVersionExpiration>
      <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>

Note that the NoncurrentVersionExpiration action does not apply to the current object versions;
it only removes noncurrent versions.
For current object versions, you have the following options to manage their lifetime, depending on
whether or not the current object versions follow a well-defined lifecycle:
• Current object versions follow a well-defined lifecycle.
In this case, you can use a lifecycle policy with the Expiration action to direct Amazon S3 to remove
current versions, as shown in the following example.




<LifecycleConfiguration>
  <Rule>
    ...
    <Expiration>
      <Days>60</Days>
    </Expiration>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>

Amazon S3 removes current versions 60 days after they are created by adding a delete marker for
each of the current object versions. This makes the current version noncurrent, and the delete marker
becomes the current version (see Using Versioning (p 423)).
The NoncurrentVersionExpiration action in the same lifecycle configuration removes
noncurrent objects 30 days after they become noncurrent. Thus, all object versions are removed and
you have expired object delete markers, but Amazon S3 detects and removes the expired object delete
markers for you.
• Current object versions don't have a well-defined lifecycle.
In this case, you might remove the objects manually when you don't need them, creating
a delete marker with one or more noncurrent versions. If a lifecycle configuration with the
NoncurrentVersionExpiration action removes all the noncurrent versions, you now have expired
object delete markers.
Specifically for this scenario, Amazon S3 lifecycle configuration provides an Expiration action where
you can request Amazon S3 to remove the expired object delete markers:

<LifecycleConfiguration>
  <Rule>
    ...
    <Expiration>
      <ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
    </Expiration>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>


By setting the ExpiredObjectDeleteMarker element to true in the Expiration action, you direct
Amazon S3 to remove expired object delete markers. Amazon S3 will remove an expired object delete
marker no sooner than 48 hours after the object expired.
The following put-bucket-lifecycle CLI command adds the lifecycle configuration for the
specified bucket:
aws s3api put-bucket-lifecycle \
--bucket bucketname \
--lifecycle-configuration filename-containing-lifecycle-configuration
Note
If you have trouble getting the following test procedure to work, make sure that you have the
latest version of the AWS CLI installed.
To test the CLI command, do the following:
1. Set up the AWS CLI. For instructions, see Set Up the AWS CLI (p 562).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example policy
specifies an empty prefix, so it applies to all objects. You could specify a key name prefix to limit the
action to a subset of objects.
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "TestOnly"
        }
    ]
}
3. Run the following CLI command to set the lifecycle configuration on your bucket:
aws s3api put-bucket-lifecycle \
--bucket bucketname \
--lifecycle-configuration file://lifecycle.json
4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command:
aws s3api get-bucket-lifecycle \
--bucket bucketname
5. To delete the lifecycle configuration, use the delete-bucket-lifecycle CLI command:
aws s3api delete-bucket-lifecycle \
--bucket bucketname
    GLACIER Storage Class Additional Lifecycle
    Configuration Considerations
    Topics
    • Cost Considerations (p 124)
    • Restoring Archived Objects (p 125)
For objects that you do not need to access in real time, Amazon S3 also offers the GLACIER storage
class. This storage class is suitable for objects stored primarily for archival purposes. For more
information, see Storage Classes (p 103).
The lifecycle configuration enables a one-way transition to the GLACIER storage class. To change the
storage class from GLACIER to another storage class, you must restore the object, as discussed in the
following section, and then make a copy of the restored object.
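As an illustration only, the following is a minimal sketch of that restore-then-copy flow using the AWS SDK for Java. The bucket name, key, restore period, and target storage class are placeholder values, and the copy can succeed only after the restore has completed.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.RestoreObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

public class ChangeStorageClassFromGlacier {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Step 1: Request a temporary restored copy that stays available for 5 days
        // (placeholder bucket name, key, and duration).
        s3Client.restoreObject(new RestoreObjectRequest("examplebucket", "archived-key", 5));

        // Step 2: After the restore completes, copy the object over itself with the
        // new storage class. The copy, not the restore, changes the storage class.
        CopyObjectRequest copyRequest = new CopyObjectRequest(
                "examplebucket", "archived-key", "examplebucket", "archived-key")
                .withStorageClass(StorageClass.Standard);
        s3Client.copyObject(copyRequest);
    }
}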
    Cost Considerations
If you are planning to archive infrequently accessed data for a period of months or years, the GLACIER
storage class will usually reduce your storage costs. You should, however, consider the following in
order to ensure that the GLACIER storage class is appropriate for you:
• Storage overhead charges – When you transition objects to the GLACIER storage class, a fixed
amount of storage is added to each object to accommodate metadata for managing the object.
• For each object archived to Amazon Glacier, Amazon S3 uses 8 KB of storage for the name of
the object and other metadata. Amazon S3 stores this metadata so that you can get a real-time
list of your archived objects by using the Amazon S3 API (see Get Bucket (List Objects)). You are
charged standard Amazon S3 rates for this additional storage.
• For each archived object, Amazon Glacier adds 32 KB of storage for index and related metadata.
This extra data is necessary to identify and restore your object. You are charged Amazon Glacier
rates for this additional storage.
If you are archiving small objects, consider these storage charges (a worked example follows this list). Also consider aggregating a large
number of small objects into a smaller number of large objects in order to reduce overhead costs.
• Number of days you plan to keep objects archived – Amazon Glacier is a long-term archival
solution. Deleting data that is archived to Amazon Glacier is free if the objects you delete are
archived for three months or longer. If you delete or overwrite an object within three months of
archiving it, Amazon S3 charges a prorated early deletion fee.
• Glacier archive request charges – Each object that you transition to the GLACIER storage class
constitutes one archive request. There is a cost for each such request. If you plan to transition a
large number of objects, consider the request costs.
• Glacier data restore charges – Amazon Glacier is designed for long-term archival of data that you
will access infrequently. Data restore charges are based on how quickly you restore data, which is
measured as your peak billable restore rate in GB/hr for the entire month. Within a month, you are
charged only for the peak billable restore rate, and there is no charge for restoring data at less than
the monthly peak billable restore rate. Before initiating a large restore, carefully review the pricing
FAQ to determine how you will be billed for restoring data.
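As a worked illustration of the storage overhead charges described in the first bullet (using example numbers only): if you archive 1,000,000 objects that average 100 KB each (about 100 GB of data), Amazon S3 adds roughly 1,000,000 × 8 KB = 8 GB of metadata billed at standard Amazon S3 rates, and Amazon Glacier adds roughly 1,000,000 × 32 KB = 32 GB billed at Amazon Glacier rates. That is about 40 GB of overhead, or roughly 40 percent of the data itself, which is why aggregating many small objects into fewer large objects before archiving can significantly reduce the overhead cost.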
    When you archive objects to Amazon Glacier by using object lifecycle management Amazon S3
    transitions these objects asynchronously There may be a delay between the transition date in the
    lifecycle configuration rule and the date of the physical transition You are charged Amazon Glacier
    prices based on the transition date specified in the rule
    The Amazon S3 product detail page provides pricing information and example calculations for
    archiving Amazon S3 objects For more information see the following topics
    • How is my storage charge calculated for Amazon S3 objects archived to Amazon Glacier
    • How am I charged for deleting objects from Amazon Glacier that are less than 3 months old
    • Amazon S3 Pricing for storage costs for the Standard and GLACIER storage classes This page also
    provides Glacier Archive Request costs
    • How will I be charged for restoring large amounts of data from Amazon Glacier
    Restoring Archived Objects
    Archived objects are not accessible in realtime You must first initiate a restore request and then wait
    until a temporary copy of the object is available for the duration that you specify in the request Restore
    jobs typically complete in three to five hours so it is important that you archive only objects that you will
    not need to access in real time
    After you receive a temporary copy of the restored object the object's storage class remains GLACIER
    (a GET or HEAD request will return GLACIER as the storage class) Note that when you restore an
    archive you are paying for both the archive (GLACIER rate) and a copy you restored temporarily (RRS
    rate) For information about pricing see Amazon S3 Pricing
    You can restore an object copy programmatically or by using the Amazon S3 console Amazon S3 will
    process only one restore request at a time per object You can use both the console and the Amazon
    S3 API to check the restoration status and to find out when Amazon S3 will delete the restored copy
    Restoring GLACIER Objects by Using Amazon S3 Console
    For information about restoring archived objects stored using the GLACIER storage class by using the
    Amazon S3 console see Restore an Archived Object Using the Amazon S3 Console (p 259)
Restoring GLACIER Objects Programmatically
You can restore GLACIER objects programmatically, directly from your application, by using either the
AWS SDKs or the Amazon S3 API. When you use the AWS SDKs, the Amazon S3 API provides
appropriate wrapper libraries to simplify your programming tasks; however, when the request is sent
over the wire, the SDK sends the corresponding restore request XML in the request body. For information about restoring
objects programmatically, see Restoring Archived Objects (p 259).
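As an illustration only, a minimal restore request with the AWS SDK for Java might look like the following sketch. The bucket name, key, and seven-day restore period are placeholder values, and error handling and polling are omitted; see Restoring Archived Objects (p 259) for the complete, supported examples.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.RestoreObjectRequest;

public class RestoreArchivedObjectSketch {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Ask Amazon S3 to restore a temporary copy of the archived object and
        // keep it available for 7 days (placeholder bucket name, key, and duration).
        s3Client.restoreObject(new RestoreObjectRequest("examplebucket", "archived-key", 7));

        // Check the restoration status. While the restore is in progress,
        // getOngoingRestore() returns true; after it completes,
        // getRestoreExpirationTime() reports when the temporary copy expires.
        ObjectMetadata metadata = s3Client.getObjectMetadata("examplebucket", "archived-key");
        System.out.println("Restore in progress: " + metadata.getOngoingRestore());
        System.out.println("Temporary copy expires: " + metadata.getRestoreExpirationTime());
    }
}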
    Specifying a Lifecycle Configuration
    Topics
    • Manage an Object's Lifecycle Using the AWS Management Console (p 126)
    • Manage Object Lifecycle Using the AWS SDK for Java (p 127)
    • Manage Object Lifecycle Using the AWS SDK for NET (p 129)
    • Manage an Object's Lifecycle Using the AWS SDK for Ruby (p 131)
    • Manage Object Lifecycle Using the REST API (p 131)
You can set a lifecycle configuration on a bucket either programmatically by using the Amazon S3
API or by using the Amazon S3 console. When you add a lifecycle configuration to a bucket, there is
usually some lag before a new or updated lifecycle configuration is fully propagated to all the Amazon
S3 systems. Expect a delay of a few minutes before the lifecycle configuration fully takes effect. This
delay can also occur when you delete a lifecycle configuration.
When you disable or delete a lifecycle rule, after a small delay Amazon S3 stops scheduling new
objects for deletion or transition. Any objects that were already scheduled will be unscheduled and will
not be deleted or transitioned.
Note
When you add a lifecycle configuration to a bucket, the configuration rules apply to
both existing objects and objects that you add later. For example, if you add a lifecycle
configuration rule today with an expiration action that causes objects with a specific prefix to
expire 30 days after creation, Amazon S3 will queue for removal any existing objects that are
more than 30 days old.
There may be a lag between when the lifecycle configuration rules are satisfied and when the action
triggered by satisfying the rule is taken. However, changes in billing happen as soon as the lifecycle
configuration rule is satisfied, even if the action is not yet taken. For example, you are not charged for
storage after the object expiration time, even if the object is not deleted immediately. Likewise, you are
charged Amazon Glacier storage rates as soon as the object transition time elapses, even if
the object is not transitioned to Amazon Glacier immediately.
    For information about specifying the lifecycle by using the Amazon S3 console or programmatically by
    using AWS SDKs click the links provided at the beginning of this topic
    Manage an Object's Lifecycle Using the AWS Management
    Console
You can specify lifecycle rules on a bucket by using the Amazon S3 console. In the console, the bucket
Properties pane provides a Lifecycle tab where you define the rules. For more
information, see Object Lifecycle Management (p 109).
Step-by-Step Instructions
For instructions on how to set up lifecycle rules using the AWS Management Console, see Managing
Lifecycle Configuration in the Amazon S3 Console User Guide.
    Manage Object Lifecycle Using the AWS SDK for Java
You can use the AWS SDK for Java to manage lifecycle configuration on a bucket. For more
information about managing lifecycle configuration, see Object Lifecycle Management (p 109).
The example code in this topic does the following:
• Adds a lifecycle configuration with two rules:
• A rule that applies to objects with the glacierobjects key name prefix. The rule specifies a
transition action that directs Amazon S3 to transition these objects to the GLACIER storage class.
Because the number of days specified is 0, the objects become eligible for archival immediately.
• A rule that applies to objects with the projectdocs key name prefix. The rule specifies two
transition actions, directing Amazon S3 to first transition objects to the STANDARD_IA (IA, for
infrequent access) storage class 30 days after creation, and then transition them to the GLACIER
storage class 365 days after creation. The rule also specifies an expiration action directing Amazon
S3 to delete these objects 3650 days after creation.
• Retrieves the lifecycle configuration.
• Updates the configuration by adding another rule that applies to objects with the
YearlyDocuments key name prefix. The expiration action in this rule directs Amazon S3 to delete
these objects 3650 days after creation.
Note
When you add a lifecycle configuration to a bucket, any existing lifecycle configuration
is replaced. To update an existing lifecycle configuration, you must first retrieve the existing
lifecycle configuration, make changes, and then add the revised lifecycle configuration to the
bucket.
Example: Java Code Example
The following Java code example provides a complete code listing that adds, updates, and deletes
a lifecycle configuration on a bucket. You need to update the code and provide the name of the bucket to
which the code can add the example lifecycle configuration.
For instructions on how to create and test a working sample, see Testing the Java Code
Examples (p 564).
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.List;
import java.util.TimeZone;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
import com.amazonaws.services.s3.model.StorageClass;

public class LifecycleConfiguration {
    public static String bucketName = "*** Provide bucket name ***";
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {
        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            BucketLifecycleConfiguration.Rule rule1 =
                new BucketLifecycleConfiguration.Rule()
                    .withId("Archive immediately rule")
                    .withPrefix("glacierobjects")
                    .addTransition(new Transition()
                        .withDays(0)
                        .withStorageClass(StorageClass.Glacier))
                    .withStatus(BucketLifecycleConfiguration.ENABLED.toString());

            BucketLifecycleConfiguration.Rule rule2 =
                new BucketLifecycleConfiguration.Rule()
                    .withId("Archive and then delete rule")
                    .withPrefix("projectdocs")
                    .addTransition(new Transition()
                        .withDays(30)
                        .withStorageClass(StorageClass.StandardInfrequentAccess))
                    .addTransition(new Transition()
                        .withDays(365)
                        .withStorageClass(StorageClass.Glacier))
                    .withExpirationInDays(3650)
                    .withStatus(BucketLifecycleConfiguration.ENABLED.toString());

            BucketLifecycleConfiguration configuration =
                new BucketLifecycleConfiguration()
                    .withRules(Arrays.asList(rule1, rule2));

            // Save the configuration.
            s3Client.setBucketLifecycleConfiguration(bucketName, configuration);

            // Retrieve the configuration.
            configuration = s3Client.getBucketLifecycleConfiguration(bucketName);

            // Add a new rule.
            configuration.getRules().add(
                new BucketLifecycleConfiguration.Rule()
                    .withId("NewRule")
                    .withPrefix("YearlyDocuments")
                    .withExpirationInDays(3650)
                    .withStatus(BucketLifecycleConfiguration.ENABLED.toString()));

            // Save the configuration.
            s3Client.setBucketLifecycleConfiguration(bucketName, configuration);

            // Retrieve the configuration and verify that there are now three rules.
            configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
            System.out.format("Expected # of rules = 3; found: %s\n",
                configuration.getRules().size());

            System.out.println("Deleting lifecycle configuration. Next we verify deletion.");
            // Delete the configuration.
            s3Client.deleteBucketLifecycleConfiguration(bucketName);

            // Retrieve the nonexistent configuration.
            configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
            String s = (configuration == null) ? "No configuration found." : "Configuration found.";
            System.out.println(s);
        } catch (AmazonS3Exception amazonS3Exception) {
            System.out.format("An Amazon S3 error occurred. Exception: %s",
                amazonS3Exception.toString());
        } catch (Exception ex) {
            System.out.format("Exception: %s", ex.toString());
        }
    }
}
Manage Object Lifecycle Using the AWS SDK for .NET
You can use the AWS SDK for .NET to manage lifecycle configuration on a bucket. For more
information about managing lifecycle configuration, see Object Lifecycle Management (p 109). The
example code in this topic does the following:
• Adds a lifecycle configuration with two rules:
• A rule that applies to objects with the glacierobjects key name prefix. The rule specifies a
transition action that directs Amazon S3 to transition these objects to the GLACIER storage class.
Because the number of days specified is 0, the objects become eligible for archival immediately.
• A rule that applies to objects with the projectdocs key name prefix. The rule specifies two
transition actions, directing Amazon S3 to first transition objects to the STANDARD_IA (IA, for
infrequent access) storage class 30 days after creation, and then transition them to the GLACIER
storage class 365 days after creation. The rule also specifies an expiration action directing Amazon
S3 to delete these objects 3650 days after creation.
• Retrieves the lifecycle configuration.
• Updates the configuration by adding another rule that applies to objects with the
YearlyDocuments key name prefix. The expiration action in this rule directs Amazon S3 to delete
these objects 3650 days after creation.
Note
When you add a lifecycle configuration to a bucket, any existing lifecycle configuration
is replaced. To update an existing lifecycle configuration, you must first retrieve the existing
lifecycle configuration, make changes, and then add the revised lifecycle configuration to the
bucket.
Example: .NET Code Example
The following C# code example provides a complete code listing that adds, updates, and deletes a
lifecycle configuration on a bucket. You need to update the code and provide the name of the bucket to
which the code can add the example lifecycle configuration.
Note
The following code works with the latest version of the .NET SDK.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p 566).
using System;
using System.Collections.Generic;
using System.Diagnostics;
using Amazon.S3;
using Amazon.S3.Model;

namespace aws.amazon.com.s3.documentation
{
    class LifeCycleTest
    {
        static string bucketName = "*** provide bucket name ***";

        public static void Main(string[] args)
        {
            try
            {
                using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
                {
                    var lifeCycleConfiguration = new LifecycleConfiguration()
                    {
                        Rules = new List<LifecycleRule>
                        {
                            new LifecycleRule
                            {
                                Id = "Archive immediately rule",
                                Prefix = "glacierobjects",
                                Status = LifecycleRuleStatus.Enabled,
                                Transitions = new List<LifecycleTransition>
                                {
                                    new LifecycleTransition
                                    {
                                        Days = 0,
                                        StorageClass = S3StorageClass.Glacier
                                    }
                                }
                            },
                            new LifecycleRule
                            {
                                Id = "Archive and then delete rule",
                                Prefix = "projectdocs",
                                Status = LifecycleRuleStatus.Enabled,
                                Transitions = new List<LifecycleTransition>
                                {
                                    new LifecycleTransition
                                    {
                                        Days = 30,
                                        StorageClass = S3StorageClass.StandardInfrequentAccess
                                    },
                                    new LifecycleTransition
                                    {
                                        Days = 365,
                                        StorageClass = S3StorageClass.Glacier
                                    }
                                },
                                Expiration = new LifecycleRuleExpiration()
                                {
                                    Days = 3650
                                }
                            }
                        }
                    };

                    // Add the configuration to the bucket.
                    PutLifeCycleConfiguration(client, lifeCycleConfiguration);

                    // Retrieve an existing configuration.
                    lifeCycleConfiguration = GetLifeCycleConfiguration(client);

                    // Add a new rule.
                    lifeCycleConfiguration.Rules.Add(new LifecycleRule
                    {
                        Id = "NewRule",
                        Prefix = "YearlyDocuments",
                        Expiration = new LifecycleRuleExpiration()
                        {
                            Days = 3650
                        }
                    });

                    // Add the configuration to the bucket.
                    PutLifeCycleConfiguration(client, lifeCycleConfiguration);

                    // Verify that there are now three rules.
                    lifeCycleConfiguration = GetLifeCycleConfiguration(client);
                    Console.WriteLine("Expected # of rules = 3; found: {0}",
                        lifeCycleConfiguration.Rules.Count);

                    Console.WriteLine("Deleting lifecycle configuration. Next we verify deletion.");
                    // Delete the configuration.
                    DeleteLifecycleConfiguration(client);

                    // Retrieve a nonexistent configuration.
                    lifeCycleConfiguration = GetLifeCycleConfiguration(client);
                    Debug.Assert(lifeCycleConfiguration == null);
                }

                Console.WriteLine("Example complete. To continue, click Enter...");
                Console.ReadKey();
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("S3 error occurred. Exception: " + amazonS3Exception.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
            }
        }

        static void PutLifeCycleConfiguration(IAmazonS3 client, LifecycleConfiguration configuration)
        {
            PutLifecycleConfigurationRequest request = new PutLifecycleConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };

            var response = client.PutLifecycleConfiguration(request);
        }

        static LifecycleConfiguration GetLifeCycleConfiguration(IAmazonS3 client)
        {
            GetLifecycleConfigurationRequest request = new GetLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            var response = client.GetLifecycleConfiguration(request);
            var configuration = response.Configuration;
            return configuration;
        }

        static void DeleteLifecycleConfiguration(IAmazonS3 client)
        {
            DeleteLifecycleConfigurationRequest request = new DeleteLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            client.DeleteLifecycleConfiguration(request);
        }
    }
}
    Manage an Object's Lifecycle Using the AWS SDK for Ruby
You can use the AWS SDK for Ruby to manage lifecycle configuration on a bucket by using the class
AWS::S3::BucketLifecycleConfiguration. For more information about using the AWS SDK for Ruby with
Amazon S3, go to Using the AWS SDK for Ruby Version 2 (p 568). For more information about
managing lifecycle configuration, see Object Lifecycle Management (p 109).
    Manage Object Lifecycle Using the REST API
    You can use the AWS Management Console to set the lifecycle configuration on your bucket If your
    application requires it you can also send REST requests directly The following sections in the Amazon
    Simple Storage Service API Reference describe the REST API related to the lifecycle configuration
    • PUT Bucket lifecycle
    • GET Bucket lifecycle
    • DELETE Bucket lifecycle
    CrossOrigin Resource Sharing (CORS)
    Crossorigin resource sharing (CORS) defines a way for client web applications that are loaded in one
    domain to interact with resources in a different domain With CORS support in Amazon S3 you can
    build rich clientside web applications with Amazon S3 and selectively allow crossorigin access to your
    Amazon S3 resources
    This section provides an overview of CORS The subtopics describe how you can enable CORS using
    the Amazon S3 console or programmatically using the Amazon S3 REST API and the AWS SDKs
    Topics
    • CrossOrigin Resource Sharing Usecase Scenarios (p 131)
    • How Do I Configure CORS on My Bucket (p 132)
    • How Does Amazon S3 Evaluate the CORS Configuration On a Bucket (p 134)
    • Enabling CrossOrigin Resource Sharing (CORS) (p 134)
    • Troubleshooting CORS Issues (p 142)
    CrossOrigin Resource Sharing Usecase
    Scenarios
The following are example scenarios for using CORS:
• Scenario 1: Suppose you are hosting a website in an Amazon S3 bucket named website as
described in Hosting a Static Website on Amazon S3 (p 449). Your users load the website
endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use
JavaScript on the web pages that are stored in this bucket to be able to make authenticated GET
and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket,
website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those
requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests
from website.s3-website-us-east-1.amazonaws.com.
• Scenario 2: Suppose you want to host a web font from your S3 bucket. Again, browsers require a
CORS check (also referred to as a preflight check) for loading web fonts, so you would configure the
bucket that is hosting the web font to allow any origin to make these requests.
    How Do I Configure CORS on My Bucket
To configure your bucket to allow cross-origin requests, you create a CORS configuration, an XML
document with rules that identify the origins that you will allow to access your bucket, the operations
(HTTP methods) you will support for each origin, and other operation-specific information.
You can add up to 100 rules to the configuration. You add the XML document as the cors subresource
to the bucket either programmatically or by using the Amazon S3 console. For more information, see
Enabling Cross-Origin Resource Sharing (CORS) (p 134).
The following example cors configuration has three rules, which are specified as CORSRule elements:
• The first rule allows cross-origin PUT, POST, and DELETE requests from the
http://www.example1.com origin. The rule also allows all headers in a preflight OPTIONS request through
the Access-Control-Request-Headers header. In response to any preflight OPTIONS request,
Amazon S3 will return any requested headers.
• The second rule allows the same cross-origin requests as the first rule, but the rule applies to another
origin, http://www.example2.com.
• The third rule allows cross-origin GET requests from all origins. The '*' wildcard character refers to all
origins.


<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example1.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>http://www.example2.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>

The CORS configuration also allows optional configuration parameters, as shown in the following
CORS configuration. In this example, the CORS configuration allows cross-origin PUT and
POST requests from the http://www.example.com origin.


<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
    <ExposeHeader>x-amz-request-id</ExposeHeader>
    <ExposeHeader>x-amz-id-2</ExposeHeader>
  </CORSRule>
</CORSConfiguration>

The CORSRule element in the preceding configuration includes the following optional elements:
• MaxAgeSeconds—Specifies the amount of time in seconds (in this example, 3000) that the browser
will cache an Amazon S3 response to a preflight OPTIONS request for the specified resource. By
caching the response, the browser does not have to send preflight requests to Amazon S3 if the
original request is to be repeated.
• ExposeHeader—Identifies the response headers (in this example, x-amz-server-side-encryption,
x-amz-request-id, and x-amz-id-2) that customers will be able to access from
their applications (for example, from a JavaScript XMLHttpRequest object).
    AllowedMethod Element
    In the CORS configuration you can specify the following values for the AllowedMethod element
    • GET
    • PUT
    • POST
    • DELETE
    • HEAD
AllowedOrigin Element
In the AllowedOrigin element, you specify the origins that you want to allow cross-domain requests
from, for example, http://www.example.com. The origin string can contain at most one * wildcard
character, such as http://*.example.com. You can optionally specify * as the origin to enable all
the origins to send cross-origin requests. You can also specify https to enable only secure origins.
AllowedHeader Element
The AllowedHeader element specifies which headers are allowed in a preflight request through
the Access-Control-Request-Headers header. Each header name in the Access-Control-Request-Headers
header must match a corresponding entry in the rule. Amazon S3 will send only
the allowed headers in a response that were requested. For a sample list of headers that can be used
in requests to Amazon S3, go to Common Request Headers in the Amazon Simple Storage Service
API Reference guide.
Each AllowedHeader string in the rule can contain at most one * wildcard character. For example,
x-amz-* will enable all Amazon-specific headers.
ExposeHeader Element
Each ExposeHeader element identifies a header in the response that you want customers to be able
to access from their applications (for example, from a JavaScript XMLHttpRequest object). For a list
of common Amazon S3 response headers, go to Common Response Headers in the Amazon Simple
Storage Service API Reference guide.
MaxAgeSeconds Element
The MaxAgeSeconds element specifies the time in seconds that your browser can cache the response
for a preflight request as identified by the resource, the HTTP method, and the origin.
How Does Amazon S3 Evaluate the CORS
Configuration On a Bucket
When Amazon S3 receives a preflight request from a browser, it evaluates the CORS configuration for
the bucket and uses the first CORSRule rule that matches the incoming browser request to enable a
cross-origin request. For a rule to match, the following conditions must be met:
• The request's Origin header must match an AllowedOrigin element.
• The request method (for example, GET or PUT) or the Access-Control-Request-Method
header in the case of a preflight OPTIONS request must be one of the AllowedMethod elements.
• Every header listed in the request's Access-Control-Request-Headers header on the preflight
request must match an AllowedHeader element.
Note
The ACLs and policies continue to apply when you enable CORS on the bucket.
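The following is a simplified sketch, in Java, of the rule-matching conditions just described. It is not Amazon S3's implementation, and the class and method names are hypothetical; it only mirrors the three conditions above, where a pattern may contain at most one '*' wildcard.

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class CorsRuleMatchSketch {

    // Treat the pattern literally except for a single '*' wildcard.
    static boolean wildcardMatch(String pattern, String value) {
        String regex = ("\\Q" + pattern + "\\E").replace("*", "\\E.*\\Q");
        return Pattern.matches(regex, value);
    }

    static boolean anyMatch(List<String> patterns, String value) {
        for (String pattern : patterns) {
            if (wildcardMatch(pattern, value)) {
                return true;
            }
        }
        return false;
    }

    static boolean ruleMatches(List<String> allowedOrigins, List<String> allowedMethods,
            List<String> allowedHeaders, String origin, String method,
            List<String> requestHeaders) {
        if (!anyMatch(allowedOrigins, origin)) return false;   // Origin must match an AllowedOrigin
        if (!allowedMethods.contains(method)) return false;    // method must be an AllowedMethod
        for (String header : requestHeaders) {                 // every header needs an AllowedHeader
            if (!anyMatch(allowedHeaders, header)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean match = ruleMatches(
                Arrays.asList("http://*.example.com"),
                Arrays.asList("PUT", "POST", "DELETE"),
                Arrays.asList("*"),
                "http://www.example.com", "PUT",
                Arrays.asList("x-amz-meta-author"));
        System.out.println("Rule matches: " + match); // prints "Rule matches: true"
    }
}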
    Enabling CrossOrigin Resource Sharing (CORS)
    Enable crossorigin resource sharing by setting a CORS configuration on your bucket using the AWS
    Management Console the REST API or the AWS SDKs
    Topics
    • Enabling CrossOrigin Resource Sharing (CORS) Using the AWS Management Console (p 134)
    • Enabling CrossOrigin Resource Sharing (CORS) Using the AWS SDK for Java (p 134)
    • Enabling CrossOrigin Resource Sharing (CORS) Using the AWS SDK for NET (p 138)
    • Enabling CrossOrigin Resource Sharing (CORS) Using the REST API (p 142)
    Enabling CrossOrigin Resource Sharing (CORS) Using the
    AWS Management Console
    You can use the AWS Management Console to set a CORS configuration on your bucket For
    instructions see Editing Bucket Permissions in the Amazon S3 Console User Guide
    Enabling CrossOrigin Resource Sharing (CORS) Using the
    AWS SDK for Java
    You can use the AWS SDK for Java to manage crossorigin resource sharing (CORS) for a bucket For
    more information about CORS see CrossOrigin Resource Sharing (CORS) (p 131)
    This section provides sample code snippets for following tasks followed by a complete example
    program demonstrating all tasks
    • Creating an instance of the Amazon S3 client class
    • Creating and adding a CORS configuration to a bucket
    • Updating an existing CORS configuration
Cross-Origin Resource Sharing Methods
AmazonS3Client(): Constructs an AmazonS3Client object.
setBucketCrossOriginConfiguration(): Sets the CORS configuration to be applied to the bucket. If
a configuration already exists for the specified bucket, the new configuration will replace the existing one.
getBucketCrossOriginConfiguration(): Retrieves the CORS configuration for the specified bucket. If no
configuration has been set for the bucket, the Configuration header in the response will be null.
deleteBucketCrossOriginConfiguration(): Deletes the CORS configuration for the specified bucket.
    For more information about the AWS SDK for Java API go to AWS SDK for Java API Reference
Creating an Instance of the Amazon S3 Client Class
The following snippet creates a new AmazonS3Client instance for a class called CORS_JavaSDK.
This example retrieves the values for accessKey and secretKey from the AwsCredentials.properties
file.
AmazonS3Client client;
client = new AmazonS3Client(new ProfileCredentialsProvider());
Creating and Adding a CORS Configuration to a Bucket
To add a CORS configuration to a bucket:
1. Create a CORSRule object that describes the rule.
2. Create a BucketCrossOriginConfiguration object, and then add the rule to the configuration
object.
3. Add the CORS configuration to the bucket by calling the
client.setBucketCrossOriginConfiguration method.
The following snippet creates two rules, CORSRule1 and CORSRule2, and then adds each rule to the
rules array. By using the rules array, it then adds the rules to the bucket bucketName.
// Add a sample configuration
BucketCrossOriginConfiguration configuration = new BucketCrossOriginConfiguration();

List<CORSRule> rules = new ArrayList<CORSRule>();

CORSRule rule1 = new CORSRule()
    .withId("CORSRule1")
    .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
            CORSRule.AllowedMethods.PUT, CORSRule.AllowedMethods.POST,
            CORSRule.AllowedMethods.DELETE}))
    .withAllowedOrigins(Arrays.asList(new String[] {"http://*.example.com"}));

CORSRule rule2 = new CORSRule()
    .withId("CORSRule2")
    .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
            CORSRule.AllowedMethods.GET}))
    .withAllowedOrigins(Arrays.asList(new String[] {"*"}))
    .withMaxAgeSeconds(3000)
    .withExposedHeaders(Arrays.asList(new String[] {"x-amz-server-side-encryption"}));

configuration.setRules(Arrays.asList(new CORSRule[] {rule1, rule2}));

// Save the configuration
client.setBucketCrossOriginConfiguration(bucketName, configuration);
Updating an Existing CORS Configuration
To update an existing CORS configuration:
1. Get a CORS configuration by calling the client.getBucketCrossOriginConfiguration
method.
2. Update the configuration information by adding or deleting rules in the list of rules.
3. Add the configuration to the bucket by calling the
client.setBucketCrossOriginConfiguration method.
The following snippet gets an existing configuration and then adds a new rule with the ID CORSRule3.
// Get configuration
BucketCrossOriginConfiguration configuration = client.getBucketCrossOriginConfiguration(bucketName);

// Add new rule
CORSRule rule3 = new CORSRule()
    .withId("CORSRule3")
    .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
            CORSRule.AllowedMethods.HEAD}))
    .withAllowedOrigins(Arrays.asList(new String[] {"http://www.example.com"}));

List<CORSRule> rules = configuration.getRules();
rules.add(rule3);
configuration.setRules(rules);

// Save configuration
client.setBucketCrossOriginConfiguration(bucketName, configuration);
    Example Program Listing
    The following Java program incorporates the preceding tasks
    For information about creating and testing a working sample see Testing the Java Code
    Examples (p 564)
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketCrossOriginConfiguration;
import com.amazonaws.services.s3.model.CORSRule;

public class Cors {

    /**
     * @param args
     * @throws IOException
     */
    public static AmazonS3Client client;
    public static String bucketName = "***provide bucket name***";

    public static void main(String[] args) throws IOException {
        client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Create a new configuration request and add two rules.
        BucketCrossOriginConfiguration configuration = new BucketCrossOriginConfiguration();

        List<CORSRule> rules = new ArrayList<CORSRule>();

        CORSRule rule1 = new CORSRule()
            .withId("CORSRule1")
            .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
                    CORSRule.AllowedMethods.PUT,
                    CORSRule.AllowedMethods.POST, CORSRule.AllowedMethods.DELETE}))
            .withAllowedOrigins(Arrays.asList(new String[] {"http://*.example.com"}));

        CORSRule rule2 = new CORSRule()
            .withId("CORSRule2")
            .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
                    CORSRule.AllowedMethods.GET}))
            .withAllowedOrigins(Arrays.asList(new String[] {"*"}))
            .withMaxAgeSeconds(3000)
            .withExposedHeaders(Arrays.asList(new String[] {"x-amz-server-side-encryption"}));

        configuration.setRules(Arrays.asList(new CORSRule[] {rule1, rule2}));

        // Add the configuration to the bucket.
        client.setBucketCrossOriginConfiguration(bucketName, configuration);

        // Retrieve an existing configuration.
        configuration = client.getBucketCrossOriginConfiguration(bucketName);
        printCORSConfiguration(configuration);

        // Add a new rule.
        CORSRule rule3 = new CORSRule()
            .withId("CORSRule3")
            .withAllowedMethods(Arrays.asList(new CORSRule.AllowedMethods[] {
                    CORSRule.AllowedMethods.HEAD}))
            .withAllowedOrigins(Arrays.asList(new String[] {"http://www.example.com"}));

        rules = configuration.getRules();
        rules.add(rule3);
        configuration.setRules(rules);
        client.setBucketCrossOriginConfiguration(bucketName, configuration);
        System.out.format("Added another rule: %s\n", rule3.getId());

        // Verify that the new rule was added.
        configuration = client.getBucketCrossOriginConfiguration(bucketName);
        System.out.format("Expected # of rules = 3, found %s",
            configuration.getRules().size());

        // Delete the configuration.
        client.deleteBucketCrossOriginConfiguration(bucketName);

        // Try to retrieve the configuration.
        configuration = client.getBucketCrossOriginConfiguration(bucketName);
        System.out.println("\nRemoved CORS configuration.");
        printCORSConfiguration(configuration);
    }

    static void printCORSConfiguration(BucketCrossOriginConfiguration configuration) {
        if (configuration == null) {
            System.out.println("\nConfiguration is null.");
            return;
        }

        System.out.format("\nConfiguration has %s rules\n", configuration.getRules().size());
        for (CORSRule rule : configuration.getRules()) {
            System.out.format("Rule ID: %s\n", rule.getId());
            System.out.format("MaxAgeSeconds: %s\n", rule.getMaxAgeSeconds());
            System.out.format("AllowedMethod: %s\n", rule.getAllowedMethods().toArray());
            System.out.format("AllowedOrigins: %s\n", rule.getAllowedOrigins());
            System.out.format("AllowedHeaders: %s\n", rule.getAllowedHeaders());
            System.out.format("ExposeHeader: %s\n", rule.getExposedHeaders());
        }
    }
}
    Enabling CrossOrigin Resource Sharing (CORS) Using the
    AWS SDK for NET
    You can use the AWS SDK for NET to manage crossorigin resource sharing (CORS) for a bucket
    For more information about CORS see CrossOrigin Resource Sharing (CORS) (p 131)
    This section provides sample code for the tasks in the following table followed by a complete example
    program listing
    Managing CrossOrigin Resource Sharing
    1 Create an instance of the AmazonS3Client class
    2 Create a new CORS configuration
    3 Retrieve and modify an existing CORS configuration
    4 Add the configuration to the bucket
Cross-Origin Resource Sharing Methods
AmazonS3Client(): Constructs an AmazonS3Client with the credentials defined in the
App.config file.
PutCORSConfiguration(): Sets the CORS configuration that should be applied to the bucket.
If a configuration already exists for the specified bucket, the new
configuration will replace the existing one.
GetCORSConfiguration(): Retrieves the CORS configuration for the specified bucket. If no
configuration has been set for the bucket, the Configuration header
in the response will be null.
DeleteCORSConfiguration(): Deletes the CORS configuration for the specified bucket.
For more information about the AWS SDK for .NET API, go to Using the AWS SDK for .NET (p 565).
    Creating an Instance of the AmazonS3 Class
    The following sample creates an instance of the AmazonS3Client class
static IAmazonS3 client;
using (client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
Adding a CORS Configuration to a Bucket
To add a CORS configuration to a bucket:
1. Create a CORSConfiguration object describing the rule.
2. Create a PutCORSConfigurationRequest object that provides the bucket name and the CORS
configuration.
3. Add the CORS configuration to the bucket by calling client.PutCORSConfiguration.
The following sample creates two rules, CORSRule1 and CORSRule2, and then adds each rule to the
rules array. By using the rules array, it then adds the rules to the bucket bucketName.
// Add a sample configuration
CORSConfiguration configuration = new CORSConfiguration
{
    Rules = new System.Collections.Generic.List<CORSRule>
    {
        new CORSRule
        {
            Id = "CORSRule1",
            AllowedMethods = new List<string> {"PUT", "POST", "DELETE"},
            AllowedOrigins = new List<string> {"http://*.example.com"}
        },
        new CORSRule
        {
            Id = "CORSRule2",
            AllowedMethods = new List<string> {"GET"},
            AllowedOrigins = new List<string> {"*"},
            MaxAgeSeconds = 3000,
            ExposeHeaders = new List<string> {"x-amz-server-side-encryption"}
        }
    }
};

// Save the configuration
PutCORSConfiguration(configuration);

static void PutCORSConfiguration(CORSConfiguration configuration)
{
    PutCORSConfigurationRequest request = new PutCORSConfigurationRequest
    {
        BucketName = bucketName,
        Configuration = configuration
    };

    var response = client.PutCORSConfiguration(request);
}
Updating an Existing CORS Configuration
To update an existing CORS configuration:
1. Get a CORS configuration by calling the client.GetCORSConfiguration method.
2. Update the configuration information by adding or deleting rules.
3. Add the configuration to the bucket by calling the client.PutCORSConfiguration method.
The following snippet gets an existing configuration and then adds a new rule with the ID NewRule.
// Get configuration
configuration = GetCORSConfiguration();

// Add new rule
configuration.Rules.Add(new CORSRule
{
    Id = "NewRule",
    AllowedMethods = new List<string> { "HEAD" },
    AllowedOrigins = new List<string> { "http://www.example.com" }
});

// Save configuration
PutCORSConfiguration(configuration);
    Example Program Listing
    The following C# program incorporates the preceding tasks
    For information about creating and testing a working sample see Running the Amazon S3 NET Code
    Examples (p 566)
using System;
using System.Configuration;
using System.Collections.Specialized;
using System.Net;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System.Diagnostics;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class CORS
    {
        static string bucketName = "*** Provide bucket name ***";

        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            try
            {
                using (client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
                {
                    // Create a new configuration request and add two rules.
                    CORSConfiguration configuration = new CORSConfiguration
                    {
                        Rules = new System.Collections.Generic.List<CORSRule>
                        {
                            new CORSRule
                            {
                                Id = "CORSRule1",
                                AllowedMethods = new List<string> {"PUT", "POST", "DELETE"},
                                AllowedOrigins = new List<string> {"http://*.example.com"}
                            },
                            new CORSRule
                            {
                                Id = "CORSRule2",
                                AllowedMethods = new List<string> {"GET"},
                                AllowedOrigins = new List<string> {"*"},
                                MaxAgeSeconds = 3000,
                                ExposeHeaders = new List<string> {"x-amz-server-side-encryption"}
                            }
                        }
                    };

                    // Add the configuration to the bucket.
                    PutCORSConfiguration(configuration);

                    // Retrieve an existing configuration.
                    configuration = GetCORSConfiguration();

                    // Add a new rule.
                    configuration.Rules.Add(new CORSRule
                    {
                        Id = "CORSRule3",
                        AllowedMethods = new List<string> { "HEAD" },
                        AllowedOrigins = new List<string> { "http://www.example.com" }
                    });

                    // Add the configuration to the bucket.
                    PutCORSConfiguration(configuration);

                    // Verify that there are now three rules.
                    configuration = GetCORSConfiguration();
                    Console.WriteLine();
                    Console.WriteLine("Expected # of rules = 3; found: {0}", configuration.Rules.Count);
                    Console.WriteLine();
                    Console.WriteLine("Pause before configuration delete. To continue, click Enter...");
                    Console.ReadKey();

                    // Delete the configuration.
                    DeleteCORSConfiguration();

                    // Retrieve a nonexistent configuration.
                    configuration = GetCORSConfiguration();
                    Debug.Assert(configuration == null);
                }

                Console.WriteLine("Example complete.");
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("S3 error occurred. Exception: " + amazonS3Exception.ToString());
                Console.ReadKey();
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
                Console.ReadKey();
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void PutCORSConfiguration(CORSConfiguration configuration)
        {
            PutCORSConfigurationRequest request = new PutCORSConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };

            var response = client.PutCORSConfiguration(request);
        }

        static CORSConfiguration GetCORSConfiguration()
        {
            GetCORSConfigurationRequest request = new GetCORSConfigurationRequest
            {
                BucketName = bucketName
            };
            var response = client.GetCORSConfiguration(request);
            var configuration = response.Configuration;
            PrintCORSRules(configuration);
            return configuration;
        }

        static void DeleteCORSConfiguration()
        {
            DeleteCORSConfigurationRequest request = new DeleteCORSConfigurationRequest
            {
                BucketName = bucketName
            };
            client.DeleteCORSConfiguration(request);
        }

        static void PrintCORSRules(CORSConfiguration configuration)
        {
            Console.WriteLine();

            if (configuration == null)
            {
                Console.WriteLine("\nConfiguration is null");
                return;
            }

            Console.WriteLine("Configuration has {0} rules:", configuration.Rules.Count);
            foreach (CORSRule rule in configuration.Rules)
            {
                Console.WriteLine("Rule ID: {0}", rule.Id);
                Console.WriteLine("MaxAgeSeconds: {0}", rule.MaxAgeSeconds);
                Console.WriteLine("AllowedMethod: {0}", string.Join(", ", rule.AllowedMethods.ToArray()));
                Console.WriteLine("AllowedOrigins: {0}", string.Join(", ", rule.AllowedOrigins.ToArray()));
                Console.WriteLine("AllowedHeaders: {0}", string.Join(", ", rule.AllowedHeaders.ToArray()));
                Console.WriteLine("ExposeHeader: {0}", string.Join(", ", rule.ExposeHeaders.ToArray()));
            }
        }
    }
}
    Enabling CrossOrigin Resource Sharing (CORS) Using the
    REST API
    You can use the AWS Management Console to set CORS configuration on your bucket If your
    application requires it you can also send REST requests directly The following sections in the
    Amazon Simple Storage Service API Reference describe the REST API actions related to the CORS
    configuration
    • PUT Bucket cors
    • GET Bucket cors
    • DELETE Bucket cors
    • OPTIONS object
    Troubleshooting CORS Issues
When you are accessing buckets set with the CORS configuration, if you encounter unexpected
behavior, the following are some troubleshooting actions you can take:
1. Verify that the CORS configuration is set on the bucket.
For instructions, go to Editing Bucket Permissions in the Amazon Simple Storage Service
Console User Guide. If you have the CORS configuration set, the console displays an Edit CORS
Configuration link in the Permissions section of the bucket Properties.
2. Capture the complete request and response using a tool of your choice. For each request Amazon
S3 receives, there must exist one CORS rule that matches the data in your request, as follows:
a. Verify that the request has the Origin header.
If the header is missing, Amazon S3 does not treat the request as a cross-origin request and
does not send CORS response headers back in the response.
b. Verify that the Origin header in your request matches at least one of the AllowedOrigin
elements in the specific CORSRule.
The scheme, the host, and the port values in the Origin request header must match the
AllowedOrigin in the CORSRule. For example, if you set the CORSRule to allow the
origin http://www.example.com, then both https://www.example.com and
http://www.example.com:80 origins in your request do not match the allowed origin in your
configuration.
c. Verify that the Method in your request (or the method specified in the
Access-Control-Request-Method in the case of a preflight request) is one of the AllowedMethod elements in the
same CORSRule.
d. For a preflight request, if the request includes an Access-Control-Request-Headers header,
verify that the CORSRule includes AllowedHeader entries for each value in the
Access-Control-Request-Headers header.
    Operations on Objects
    Amazon S3 enables you to store retrieve and delete objects You can retrieve an entire object or a
    portion of an object If you have enabled versioning on your bucket you can retrieve a specific version
    of the object You can also retrieve a subresource associated with your object and update it where
    applicable You can make a copy of your existing object Depending on the object size the following
    upload and copy related considerations apply
    • Uploading objects—You can upload objects of up to 5 GB in size in a single operation For objects
    greater than 5 GB you must use the multipart upload API
    Using the multipart upload API you can upload objects up to 5 TB each For more information see
    Uploading Objects Using Multipart Upload API (p 165)
    • Copying objects—The copy operation creates a copy of an object that is already stored in Amazon
    S3
    You can create a copy of your object up to 5 GB in size in a single atomic operation However for
    copying an object greater than 5 GB you must use the multipart upload API For more information
    see Copying Objects (p 212)
    You can use the REST API (see Making Requests Using the REST API (p 49)) to work with objects or
    use one of the following AWS SDK libraries
    • AWS SDK for Java
    • AWS SDK for NET
    • AWS SDK for PHP
    These libraries provide a highlevel abstraction that makes working with objects easy However if your
    application requires you can use the REST API directly
    Getting Objects
    Topics
    • Related Resources (p 144)
    • Get an Object Using the AWS SDK for Java (p 144)
    • Get an Object Using the AWS SDK for NET (p 147)
    • Get an Object Using the AWS SDK for PHP (p 150)
    • Get an Object Using the REST API (p 152)
    • Share an Object with Others (p 152)
    You can retrieve objects directly from Amazon S3 You have the following options when retrieving an
    object
    • Retrieve an entire object—A single GET operation can return you the entire object stored in
    Amazon S3
    • Retrieve object in parts—Using the Range HTTP header in a GET request you can retrieve a
    specific range of bytes in an object stored in Amazon S3
    You resume fetching other parts of the object whenever your application is ready This resumable
    download is useful when you need only portions of your object data It is also useful where network
    connectivity is poor and you need to react to failures
    Note
    Amazon S3 doesn't support retrieving multiple ranges of data per GET request
When you retrieve an object, its metadata is returned in the response headers. There are times when
you want to override certain response header values returned in a GET response. For example, you
might override the Content-Disposition response header value in your GET request. The REST
GET Object API (see GET Object) allows you to specify query string parameters in your GET request
to override these values.
The AWS SDKs for Java, .NET, and PHP also provide the necessary objects you can use to specify values
for these response headers in your GET request.
    When retrieving objects that are stored encrypted using serverside encryption you will need to provide
    appropriate request headers For more information see Protecting Data Using Encryption (p 380)
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Get an Object Using the AWS SDK for Java
When you download an object, you get all of the object's metadata and a stream from which to read
the contents. You should read the content of the stream as quickly as possible because the data is
streamed directly from Amazon S3 and your network connection will remain open until you read all the
data or close the input stream.
    Downloading Objects
    1 Create an instance of the AmazonS3Client class
2 Execute one of the AmazonS3Client.getObject() methods You need to provide the
request information such as bucket name and key name You provide this information by
creating an instance of the GetObjectRequest class
    3 Execute one of the getObjectContent() methods on the object returned to get a
    stream on the object data and process the response
    The following Java code sample demonstrates the preceding tasks
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

S3Object object = s3Client.getObject(
        new GetObjectRequest(bucketName, key));
InputStream objectData = object.getObjectContent();
// Process the objectData stream.
objectData.close();
    The GetObjectRequest object provides several options including conditional downloading of objects
    based on modification times ETags and selectively downloading a range of an object The following
    Java code sample demonstrates how you can specify a range of data bytes to retrieve from an object
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

GetObjectRequest rangeObjectRequest = new GetObjectRequest(
        bucketName, key);
rangeObjectRequest.setRange(0, 10); // retrieve 1st 11 bytes.
S3Object objectPortion = s3Client.getObject(rangeObjectRequest);
InputStream objectData = objectPortion.getObjectContent();
// Process the objectData stream.
objectData.close();
    When retrieving an object you can optionally override the response header values (see Getting
    Objects (p 143)) by using the ResponseHeaderOverrides object and setting the corresponding
    request property as shown in the following Java code sample
GetObjectRequest request = new GetObjectRequest(bucketName, key);

ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();
responseHeaders.setCacheControl("No-cache");
responseHeaders.setContentDisposition("attachment; filename=testing.txt");

// Add the ResponseHeaderOverrides to the request.
request.setResponseHeaders(responseHeaders);
    Example
    The following Java code example retrieves an object from a specified Amazon S3 bucket
    For instructions on how to create and test a working sample see Testing the Java Code
    Examples (p 564)
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class GetObject {
    private static String bucketName = "*** provide bucket name ***";
    private static String key        = "*** provide object key ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Downloading an object");
            S3Object s3object = s3Client.getObject(new GetObjectRequest(
                    bucketName, key));
            System.out.println("Content-Type: " +
                    s3object.getObjectMetadata().getContentType());
            displayTextInputStream(s3object.getObjectContent());

            // Get a range of bytes from an object.
            GetObjectRequest rangeObjectRequest = new GetObjectRequest(
                    bucketName, key);
            rangeObjectRequest.setRange(0, 10);
            S3Object objectPortion = s3Client.getObject(rangeObjectRequest);

            System.out.println("Printing bytes retrieved.");
            displayTextInputStream(objectPortion.getObjectContent());

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    private static void displayTextInputStream(InputStream input)
            throws IOException {
        // Read one text line at a time and display.
        BufferedReader reader = new BufferedReader(new InputStreamReader(input));
        while (true) {
            String line = reader.readLine();
            if (line == null) break;
            System.out.println("    " + line);
        }
        System.out.println();
    }
}
    Get an Object Using the AWS SDK for NET
    The following tasks guide you through using the NET classes to retrieve an object or a portion of the
    object and save it locally to a file
    Downloading Objects
    1 Create an instance of the AmazonS3 class
2 Execute one of the AmazonS3.GetObject methods You need to provide information
such as bucket name file path or a stream You provide this information by creating an
instance of the GetObjectRequest class
3 Execute one of the GetObjectResponse.WriteResponseStreamToFile methods to
save the stream to a file
The following C# code sample demonstrates the preceding tasks The example saves the object to a
file on your desktop
static IAmazonS3 client;

using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
{
    GetObjectRequest request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = keyName
    };
    using (GetObjectResponse response = client.GetObject(request))
    {
        string dest = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.Desktop), keyName);
        if (!File.Exists(dest))
        {
            response.WriteResponseStreamToFile(dest);
        }
    }
}
Instead of reading the entire object you can read only a portion of the object data by specifying the
byte range in the request as shown in the following C# code sample
GetObjectRequest request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    ByteRange = new ByteRange(0, 10)
};
When retrieving an object you can optionally override the response header values (see Getting
Objects (p 143)) by using the ResponseHeaderOverrides object and setting the corresponding
request property as shown in the following C# code sample You can use this feature to indicate that the
object should be downloaded into a different file name than the object key name
GetObjectRequest request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName
};

ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();
responseHeaders.CacheControl = "No-cache";
responseHeaders.ContentDisposition = "attachment; filename=testing.txt";
request.ResponseHeaderOverrides = responseHeaders;
Example
The following C# code example retrieves an object from an Amazon S3 bucket From the response
the example reads the object data using the GetObjectResponse.ResponseStream property The
example also shows how you can use the GetObjectResponse.Metadata collection to read object
metadata If the object you retrieve has the x-amz-meta-title metadata the code prints the
metadata value
For instructions on how to create and test a working sample see Running the Amazon S3 NET Code
Examples (p 566)
using System;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class GetObject
    {
        static string bucketName = "*** bucket name ***";
        static string keyName    = "*** object key ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            try
            {
                Console.WriteLine("Retrieving (GET) an object");
                string data = ReadObjectData();
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message,
                    s3Exception.InnerException);
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static string ReadObjectData()
        {
            string responseBody = "";
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                GetObjectRequest request = new GetObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName
                };
                using (GetObjectResponse response = client.GetObject(request))
                using (Stream responseStream = response.ResponseStream)
                using (StreamReader reader = new StreamReader(responseStream))
                {
                    string title = response.Metadata["x-amz-meta-title"];
                    Console.WriteLine("The object's title is {0}", title);
                    responseBody = reader.ReadToEnd();
                }
            }
            return responseBody;
        }
    }
}
    Get an Object Using the AWS SDK for PHP
    This topic guides you through using a class from the AWS SDK for PHP to retrieve an object You can
    retrieve an entire object or specify a byte range to retrieve from the object
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
    Downloading an Object
    1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
    method
2 Execute the Aws\S3\S3Client::getObject() method You must provide a bucket name and
a key name in the array parameter's required keys Bucket and Key
    Instead of retrieving the entire object you can retrieve a specific byte range from the
    object data You provide the range value by specifying the array parameter's Range key
    in addition to the required keys
    You can save the object you retrieved from Amazon S3 to a file in your local file system
    by specifying a file path to where to save the file in the array parameter's SaveAs key in
    addition to the required keys Bucket and Key
    The following PHP code sample demonstrates the preceding tasks for downloading an object
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Get an object.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname
));

// Get a range of bytes from an object.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Range'  => 'bytes=0-99'
));

// Save object to a file.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'SaveAs' => $filepath
));

    When retrieving an object you can optionally override the response header values (see Getting
    Objects (p 143)) by adding the array parameter's response keys ResponseContentType
    ResponseContentLanguage ResponseContentDisposition ResponseCacheControl and
    ResponseExpires to the getObject() method as shown in the following PHP code sample
$result = $s3->getObject(array(
    'Bucket'                     => $bucket,
    'Key'                        => $keyname,
    'ResponseContentType'        => 'text/plain',
    'ResponseContentLanguage'    => 'en-US',
    'ResponseContentDisposition' => 'attachment; filename=testing.txt',
    'ResponseCacheControl'       => 'No-cache',
    'ResponseExpires'            => gmdate(DATE_RFC2822, time() + 3600),
));
    Example of Downloading an Object Using PHP
    The following PHP example retrieves an object and displays object content in the browser The
    example illustrates the use of the getObject() method For information about running the PHP
    examples in this guide go to Running PHP Examples (p 567)
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

try {
    // Get the object.
    $result = $s3->getObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname
    ));

    // Display the object in the browser.
    header("Content-Type: {$result['ContentType']}");
    echo $result['Body'];
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
    Related Resources
    • AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::getObject() Method
    • AWS SDK for PHP for Amazon S3 Downloading Objects
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Get an Object Using the REST API
You can use the AWS SDK to retrieve objects from a bucket However if your application requires
it you can send REST requests directly You can send a GET request to retrieve an object For more
information about the request and response format go to Get Object
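As a rough illustration only (not one of this guide's samples), the following Java sketch retrieves an object over plain HTTPS. It avoids request signing by assuming you already have a presigned URL for the object (see Share an Object with Others (p 152)); the URL shown is a placeholder, and the fragment belongs in a method that declares throws IOException.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Placeholder: a presigned URL that was generated for the object with the GET verb.
URL presignedUrl = new URL(
        "https://examplebucket.s3.amazonaws.com/example.txt?AWSAccessKeyId=...&Signature=...");

HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
connection.setRequestMethod("GET");
System.out.println("HTTP status: " + connection.getResponseCode());

// Read the object data from the response body.
BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(line);
}
reader.close();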
    Share an Object with Others
    Topics
    • Generate a Presigned Object URL using AWS Explorer for Visual Studio (p 152)
    • Generate a Presigned Object URL using AWS SDK for Java (p 152)
    • Generate a Presigned Object URL using AWS SDK for NET (p 155)
    All objects by default are private Only the object owner has permission to access these objects
    However the object owner can optionally share objects with others by creating a presigned URL
    using their own security credentials to grant timelimited permission to download the objects
    When you create a presigned URL for your object you must provide your security credentials specify
    a bucket name an object key specify the HTTP method (GET to download the object) and expiration
    date and time The presigned URLs are valid only for the specified duration
    Anyone who receives the presigned URL can then access the object For example if you have a video
    in your bucket and both the bucket and the object are private you can share the video with others by
    generating a presigned URL
    Note
    Anyone with valid security credentials can create a presigned URL However in order to
    successfully access an object the presigned URL must be created by someone who has
    permission to perform the operation that the presigned URL is based upon
You can generate a presigned URL programmatically using the AWS SDKs for Java and NET
    Generate a Presigned Object URL using AWS Explorer for Visual Studio
    If you are using Visual Studio you can generate a presigned URL for an object without writing any
    code by using AWS Explorer for Visual Studio Anyone with this URL can download the object For
    more information go to Using Amazon S3 from AWS Explorer
    For instructions about how to install the AWS Explorer see Using the AWS SDKs CLI and
    Explorers (p 560)
    Generate a Presigned Object URL using AWS SDK for Java
    The following tasks guide you through using the Java classes to generate a presigned URL
    Downloading Objects
    1 Create an instance of the AmazonS3 class For information about providing credentials
    see Using the AWS SDK for Java (p 563) These credentials are used in creating a
    signature for authentication when you generate a presigned URL
2 Execute the AmazonS3.generatePresignedUrl method to generate a presigned
URL
    You provide information including a bucket name an object key and an expiration date
    by creating an instance of the GeneratePresignedUrlRequest class The request
    by default sets the verb to GET To use the presigned URL for other operations for
    example PUT you must explicitly set the verb when you create the request
    The following Java code sample demonstrates the preceding tasks
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

java.util.Date expiration = new java.util.Date();
long msec = expiration.getTime();
msec += 1000 * 60 * 60; // 1 hour.
expiration.setTime(msec);

GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey);
generatePresignedUrlRequest.setMethod(HttpMethod.GET); // Default.
generatePresignedUrlRequest.setExpiration(expiration);

URL s = s3client.generatePresignedUrl(generatePresignedUrlRequest);
Example
The following Java code example generates a presigned URL that you can give to others so that they
can retrieve the object You can use the generated presigned URL to retrieve the object To use the
presigned URL for other operations such as putting an object you must explicitly set the verb in the
GeneratePresignedUrlRequest For instructions about how to create and test a working sample see
Testing the Java Code Examples (p 564)
import java.io.IOException;
import java.net.URL;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePreSignedUrl {
    private static String bucketName = "*** Provide a bucket name ***";
    private static String objectKey  = "*** Provide an object key ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

        try {
            System.out.println("Generating pre-signed URL.");
            java.util.Date expiration = new java.util.Date();
            long milliSeconds = expiration.getTime();
            milliSeconds += 1000 * 60 * 60; // Add 1 hour.
            expiration.setTime(milliSeconds);

            GeneratePresignedUrlRequest generatePresignedUrlRequest =
                    new GeneratePresignedUrlRequest(bucketName, objectKey);
            generatePresignedUrlRequest.setMethod(HttpMethod.GET);
            generatePresignedUrlRequest.setExpiration(expiration);

            URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
            System.out.println("Pre-Signed URL = " + url.toString());
        } catch (AmazonServiceException exception) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:  " + exception.getMessage());
            System.out.println("HTTP Code:      " + exception.getStatusCode());
            System.out.println("AWS Error Code: " + exception.getErrorCode());
            System.out.println("Error Type:     " + exception.getErrorType());
            System.out.println("Request ID:     " + exception.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to communicate " +
                    "with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Generate a Presigned Object URL using AWS SDK for NET
    The following tasks guide you through using the NET classes to generate a presigned URL
    Downloading Objects
    1 Create an instance of the AmazonS3 class For information about providing your
    credentials see Using the AWS SDK for NET (p 565) These credentials are used in
    creating a signature for authentication when you generate a presigned URL
2 Execute the AmazonS3.GetPreSignedURL method to generate a presigned URL
    You provide information including a bucket name an object key and an expiration date
    by creating an instance of the GetPreSignedUrlRequest class
    The following C# code sample demonstrates the preceding tasks
static IAmazonS3 s3Client;
s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest()
{
    BucketName = bucketName,
    Key = objectKey,
    Expires = DateTime.Now.AddMinutes(5)
};
string url = s3Client.GetPreSignedURL(request1);
    Example
    The following C# code example generates a presigned URL for a specific object For instructions
    about how to create and test a working sample see Running the Amazon S3 NET Code
    Examples (p 566)
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class GeneratePresignedURL
    {
        static string bucketName = "*** Provide a bucket name ***";
        static string objectKey  = "*** Provide an object name ***";
        static IAmazonS3 s3Client;

        public static void Main(string[] args)
        {
            using (s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                string urlString = GeneratePreSignedURL();
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static string GeneratePreSignedURL()
        {
            string urlString = "";
            GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
            {
                BucketName = bucketName,
                Key = objectKey,
                Expires = DateTime.Now.AddMinutes(5)
            };

            try
            {
                urlString = s3Client.GetPreSignedURL(request1);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                     amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when listing objects",
                        amazonS3Exception.Message);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
            return urlString;
        }
    }
}
    Uploading Objects
    Depending on the size of the data you are uploading Amazon S3 offers the following options
    • Upload objects in a single operation—With a single PUT operation you can upload objects up to 5
    GB in size
    For more information see Uploading Objects in a Single Operation (p 157)
    • Upload objects in parts—Using the Multipart upload API you can upload large objects up to 5 TB
    The Multipart Upload API is designed to improve the upload experience for larger objects You can
    upload objects in parts These object parts can be uploaded independently in any order and in
    parallel You can use a Multipart Upload for objects from 5 MB to 5 TB in size For more information
    see Uploading Objects Using Multipart Upload API (p 165)
    We encourage Amazon S3 customers to use Multipart Upload for objects greater than 100 MB
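As a rough sketch of that guidance (assuming the AWS SDK for Java; the bucket name, key, file path, and the 100 MB threshold are placeholders), an application might choose between a single PUT and the TransferManager, which performs a multipart upload for large files:

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
File file = new File("*** path to file ***");
long multipartThreshold = 100L * 1024 * 1024; // 100 MB, per the guidance above

if (file.length() < multipartThreshold) {
    // Small object: a single PUT is simplest.
    s3Client.putObject("examplebucket", "examplekey", file);
} else {
    // Large object: TransferManager uploads the parts concurrently.
    TransferManager tm = new TransferManager(s3Client);
    Upload upload = tm.upload("examplebucket", "examplekey", file);
    upload.waitForCompletion(); // throws InterruptedException; place in a method that declares it
    tm.shutdownNow(false);      // release TransferManager threads, keep the client open
}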
    Topics
    • Uploading Objects in a Single Operation (p 157)
    • Uploading Objects Using Multipart Upload API (p 165)
    • Uploading Objects Using PreSigned URLs (p 206)
    When uploading objects you optionally request Amazon S3 to encrypt your object before saving it
    on disks in its data centers and decrypt it when you download the objects For more information see
    Protecting Data Using Encryption (p 380)
    Related Topics
    • Using the AWS SDKs CLI and Explorers (p 560)
    Uploading Objects in a Single Operation
    Topics
    • Upload an Object Using the AWS SDK for Java (p 157)
    • Upload an Object Using the AWS SDK for NET (p 159)
    • Upload an Object Using the AWS SDK for PHP (p 161)
    • Upload an Object Using the AWS SDK for Ruby (p 163)
    • Upload an Object Using the REST API (p 164)
    You can use the AWS SDK to upload objects The SDK provides wrapper libraries for you to upload
    data easily However if your application requires it you can use the REST API directly in your
    application
    Upload an Object Using the AWS SDK for Java
    The following tasks guide you through using the Java classes to upload a file The API provides several
    variations called overloads of the putObject method to easily upload your data
    Uploading Objects
    1 Create an instance of the AmazonS3Client
2 Execute one of the AmazonS3Client.putObject overloads depending on whether you
are uploading data from a file or a stream
    The following Java code sample demonstrates the preceding tasks
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.putObject(new PutObjectRequest(bucketName, keyName, file));
    Example
    The following Java code example uploads a file to an Amazon S3 bucket For instructions on how to
    create and test a working sample see Testing the Java Code Examples (p 564)
import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class UploadObjectSingleOperation {
    private static String bucketName     = "*** Provide bucket name ***";
    private static String keyName        = "*** Provide key ***";
    private static String uploadFileName = "*** Provide file name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);
            s3client.putObject(new PutObjectRequest(
                    bucketName, keyName, file));
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Upload an Object Using the AWS SDK for NET
    The tasks in the following process guide you through using the NET classes to upload an object The
    API provides several variations overloads of the PutObject method to easily upload your data
    Uploading Objects
    1 Create an instance of the AmazonS3 class
2 Execute one of the AmazonS3.PutObject methods You need to provide information such as a
bucket name file path or a stream You provide this information by creating an instance
of the PutObjectRequest class
    The following C# code sample demonstrates the preceding tasks
static IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    Key = keyName,
    FilePath = filePath
};
PutObjectResponse response2 = client.PutObject(request);
    Example
    The following C# code example uploads an object The object data is provided as a text string in the
    code The example uploads the object twice
    • The first PutObjectRequest specifies only the bucket name key name and a text string
    embedded in the code as sample object data
    • The second PutObjectRequest provides additional information including the optional object
    metadata and a ContentType header The request specifies a file name to upload
Each successive call to AmazonS3.PutObject replaces the previous upload For instructions on how
to create and test a working sample see Running the Amazon S3 NET Code Examples (p 566)
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class UploadObject
    {
        static string bucketName = "*** bucket name ***";
        static string keyName    = "*** key name when object is created ***";
        static string filePath   = "*** absolute path to a sample file to upload ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                Console.WriteLine("Uploading an object");
                WritingAnObject();
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void WritingAnObject()
        {
            try
            {
                PutObjectRequest putRequest1 = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    ContentBody = "sample text"
                };
                PutObjectResponse response1 = client.PutObject(putRequest1);

                // 2. Put the object: set ContentType and add metadata.
                PutObjectRequest putRequest2 = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    FilePath = filePath,
                    ContentType = "text/plain"
                };
                putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");

                PutObjectResponse response2 = client.PutObject(putRequest2);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                     amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
    Upload an Object Using the AWS SDK for PHP
    This topic guides you through using classes from the AWS SDK for PHP to upload an object of up to
    5 GB in size For larger files you must use multipart upload API For more information see Uploading
    Objects Using Multipart Upload API (p 165)
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
    Uploading Objects
    1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
    method
2 Execute the Aws\S3\S3Client::putObject() method You must provide a bucket name and
a key name in the array parameter's required keys Bucket and Key
    If you are uploading a file you specify the file name by adding the array parameter with
    the SourceFile key You can also provide the optional object metadata using the array
    parameter
    The following PHP code sample demonstrates how to create an object by uploading a file specified in
    the SourceFile key in the putObject method's array parameter
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
// $filepath should be an absolute path to a file on disk.
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Upload a file.
$result = $s3->putObject(array(
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'SourceFile'   => $filepath,
    'ContentType'  => 'text/plain',
    'ACL'          => 'public-read',
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => array(
        'param1' => 'value 1',
        'param2' => 'value 2'
    )
));

echo $result['ObjectURL'];
    Instead of specifying a file name you can provide object data inline by specifying the array parameter
    with the Body key as shown in the following PHP code example
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Upload data.
$result = $s3->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body'   => 'Hello, world!'
));

echo $result['ObjectURL'];
    Example of Creating an Object in an Amazon S3 bucket by Uploading Data
    The following PHP example creates an object in a specified bucket by uploading data using the
    putObject() method For information about running the PHP examples in this guide go to Running
    PHP Examples (p 567)
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Body'   => 'Hello, world!',
        'ACL'    => 'public-read'
    ));

    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
    Related Resources
    • AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putObject() Method
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Upload an Object Using the AWS SDK for Ruby
    The following tasks guide you through using a Ruby script to upload an object for either version of the
    SDK for Ruby
    Using AWS SDK for Ruby Version 2
    The AWS SDK for Ruby Version 2 has two ways of uploading an object to Amazon S3 The first is a
    managed file uploader which makes it easy to upload files of any size from disk
    Uploading a File
1 Create an instance of the Aws::S3::Resource class
2 Reference the target object by bucket name and key
3 Call #upload_file on the object
require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/source/file')
The second way that the SDK for Ruby Version 2 can upload an object is to use the #put method of
Aws::S3::Object This is useful if the object is a string or an IO object that is not a file on disk
    Put Object
1 Create an instance of the Aws::S3::Resource class
2 Reference the target object by bucket name and key
3 Call #put passing in the string or IO object
require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('bucket-name').object('key')

# String data
obj.put(body: 'Hello World!')

# IO object
File.open('source', 'rb') do |file|
  obj.put(body: file)
end
    Using AWS SDK for Ruby Version 1
    The API provides a #write method that can take options that you can use to specify how to upload
    your data
    Uploading Objects SDK for Ruby Version 1
1 Create an instance of the AWS::S3 class by providing your AWS credentials
2 Use the AWS::S3::S3Object#write method which takes a data parameter and options
hash which allows you to upload data from a file or a stream
The following code sample for the SDK for Ruby Version 1 demonstrates the preceding tasks and
uses the options hash :file to specify the path to the file to upload
s3 = AWS::S3.new

# Upload a file.
key = File.basename(file_name)
s3.buckets[bucket_name].objects[key].write(:file => file_name)
    Example
    The following SDK for Ruby Version 1 script example uploads a file to an Amazon S3 bucket For
    instructions about how to create and test a working sample see Using the AWS SDK for Ruby
    Version 2 (p 568)
#!/usr/bin/env ruby

require 'rubygems'
require 'aws-sdk'

bucket_name = '*** Provide bucket name ***'
file_name   = '*** Provide file name ***'

# Get an instance of the S3 interface.
s3 = AWS::S3.new

# Upload a file.
key = File.basename(file_name)
s3.buckets[bucket_name].objects[key].write(:file => file_name)
puts "Uploading file #{file_name} to bucket #{bucket_name}."
    Upload an Object Using the REST API
    You can use AWS SDK to upload an object However if your application requires it you can send
    REST requests directly You can send a PUT request to upload data in a single operation For more
    information see PUT Object
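As a rough illustration only (not one of this guide's samples), the following Java sketch uploads a small object with a plain HTTPS PUT. It assumes you already have a presigned URL that was generated with the PUT verb (see Uploading Objects Using PreSigned URLs (p 206)); the URL and the object data are placeholders, and the fragment belongs in a method that declares throws IOException.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Placeholder: a presigned URL that was generated for the object with the PUT verb.
URL presignedUrl = new URL(
        "https://examplebucket.s3.amazonaws.com/example.txt?AWSAccessKeyId=...&Signature=...");

HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");

// Write the object data as the request body.
OutputStream out = connection.getOutputStream();
out.write("Hello from a REST PUT request".getBytes("UTF-8"));
out.close();

// A 200 OK response indicates that Amazon S3 created the object.
System.out.println("HTTP status: " + connection.getResponseCode());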
    Uploading Objects Using Multipart Upload API
    Topics
    • Multipart Upload Overview (p 165)
    • Using the AWS Java SDK for Multipart Upload (HighLevel API) (p 172)
    • Using the AWS Java SDK for Multipart Upload (LowLevel API) (p 177)
    • Using the AWS NET SDK for Multipart Upload (HighLevel API) (p 181)
    • Using the AWS NET SDK for Multipart Upload (LowLevel API) (p 190)
    • Using the AWS PHP SDK for Multipart Upload (HighLevel API) (p 196)
    • Using the AWS PHP SDK for Multipart Upload (LowLevel API) (p 200)
    • Using the AWS SDK for Ruby for Multipart Upload (p 204)
    • Using the REST API for Multipart Upload (p 205)
    Multipart upload allows you to upload a single object as a set of parts Each part is a contiguous portion
    of the object's data You can upload these object parts independently and in any order If transmission
    of any part fails you can retransmit that part without affecting other parts After all parts of your object
    are uploaded Amazon S3 assembles these parts and creates the object In general when your object
    size reaches 100 MB you should consider using multipart uploads instead of uploading the object in a
    single operation
    Using multipart upload provides the following advantages
    • Improved throughput—You can upload parts in parallel to improve throughput
    • Quick recovery from any network issues—Smaller part size minimizes the impact of restarting a
    failed upload due to a network error
    • Pause and resume object uploads—You can upload object parts over time Once you initiate a
    multipart upload there is no expiry you must explicitly complete or abort the multipart upload
    • Begin an upload before you know the final object size—You can upload an object as you are
    creating it
    For more information see Multipart Upload Overview (p 165)
    Multipart Upload Overview
    Topics
    • Concurrent Multipart Upload Operations (p 167)
    • Multipart Upload and Pricing (p 167)
    • Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p 167)
    • Quick Facts (p 169)
    • API Support for Multipart Upload (p 169)
    • Multipart Upload API and Permissions (p 169)
    The Multipart upload API enables you to upload large objects in parts You can use this API to upload
    new large objects or make a copy of an existing object (see Operations on Objects (p 142))
    Multipart uploading is a threestep process You initiate the upload you upload the object parts and
    after you have uploaded all the parts you complete the multipart upload Upon receiving the complete
    multipart upload request Amazon S3 constructs the object from the uploaded parts and you can then
    access the object just as you would any other object in your bucket
You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload Each of these operations is explained in this section
    Multipart Upload Initiation
    When you send a request to initiate a multipart upload Amazon S3 returns a response with an upload
    ID which is a unique identifier for your multipart upload You must include this upload ID whenever
    you upload parts list the parts complete an upload or abort an upload If you want to provide any
    metadata describing the object being uploaded you must provide it in the request to initiate multipart
    upload
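A minimal sketch of the initiation step, assuming the low-level multipart upload API of the AWS SDK for Java; the bucket name examplebucket, the key examplekey, and the metadata value are placeholders:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.ObjectMetadata;

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

// Any object metadata must be supplied here, in the initiate request.
ObjectMetadata metadata = new ObjectMetadata();
metadata.addUserMetadata("title", "someTitle");

InitiateMultipartUploadResult initResult = s3Client.initiateMultipartUpload(
        new InitiateMultipartUploadRequest("examplebucket", "examplekey", metadata));

// The upload ID must accompany every later part upload, list, complete, or abort request.
String uploadId = initResult.getUploadId();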
    Parts Upload
    When uploading a part in addition to the upload ID you must specify a part number You can choose
    any part number between 1 and 10000 A part number uniquely identifies a part and its position in
    the object you are uploading If you upload a new part using the same part number as a previously
    uploaded part the previously uploaded part is overwritten Whenever you upload a part Amazon S3
    returns an ETag header in its response For each part upload you must record the part number and
    the ETag value You need to include these values in the subsequent request to complete the multipart
    upload
    Note
    After you initiate a multipart upload and upload one or more parts you must either complete
    or abort the multipart upload in order to stop getting charged for storage of the uploaded parts
    Only after you either complete or abort a multipart upload will Amazon S3 free up the parts
    storage and stop charging you for the parts storage
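Continuing the initiation sketch above (it reuses the s3Client and uploadId variables; the file path and the 5 MB part size are placeholder choices), uploading the parts and recording each part number and ETag might look like the following:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

File file = new File("*** path to a large file ***");
long partSize = 5L * 1024 * 1024;                  // 5 MB; only the last part may be smaller
List<PartETag> partETags = new ArrayList<PartETag>();

long filePosition = 0;
for (int partNumber = 1; filePosition < file.length(); partNumber++) {
    long currentPartSize = Math.min(partSize, file.length() - filePosition);

    UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName("examplebucket")
            .withKey("examplekey")
            .withUploadId(uploadId)                // from the initiate response
            .withPartNumber(partNumber)            // any value from 1 to 10,000
            .withFile(file)
            .withFileOffset(filePosition)
            .withPartSize(currentPartSize);

    UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
    partETags.add(uploadResult.getPartETag());     // record the part number and ETag pair
    filePosition += currentPartSize;
}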
    Multipart Upload Completion (or Abort)
    When you complete a multipart upload Amazon S3 creates an object by concatenating the parts in
    ascending order based on the part number If any object metadata was provided in the initiate multipart
    upload request Amazon S3 associates that metadata with the object After a successful complete
    request the parts no longer exist Your complete multipart upload request must include the upload
    ID and a list of both part numbers and corresponding ETag values Amazon S3 response includes an
    ETag that uniquely identifies the combined object data This ETag will not necessarily be an MD5 hash
    of the object data You can optionally abort the multipart upload After aborting a multipart upload you
    cannot upload any part using that upload ID again All storage that any parts from the aborted multipart
    upload consumed is then freed If any part uploads were inprogress they can still succeed or fail even
    after you aborted To free all storage consumed by all parts you must abort a multipart upload only
    after all part uploads have completed
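Continuing the same sketch (it reuses s3Client, uploadId, and the recorded partETags), the complete and abort calls might look like the following:

import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;

// Complete the upload: supply the upload ID plus the recorded part numbers and ETags.
CompleteMultipartUploadResult completeResult = s3Client.completeMultipartUpload(
        new CompleteMultipartUploadRequest("examplebucket", "examplekey", uploadId, partETags));
System.out.println("Object ETag: " + completeResult.getETag());

// Or abort the upload instead, which frees the storage used by any uploaded parts.
s3Client.abortMultipartUpload(
        new AbortMultipartUploadRequest("examplebucket", "examplekey", uploadId));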
    Multipart Upload Listings
    You can list the parts of a specific multipart upload or all inprogress multipart uploads The list parts
    operation returns the parts information that you have uploaded for a specific multipart upload For each
    list parts request Amazon S3 returns the parts information for the specified multipart upload up to a
    maximum of 1000 parts If there are more than 1000 parts in the multipart upload you must send a
    series of list part requests to retrieve all the parts Note that the returned list of parts doesn't include
    parts that haven't completed uploading
    Note
    Only use the returned listing for verification You should not use the result of this listing
    when sending a complete multipart upload request Instead maintain your own list of the
    part numbers you specified when uploading parts and the corresponding ETag values that
    Amazon S3 returns
    Using the list multipart uploads operation you can obtain a list of multipart uploads in progress An in
    progress multipart upload is an upload that you have initiated but have not yet completed or aborted
    Each request returns at most 1000 multipart uploads If there are more than 1000 multipart uploads in
    progress you need to send additional requests to retrieve the remaining multipart uploads
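A minimal sketch of both listing operations, again reusing s3Client and uploadId from the earlier sketches; pagination beyond 1,000 results is not handled here:

import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.ListPartsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;
import com.amazonaws.services.s3.model.MultipartUploadListing;
import com.amazonaws.services.s3.model.PartListing;
import com.amazonaws.services.s3.model.PartSummary;

// List the parts uploaded so far for one multipart upload (up to 1,000 per request).
PartListing partListing = s3Client.listParts(
        new ListPartsRequest("examplebucket", "examplekey", uploadId));
for (PartSummary part : partListing.getParts()) {
    System.out.println("Part " + part.getPartNumber() + ", ETag " + part.getETag());
}

// List all in-progress multipart uploads in the bucket (up to 1,000 per request).
MultipartUploadListing uploadListing = s3Client.listMultipartUploads(
        new ListMultipartUploadsRequest("examplebucket"));
for (MultipartUpload upload : uploadListing.getMultipartUploads()) {
    System.out.println("Key " + upload.getKey() + ", upload ID " + upload.getUploadId());
}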
    Concurrent Multipart Upload Operations
    In a distributed development environment it is possible for your application to initiate several updates
    on the same object at the same time Your application might initiate several multipart uploads using
    the same object key For each of these uploads your application can then upload parts and send
    a complete upload request to Amazon S3 to create the object When the buckets have versioning
    enabled completing a multipart upload always creates a new version For buckets that do not have
    versioning enabled it is possible that some other request received between the time when a multipart
    upload is initiated and when it is completed might take precedence
    Note
    It is possible for some other request received between the time you initiated a multipart upload
    and completed it to take precedence For example if another operation deletes a key after
    you initiate a multipart upload with that key but before you complete it the complete multipart
    upload response might indicate a successful object creation without you ever seeing the
    object
    Multipart Upload and Pricing
    Once you initiate a multipart upload Amazon S3 retains all the parts until you either complete or
    abort the upload Throughout its lifetime you are billed for all storage bandwidth and requests for
    this multipart upload and its associated parts If you abort the multipart upload Amazon S3 deletes
    upload artifacts and any parts that you have uploaded and you are no longer billed for them For more
    information about pricing see Amazon S3 Pricing
    Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy
    After you initiate a multipart upload you begin uploading parts Amazon S3 stores these parts but it
    creates the object from the parts only after you upload all of them and send a successful request
    to complete the multipart upload (you should verify that your request to complete multipart upload is
    successful) Upon receiving the complete multipart upload request Amazon S3 assembles the parts
    and creates an object
    If you don't send the complete multipart upload request successfully Amazon S3 will not assemble
    the parts and will not create any object Therefore the parts remain in Amazon S3 and you pay for the
    parts that are stored in Amazon S3 As a best practice we recommend you configure a lifecycle rule
    (using the AbortIncompleteMultipartUpload action) to minimize your storage costs
    Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to abort multipart
    uploads that don't complete within a specified number of days after being initiated When a multipart
    upload is not completed within the time frame it becomes eligible for an abort operation and Amazon
    S3 aborts the multipart upload (and deletes the parts associated with the multipart upload)
    The following is an example lifecycle configuration that specifies a rule with the
    AbortIncompleteMultipartUpload action


<LifecycleConfiguration>
    <Rule>
        <ID>sample-rule</ID>
        <Prefix></Prefix>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
            <DaysAfterInitiation>7</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>
    In the example the rule does not specify a value for the Prefix element (object key name prefix) and
    therefore it applies to all objects in the bucket for which you initiated multipart uploads Any multipart
    uploads that were initiated and did not complete within seven days become eligible for an abort
    operation (the action has no effect on completed multipart uploads)
    For more information about the bucket lifecycle configuration see Object Lifecycle
    Management (p 109)
    Note
If a multipart upload is completed within the number of days specified in the rule the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is Amazon S3
will not take any action) Also this action applies only to incomplete multipart uploads;
no objects are deleted by this lifecycle action
The following put-bucket-lifecycle CLI command adds the lifecycle configuration for the
specified bucket
aws s3api put-bucket-lifecycle  \
        --bucket bucketname  \
        --lifecycle-configuration filename-containing-lifecycle-configuration
    To test the CLI command do the following
    1 Set up the AWS CLI For instructions see Set Up the AWS CLI (p 562)
2 Save the following example lifecycle configuration in a file (lifecycle.json) The example
configuration specifies an empty prefix and therefore it applies to all objects in the bucket You can
specify a prefix to restrict the policy to a subset of objects
{
    "Rules": [
        {
            "ID": "Test Rule",
            "Status": "Enabled",
            "Prefix": "",
            "AbortIncompleteMultipartUpload": {
                "DaysAfterInitiation": 7
            }
        }
    ]
}
    3 Run the following CLI command to set lifecycle configuration on your bucket
aws s3api put-bucket-lifecycle  \
        --bucket bucketname  \
        --lifecycle-configuration file://lifecycle.json
    4 To verify retrieve the lifecycle configuration using the getbucketlifecycle CLI command
aws s3api get-bucket-lifecycle  \
        --bucket bucketname
    5 To delete the lifecycle configuration use the deletebucketlifecycle CLI command
aws s3api delete-bucket-lifecycle  \
        --bucket bucketname
    Quick Facts
    The following table provides multipart upload core specifications For more information see Multipart
    Upload Overview (p 165)
    Item Specification
    Maximum object size 5 TB
    Maximum number of parts per upload 10000
    Part numbers 1 to 10000 (inclusive)
Part size 5 MB to 5 GB; the last part can be less than 5 MB
    Maximum number of parts returned
    for a list parts request
    1000
    Maximum number of multipart
    uploads returned in a list multipart
    uploads request
    1000
    API Support for Multipart Upload
    You can use an AWS SDK to upload an object in parts The following AWS SDK libraries support
    multipart upload
    • AWS SDK for Java
    • AWS SDK for NET
    • AWS SDK for PHP
    These libraries provide a highlevel abstraction that makes uploading multipart objects easy However
    if your application requires you can use the REST API directly The following sections in the Amazon
    Simple Storage Service API Reference describe the REST API for multipart upload
    • Initiate Multipart Upload
    • Upload Part
    • Upload Part (Copy)
    • Complete Multipart Upload
    • Abort Multipart Upload
    • List Parts
    • List Multipart Uploads
    Multipart Upload API and Permissions
    An individual must have the necessary permissions to use the multipart upload operations You can
    use ACLs the bucket policy or the user policy to grant individuals permissions to perform these
    operations The following table lists the required permissions for various multipart upload operations
    when using ACLs bucket policy or the user policy
Action Required Permissions

Initiate Multipart Upload
    You must be allowed to perform the s3:PutObject action on an object to initiate a
    multipart upload.
    The bucket owner can allow other principals to perform the s3:PutObject action.

Initiator
    Container element that identifies who initiated the multipart upload. If the initiator is
    an AWS account, this element provides the same information as the Owner element.
    If the initiator is an IAM User, this element provides the user ARN and display name.

Upload Part
    You must be allowed to perform the s3:PutObject action on an object to upload a
    part.
    Only the initiator of a multipart upload can upload parts. The bucket owner must
    allow the initiator to perform the s3:PutObject action on an object in order for the
    initiator to upload a part for that object.

Upload Part (Copy)
    You must be allowed to perform the s3:PutObject action on an object to upload
    a part. Because you are uploading a part from an existing object, you must be
    allowed s3:GetObject on the source object.
    Only the initiator of a multipart upload can upload parts. The bucket owner must
    allow the initiator to perform the s3:PutObject action on an object in order for the
    initiator to upload a part for that object.

Complete Multipart Upload
    You must be allowed to perform the s3:PutObject action on an object to complete
    a multipart upload.
    Only the initiator of a multipart upload can complete that multipart upload. The
    bucket owner must allow the initiator to perform the s3:PutObject action on an
    object in order for the initiator to complete a multipart upload for that object.

Abort Multipart Upload
    You must be allowed to perform the s3:AbortMultipartUpload action to abort a
    multipart upload.
    By default, the bucket owner and the initiator of the multipart upload are allowed to
    perform this action. If the initiator is an IAM user, that user's AWS account is also
    allowed to abort that multipart upload.
    In addition to these defaults, the bucket owner can allow other principals to perform
    the s3:AbortMultipartUpload action on an object. The bucket owner can deny
    any principal the ability to perform the s3:AbortMultipartUpload action.

List Parts
    You must be allowed to perform the s3:ListMultipartUploadParts action to
    list parts in a multipart upload.
    By default, the bucket owner has permission to list parts for any multipart upload to
    the bucket. The initiator of the multipart upload has the permission to list parts of the
    specific multipart upload. If the multipart upload initiator is an IAM user, the AWS
    account controlling that IAM user also has permission to list parts of that upload.
    In addition to these defaults, the bucket owner can allow other principals to perform
    the s3:ListMultipartUploadParts action on an object. The bucket owner can
    also deny any principal the ability to perform the s3:ListMultipartUploadParts
    action.

List Multipart Uploads
    You must be allowed to perform the s3:ListBucketMultipartUploads action on
    a bucket to list multipart uploads in progress to that bucket.
    In addition to the default, the bucket owner can allow other principals to perform the
    s3:ListBucketMultipartUploads action on the bucket.
    For information on the relationship between ACL permissions and permissions in access policies see
    Mapping of ACL Permissions and Access Policy Permissions (p 366) For information on IAM users
    go to Working with Users and Groups
    Using the AWS Java SDK for Multipart Upload (HighLevel API)
    Topics
    • Upload a File (p 172)
    • Abort Multipart Uploads (p 173)
    • Track Multipart Upload Progress (p 174)
    The AWS SDK for Java exposes a highlevel API that simplifies multipart upload (see Uploading
    Objects Using Multipart Upload API (p 165)) You can upload data from a file or a stream You
    can also set advanced options such as the part size you want to use for the multipart upload or
    the number of threads you want to use when uploading the parts concurrently You can also set
    optional object properties the storage class or ACL You use the PutObjectRequest and the
    TransferManagerConfiguration classes to set these advanced options The TransferManager
    class of the Java API provides the highlevel API for you to upload data
When possible TransferManager attempts to use multiple threads to upload multiple parts of a
single upload at once When dealing with large content sizes and high bandwidth this can
significantly increase throughput
    In addition to file upload functionality the TransferManager class provides a method for you to abort
    multipart upload in progress You must provide a Date value and then the API aborts all the multipart
    uploads that were initiated before the specified date
Upload a File
The following tasks guide you through using the high-level Java classes to upload a file. The API
provides several variations, called overloads, of the upload method to easily upload your data.

High-Level API File Uploading Process
1. Create an instance of the TransferManager class.
2. Execute one of the TransferManager.upload overloads, depending on whether you
   are uploading data from a file or a stream.

The following Java code example demonstrates the preceding tasks.
Example
The following Java code example uploads a file to an Amazon S3 bucket. For instructions on how to
create and test a working sample, see Testing the Java Code Examples (p. 564).

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class UploadObjectMultipartUploadUsingHighLevelAPI {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide existing bucket name ***";
        String keyName            = "*** Provide object key ***";
        String filePath           = "*** Path to and name of the file to upload ***";

        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

        // TransferManager processes all transfers asynchronously,
        // so this call will return immediately.
        Upload upload = tm.upload(
            existingBucketName, keyName, new File(filePath));

        try {
            // Or you can block and wait for the upload to finish.
            upload.waitForCompletion();
            System.out.println("Upload complete.");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
Abort Multipart Uploads
The TransferManager class provides a method, abortMultipartUploads, to abort multipart
uploads in progress. An upload is considered to be in progress once you initiate it and until you
complete it or abort it. You provide a Date value, and this API aborts all the multipart uploads on that
bucket that were initiated before the specified Date and are still in progress.

Because you are billed for all storage associated with uploaded parts (see Multipart Upload and
Pricing (p. 167)), it is important that you either complete the multipart upload to have the object created
or abort the multipart upload to remove any uploaded parts.

The following tasks guide you through using the high-level Java classes to abort multipart uploads.

High-Level API Multipart Uploads Aborting Process
1. Create an instance of the TransferManager class.
2. Execute the TransferManager.abortMultipartUploads method by passing the
   bucket name and a Date value.
The following Java code example demonstrates the preceding tasks.

Example
The following Java code aborts all multipart uploads in progress that were initiated on a specific bucket
over a week ago. For instructions on how to create and test a working sample, see Testing the Java
Code Examples (p. 564).

import java.util.Date;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;

public class AbortMPUUsingHighLevelAPI {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide existing bucket name ***";

        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

        int sevenDays = 1000 * 60 * 60 * 24 * 7;
        Date oneWeekAgo = new Date(System.currentTimeMillis() - sevenDays);

        try {
            tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to abort multipart uploads.");
            amazonClientException.printStackTrace();
        }
    }
}
Note
You can also abort a specific multipart upload. For more information, see Abort a Multipart
Upload (p. 180).
Track Multipart Upload Progress
The high-level multipart upload API provides a listener interface, ProgressListener, to track the
upload progress when uploading data using the TransferManager class. To use the event in
your code, you must import the com.amazonaws.event.ProgressEvent and
com.amazonaws.event.ProgressListener types.

Progress events occur periodically and notify the listener that bytes have been transferred.

The following Java code sample demonstrates how you can subscribe to the ProgressEvent event
and write a handler.
TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

PutObjectRequest request = new PutObjectRequest(
        existingBucketName, keyName, new File(filePath));

// Subscribe to the event and provide the event handler.
request.setGeneralProgressListener(new ProgressListener() {
    public void progressChanged(ProgressEvent event) {
        System.out.println("Transferred bytes: " +
                event.getBytesTransferred());
    }
});
Example
The following Java code uploads a file and uses the ProgressListener to track the upload
progress. For instructions on how to create and test a working sample, see Testing the Java Code
Examples (p. 564).

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TrackMPUProgressUsingHighLevelAPI {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide bucket name ***";
        String keyName            = "*** Provide object key ***";
        String filePath           = "*** File to upload ***";

        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

        // For more advanced uploads, you can create a request object
        // and supply additional request parameters (for example, progress
        // listeners, canned ACLs, and so on).
        PutObjectRequest request = new PutObjectRequest(
                existingBucketName, keyName, new File(filePath));

        // You can ask the upload for its progress, or you can
        // add a ProgressListener to your request to receive notifications
        // when bytes are transferred.
        request.setGeneralProgressListener(new ProgressListener() {
            @Override
            public void progressChanged(ProgressEvent progressEvent) {
                System.out.println("Transferred bytes: " +
                        progressEvent.getBytesTransferred());
            }
        });

        // TransferManager processes all transfers asynchronously,
        // so this call will return immediately.
        Upload upload = tm.upload(request);

        try {
            // You can block and wait for the upload to finish.
            upload.waitForCompletion();
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
Using the AWS Java SDK for Multipart Upload (Low-Level API)
Topics
• Upload a File (p. 177)
• List Multipart Uploads (p. 180)
• Abort a Multipart Upload (p. 180)

The AWS SDK for Java exposes a low-level API that closely resembles the Amazon S3 REST API
for multipart upload (see Uploading Objects Using Multipart Upload API (p. 165)). Use the low-level
API when you need to pause and resume multipart uploads, vary part sizes during the upload, or do
not know the size of the data in advance. Use the high-level API (see Using the AWS Java SDK for
Multipart Upload (High-Level API) (p. 172)) whenever you don't have these requirements.

Upload a File
The following tasks guide you through using the low-level Java classes to upload a file.

Low-Level API File Uploading Process
1. Create an instance of the AmazonS3Client class.
2. Initiate a multipart upload by executing the
   AmazonS3Client.initiateMultipartUpload method. You will need to provide the
   required information, that is, the bucket name and key name, to initiate the multipart
   upload by creating an instance of the InitiateMultipartUploadRequest class.
3. Save the upload ID that the AmazonS3Client.initiateMultipartUpload method
   returns. You will need to provide this upload ID for each subsequent multipart upload
   operation.
4. Upload parts. For each part upload, execute the AmazonS3Client.uploadPart
   method. You need to provide part upload information, such as the upload ID, bucket
   name, and the part number. You provide this information by creating an instance of the
   UploadPartRequest class.
5. Save the response of the AmazonS3Client.uploadPart method in a list. This
   response includes the ETag value and the part number you will need to complete the
   multipart upload.
6. Repeat tasks 4 and 5 for each part.
7. Execute the AmazonS3Client.completeMultipartUpload method to complete the
   multipart upload.
The following Java code sample demonstrates the preceding tasks.

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new
        InitiateMultipartUploadRequest(
                existingBucketName,
                keyName);
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);

File file = new File(filePath);
long contentLength = file.length();
long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

try {
    // Step 2: Upload parts.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        // Last part can be less than 5 MB. Adjust part size.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Create request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName(existingBucketName).withKey(keyName)
            .withUploadId(initResponse.getUploadId()).withPartNumber(i)
            .withFileOffset(filePosition)
            .withFile(file)
            .withPartSize(partSize);

        // Upload part and add response to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());

        filePosition += partSize;
    }

    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest = new
            CompleteMultipartUploadRequest(existingBucketName,
                                           keyName,
                                           initResponse.getUploadId(),
                                           partETags);

    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
            existingBucketName, keyName, initResponse.getUploadId()));
}
Example
The following Java code example uploads a file to an Amazon S3 bucket. For instructions on how to
create and test a working sample, see Testing the Java Code Examples (p. 564).

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;

public class UploadObjectMPULowLevelAPI {

    public static void main(String[] args) throws IOException {
        String existingBucketName = "*** ProvideYourExistingBucketName ***";
        String keyName            = "*** ProvideKeyName ***";
        String filePath           = "*** ProvideFilePath ***";

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Create a list of UploadPartResponse objects. You get one of these
        // for each part upload.
        List<PartETag> partETags = new ArrayList<PartETag>();

        // Step 1: Initialize.
        InitiateMultipartUploadRequest initRequest = new
             InitiateMultipartUploadRequest(existingBucketName, keyName);
        InitiateMultipartUploadResult initResponse =
            s3Client.initiateMultipartUpload(initRequest);

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5242880; // Set part size to 5 MB.

        try {
            // Step 2: Upload parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Last part can be less than 5 MB. Adjust part size.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(existingBucketName).withKey(keyName)
                    .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(partSize);

                // Upload part and add response to our list.
                partETags.add(
                        s3Client.uploadPart(uploadRequest).getPartETag());

                filePosition += partSize;
            }

            // Step 3: Complete.
            CompleteMultipartUploadRequest compRequest = new
                 CompleteMultipartUploadRequest(
                            existingBucketName,
                            keyName,
                            initResponse.getUploadId(),
                            partETags);

            s3Client.completeMultipartUpload(compRequest);
        } catch (Exception e) {
            s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
                    existingBucketName, keyName,
                    initResponse.getUploadId()));
        }
    }
}
List Multipart Uploads
The following tasks guide you through using the low-level Java classes to list all in-progress multipart
uploads on a bucket.

Low-Level API Multipart Uploads Listing Process
1. Create an instance of the ListMultipartUploadsRequest class and provide the
   bucket name.
2. Execute the AmazonS3Client.listMultipartUploads method. The method returns
   an instance of the MultipartUploadListing class that gives you information about
   the multipart uploads in progress.

The following Java code sample demonstrates the preceding tasks.

ListMultipartUploadsRequest allMultipartUploadsRequest =
    new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing multipartUploadListing =
    s3Client.listMultipartUploads(allMultipartUploadsRequest);
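The MultipartUploadListing returned above contains a page of in-progress uploads. As a minimal
sketch that assumes the s3Client and multipartUploadListing objects from the preceding sample, you
might print the key, upload ID, and initiation date of each in-progress upload as follows.

// Requires an import of com.amazonaws.services.s3.model.MultipartUpload.
for (MultipartUpload upload : multipartUploadListing.getMultipartUploads()) {
    System.out.println("Key: " + upload.getKey()
            + ", upload ID: " + upload.getUploadId()
            + ", initiated: " + upload.getInitiated());
}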
Abort a Multipart Upload
You can abort an in-progress multipart upload by calling the AmazonS3.abortMultipartUpload
method. This method deletes any parts that were uploaded to Amazon S3 and frees up the resources.
You must provide the upload ID, bucket name, and key name. The following Java code sample
demonstrates how to abort an in-progress multipart upload.

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

InitiateMultipartUploadRequest initRequest =
    new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
    s3Client.initiateMultipartUpload(initRequest);

s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
    existingBucketName, keyName, initResponse.getUploadId()));
Note
Instead of a specific multipart upload, you can abort all your multipart uploads initiated before
a specific time that are still in progress. This cleanup operation is useful to abort old multipart
uploads that you initiated but neither completed nor aborted. For more information, see Abort
Multipart Uploads (p. 173).
Using the AWS .NET SDK for Multipart Upload (High-Level API)
Topics
• Upload a File (p. 181)
• Upload a Directory (p. 183)
• Abort Multipart Uploads (p. 185)
• Track Multipart Upload Progress (p. 186)

The AWS SDK for .NET exposes a high-level API that simplifies multipart upload (see Uploading
Objects Using Multipart Upload API (p. 165)). You can upload data from a file, a directory, or a stream.
When uploading data from a file, if you don't provide the object's key name, the API uses the file
name for the object's key name. You must provide the object's key name if you are uploading data
from a stream. You can optionally set advanced options, such as the part size you want to use for the
multipart upload, the number of threads you want to use when uploading the parts concurrently, optional
file metadata, the storage class (STANDARD or REDUCED_REDUNDANCY), or the ACL. The high-level
API provides the TransferUtilityUploadRequest class to set these advanced options.

The TransferUtility class provides a method for you to abort multipart uploads in progress. You
must provide a DateTime value, and then the API aborts all the multipart uploads that were initiated
before the specified date and time.
Upload a File
The following tasks guide you through using the high-level .NET classes to upload a file. The API
provides several variations, or overloads, of the Upload method to easily upload your data.

High-Level API File Uploading Process
1. Create an instance of the TransferUtility class by providing your AWS credentials.
2. Execute one of the TransferUtility.Upload overloads, depending on whether you
   are uploading data from a file, a stream, or a directory.

The following C# code sample demonstrates the preceding tasks.

TransferUtility utility = new TransferUtility();
utility.Upload(filePath, existingBucketName);

When uploading large files using the .NET API, a timeout might occur even while
data is being written to the request stream. You can set an explicit timeout using the
TransferUtilityConfig.DefaultTimeout property, as demonstrated in the following C# code sample.

TransferUtilityConfig config = new TransferUtilityConfig();
config.DefaultTimeout = 11111;
TransferUtility utility = new TransferUtility(config);
Example
The following C# code example uploads a file to an Amazon S3 bucket. The example illustrates
the use of various TransferUtility.Upload overloads to upload a file; each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 566).

using System;
using System.IO;
using Amazon.S3;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class UploadFileMPUHighLevelAPI
    {
        static string existingBucketName = "*** Provide bucket name ***";
        static string keyName            = "*** Provide your object key ***";
        static string filePath           = "*** Provide file name ***";

        static void Main(string[] args)
        {
            try
            {
                TransferUtility fileTransferUtility = new
                    TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

                // 1. Upload a file, file name is used as the object key name.
                fileTransferUtility.Upload(filePath, existingBucketName);
                Console.WriteLine("Upload 1 completed");

                // 2. Specify object key name explicitly.
                fileTransferUtility.Upload(filePath,
                                           existingBucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // 3. Upload data from a type of System.IO.Stream.
                using (FileStream fileToUpload =
                    new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    fileTransferUtility.Upload(fileToUpload,
                                               existingBucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // 4. Specify advanced settings/options.
                TransferUtilityUploadRequest fileTransferUtilityRequest = new
                    TransferUtilityUploadRequest
                    {
                        BucketName = existingBucketName,
                        FilePath = filePath,
                        StorageClass = S3StorageClass.ReducedRedundancy,
                        PartSize = 6291456, // 6 MB.
                        Key = keyName,
                        CannedACL = S3CannedACL.PublicRead
                    };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
                fileTransferUtility.Upload(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message,
                                  s3Exception.InnerException);
            }
        }
    }
}
Upload a Directory
Using the TransferUtility class, you can also upload an entire directory. By default, Amazon S3
only uploads the files at the root of the specified directory. You can, however, specify recursively
uploading files in all of the subdirectories.

You can also specify filtering expressions to select files in the specified directory based on some
filtering criteria. For example, to upload only the .pdf files from a directory, you specify a "*.pdf" filter
expression.

When uploading files from a directory, you cannot specify the object's key name. It is constructed from
the file's location in the directory as well as its name. For example, assume you have a directory,
c:\myfolder, with the following structure:

C:\myfolder
      \a.txt
      \b.pdf
      \media\
             An.mp3

When you upload this directory, Amazon S3 uses the following key names:

a.txt
b.pdf
media/An.mp3

The following tasks guide you through using the high-level .NET classes to upload a directory.

High-Level API Directory Uploading Process
1. Create an instance of the TransferUtility class by providing your AWS credentials.
2. Execute one of the TransferUtility.UploadDirectory overloads.

The following C# code sample demonstrates the preceding tasks.

TransferUtility utility = new TransferUtility();
utility.UploadDirectory(directoryPath, existingBucketName);
Example
The following C# code example uploads a directory to an Amazon S3 bucket. The example illustrates
the use of various TransferUtility.UploadDirectory overloads to upload a directory; each
successive call to upload replaces the previous upload. For instructions on how to create and test a
working sample, see Running the Amazon S3 .NET Code Examples (p. 566).

using System;
using System.IO;
using Amazon.S3;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class UploadDirectoryMPUHighLevelAPI
    {
        static string existingBucketName = "*** Provide bucket name ***";
        static string directoryPath      = "*** Provide directory name ***";

        static void Main(string[] args)
        {
            try
            {
                TransferUtility directoryTransferUtility =
                    new TransferUtility(new
                        AmazonS3Client(Amazon.RegionEndpoint.USEast1));

                // 1. Upload a directory.
                directoryTransferUtility.UploadDirectory(directoryPath,
                                                         existingBucketName);
                Console.WriteLine("Upload statement 1 completed");

                // 2. Upload only the .txt files from a directory.
                //    Also search recursively.
                directoryTransferUtility.UploadDirectory(
                                               directoryPath,
                                               existingBucketName,
                                               "*.txt",
                                               SearchOption.AllDirectories);
                Console.WriteLine("Upload statement 2 completed");

                // 3. Same as 2 and some optional configuration.
                //    Search recursively for .txt files to upload.
                TransferUtilityUploadDirectoryRequest request =
                    new TransferUtilityUploadDirectoryRequest
                    {
                        BucketName = existingBucketName,
                        Directory = directoryPath,
                        SearchOption = SearchOption.AllDirectories,
                        SearchPattern = "*.txt"
                    };

                directoryTransferUtility.UploadDirectory(request);
                Console.WriteLine("Upload statement 3 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(e.Message, e.InnerException);
            }
        }
    }
}
Abort Multipart Uploads
The TransferUtility class provides a method, AbortMultipartUploads, to abort multipart
uploads in progress. An upload is considered to be in progress once you initiate it and until you
complete it or abort it. You provide a DateTime value, and this API aborts all the multipart uploads on
that bucket that were initiated before the specified DateTime and are still in progress.

Because you are billed for all storage associated with uploaded parts (see Multipart Upload and
Pricing (p. 167)), it is important that you either complete the multipart upload to have the object created
or abort the multipart upload to remove any uploaded parts.

The following tasks guide you through using the high-level .NET classes to abort multipart uploads.

High-Level API Multipart Uploads Aborting Process
1. Create an instance of the TransferUtility class by providing your AWS credentials.
2. Execute the TransferUtility.AbortMultipartUploads method by passing the
   bucket name and a DateTime value.

The following C# code sample demonstrates the preceding tasks.

TransferUtility utility = new TransferUtility();
utility.AbortMultipartUploads(existingBucketName, DateTime.Now.AddDays(-7));
Example
The following C# code aborts all multipart uploads in progress that were initiated on a specific bucket
over a week ago. For instructions on how to create and test a working sample, see Running the
Amazon S3 .NET Code Examples (p. 566).

using System;
using Amazon.S3;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class AbortMPUUsingHighLevelAPI
    {
        static string existingBucketName = "*** Provide bucket name ***";

        static void Main(string[] args)
        {
            try
            {
                TransferUtility transferUtility =
                    new TransferUtility(new
                        AmazonS3Client(Amazon.RegionEndpoint.USEast1));

                // Aborting uploads that were initiated over a week ago.
                transferUtility.AbortMultipartUploads(
                    existingBucketName, DateTime.Now.AddDays(-7));
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(e.Message, e.InnerException);
            }
        }
    }
}
Note
You can also abort a specific multipart upload. For more information, see Abort a Multipart
Upload (p. 194).
Track Multipart Upload Progress
The high-level multipart upload API provides an event,
TransferUtilityUploadRequest.UploadProgressEvent, to track the upload progress when
uploading data using the TransferUtility class.

The event occurs periodically and returns multipart upload progress information, such as the total
number of bytes to transfer and the number of bytes transferred at the time the event occurred.

The following C# code sample demonstrates how you can subscribe to the UploadProgressEvent
event and write a handler.
TransferUtility fileTransferUtility =
    new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

// Use TransferUtilityUploadRequest to configure options.
// In this example we subscribe to an event.
TransferUtilityUploadRequest uploadRequest =
    new TransferUtilityUploadRequest
    {
        BucketName = existingBucketName,
        FilePath = filePath,
        Key = keyName
    };

uploadRequest.UploadProgressEvent +=
    new EventHandler<UploadProgressArgs>
        (uploadRequest_UploadPartProgressEvent);

fileTransferUtility.Upload(uploadRequest);

static void uploadRequest_UploadPartProgressEvent(object sender,
    UploadProgressArgs e)
{
    // Process event.
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
Example
The following C# code example uploads a file to an Amazon S3 bucket and tracks the progress
by subscribing to the TransferUtilityUploadRequest.UploadProgressEvent event. For
instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 566).

using System;
using System.Collections.Specialized;
using System.Configuration;
using Amazon.S3;
using Amazon.S3.Transfer;

namespace s3.amazon.com.docsamples
{
    class TrackMPUUsingHighLevelAPI
    {
        static string existingBucketName = "*** Provide bucket name ***";
        static string keyName            = "*** Provide key name ***";
        static string filePath           = "*** Provide file to upload ***";

        static void Main(string[] args)
        {
            try
            {
                TransferUtility fileTransferUtility =
                    new TransferUtility(new
                        AmazonS3Client(Amazon.RegionEndpoint.USEast1));

                // Use TransferUtilityUploadRequest to configure options.
                // In this example we subscribe to an event.
                TransferUtilityUploadRequest uploadRequest =
                    new TransferUtilityUploadRequest
                    {
                        BucketName = existingBucketName,
                        FilePath = filePath,
                        Key = keyName
                    };

                uploadRequest.UploadProgressEvent +=
                    new EventHandler<UploadProgressArgs>
                        (uploadRequest_UploadPartProgressEvent);

                fileTransferUtility.Upload(uploadRequest);
                Console.WriteLine("Upload completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(e.Message, e.InnerException);
            }
        }

        static void uploadRequest_UploadPartProgressEvent(
            object sender, UploadProgressArgs e)
        {
            // Process event.
            Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
        }
    }
}
Using the AWS .NET SDK for Multipart Upload (Low-Level API)
Topics
• Upload a File (p. 190)
• List Multipart Uploads (p. 194)
• Track Multipart Upload Progress (p. 194)
• Abort a Multipart Upload (p. 194)

The AWS SDK for .NET exposes a low-level API that closely resembles the Amazon S3 REST API for
multipart upload (see Using the REST API for Multipart Upload (p. 205)). Use the low-level API when
you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know the
size of the data in advance. Use the high-level API (see Using the AWS .NET SDK for Multipart Upload
(High-Level API) (p. 181)) whenever you don't have these requirements.
Upload a File
The following tasks guide you through using the low-level .NET classes to upload a file.

Low-Level API File Uploading Process
1. Create an instance of the AmazonS3Client class by providing your AWS credentials.
2. Initiate a multipart upload by executing the
   AmazonS3Client.InitiateMultipartUpload method. You will need to provide
   the information required to initiate the multipart upload by creating an instance of the
   InitiateMultipartUploadRequest class.
3. Save the upload ID that the AmazonS3Client.InitiateMultipartUpload method
   returns. You will need to provide this upload ID for each subsequent multipart upload
   operation.
4. Upload the parts. For each part upload, execute the AmazonS3Client.UploadPart
   method. You will need to provide part upload information, such as the upload ID, bucket
   name, and the part number. You provide this information by creating an instance of the
   UploadPartRequest class.
5. Save the response of the AmazonS3Client.UploadPart method in a list. This
   response includes the ETag value and the part number you will later need to complete
   the multipart upload.
6. Repeat tasks 4 and 5 for each part.
7. Execute the AmazonS3Client.CompleteMultipartUpload method to complete the
   multipart upload.

The following C# code sample demonstrates the preceding tasks.
IAmazonS3 s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// List to store upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new
    InitiateMultipartUploadRequest
    {
        BucketName = existingBucketName,
        Key = keyName
    };

InitiateMultipartUploadResponse initResponse =
    s3Client.InitiateMultipartUpload(initiateRequest);

// 2. Upload Parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5242880; // 5 MB

try
{
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++)
    {
        // Create request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest
        {
            BucketName = existingBucketName,
            Key = keyName,
            UploadId = initResponse.UploadId,
            PartNumber = i,
            PartSize = partSize,
            FilePosition = filePosition,
            FilePath = filePath
        };

        // Upload part and add response to our list.
        uploadResponses.Add(s3Client.UploadPart(uploadRequest));

        filePosition += partSize;
    }

    // Step 3: Complete.
    CompleteMultipartUploadRequest completeRequest = new
        CompleteMultipartUploadRequest
        {
            BucketName = existingBucketName,
            Key = keyName,
            UploadId = initResponse.UploadId
        };
    completeRequest.AddPartETags(uploadResponses);

    CompleteMultipartUploadResponse completeUploadResponse =
        s3Client.CompleteMultipartUpload(completeRequest);
}
catch (Exception exception)
{
    Console.WriteLine("Exception occurred: {0}", exception.Message);

    AbortMultipartUploadRequest abortMPURequest = new
        AbortMultipartUploadRequest
        {
            BucketName = existingBucketName,
            Key = keyName,
            UploadId = initResponse.UploadId
        };
    s3Client.AbortMultipartUpload(abortMPURequest);
}
Note
When uploading large objects using the .NET API, a timeout might occur even while
data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.
Example
The following C# code example uploads a file to an Amazon S3 bucket. For instructions on how to
create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).

using System;
using System.Collections.Generic;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class UploadFileMPULowLevelAPI
    {
        static string existingBucketName = "*** bucket name ***";
        static string keyName            = "*** key name ***";
        static string filePath           = "*** file path ***";

        static void Main(string[] args)
        {
            IAmazonS3 s3Client = new
                AmazonS3Client(Amazon.RegionEndpoint.USEast1);

            // List to store upload part responses.
            List<UploadPartResponse> uploadResponses = new
                List<UploadPartResponse>();

            // 1. Initialize.
            InitiateMultipartUploadRequest initiateRequest = new
                InitiateMultipartUploadRequest
                {
                    BucketName = existingBucketName,
                    Key = keyName
                };

            InitiateMultipartUploadResponse initResponse =
                s3Client.InitiateMultipartUpload(initiateRequest);

            // 2. Upload Parts.
            long contentLength = new FileInfo(filePath).Length;
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

            try
            {
                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++)
                {
                    UploadPartRequest uploadRequest = new UploadPartRequest
                    {
                        BucketName = existingBucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId,
                        PartNumber = i,
                        PartSize = partSize,
                        FilePosition = filePosition,
                        FilePath = filePath
                    };

                    // Upload part and add response to our list.
                    uploadResponses.Add(s3Client.UploadPart(uploadRequest));

                    filePosition += partSize;
                }

                // Step 3: Complete.
                CompleteMultipartUploadRequest completeRequest = new
                    CompleteMultipartUploadRequest
                    {
                        BucketName = existingBucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId
                    };
                completeRequest.AddPartETags(uploadResponses);

                CompleteMultipartUploadResponse completeUploadResponse =
                    s3Client.CompleteMultipartUpload(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("Exception occurred: {0}",
                                  exception.Message);

                AbortMultipartUploadRequest abortMPURequest = new
                    AbortMultipartUploadRequest
                    {
                        BucketName = existingBucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId
                    };
                s3Client.AbortMultipartUpload(abortMPURequest);
            }
        }
    }
}
List Multipart Uploads
The following tasks guide you through using the low-level .NET classes to list all in-progress multipart
uploads on a bucket.

Low-Level API Multipart Uploads Listing Process
1. Create an instance of the ListMultipartUploadsRequest class and provide the
   bucket name.
2. Execute the AmazonS3Client.ListMultipartUploads method. The method
   returns an instance of the ListMultipartUploadsResponse class, providing you the
   information about the in-progress multipart uploads.

The following C# code sample demonstrates the preceding tasks.

ListMultipartUploadsRequest request = new ListMultipartUploadsRequest
{
    BucketName = existingBucketName
};
ListMultipartUploadsResponse response = s3Client.ListMultipartUploads(request);
Track Multipart Upload Progress
The low-level multipart upload API provides an event,
UploadPartRequest.StreamTransferProgress, to track the upload progress.

The event occurs periodically and returns multipart upload progress information, such as the total
number of bytes to transfer and the number of bytes transferred at the time the event occurred.

The following C# code sample demonstrates how you can subscribe to the
StreamTransferProgress event and write a handler.

UploadPartRequest uploadRequest = new UploadPartRequest
{
    // Provide request data.
};

uploadRequest.StreamTransferProgress +=
    new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

public static void UploadPartProgressEventCallback(object sender,
    StreamTransferProgressArgs e)
{
    // Process event.
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
Abort a Multipart Upload
You can abort an in-progress multipart upload by calling the AmazonS3Client.AbortMultipartUpload
method. This method deletes any parts that were uploaded to S3 and frees up the resources. You must
provide the upload ID, bucket name, and the key name. The following C# code sample demonstrates
how you can abort a multipart upload in progress.

s3Client.AbortMultipartUpload(new AbortMultipartUploadRequest
{
    BucketName = existingBucketName,
    Key = keyName,
    UploadId = uploadID
});

Note
Instead of a specific multipart upload, you can abort all your in-progress multipart uploads
initiated prior to a specific time. This cleanup operation is useful to abort old multipart uploads
that you initiated but neither completed nor aborted. For more information, see Abort Multipart
Uploads (p. 185).
Using the AWS PHP SDK for Multipart Upload (High-Level API)
Amazon S3 allows you to upload large files in multiple parts. You must use a multipart upload for
files larger than 5 GB. The AWS SDK for PHP exposes the high-level
Aws\S3\Model\MultipartUpload\UploadBuilder class that simplifies multipart uploads.

The Aws\S3\Model\MultipartUpload\UploadBuilder class is best used for a simple multipart
upload. If you need to pause and resume multipart uploads, vary part sizes during the upload, or do not
know the size of the data in advance, you should use the low-level PHP API. For more information, see
Using the AWS PHP SDK for Multipart Upload (Low-Level API) (p. 200).

For more information about multipart uploads, see Uploading Objects Using Multipart Upload
API (p. 165). For information on uploading files that are less than 5 GB in size, see Upload an Object
Using the AWS SDK for PHP (p. 161).
Upload a File Using the High-Level Multipart Upload
This topic guides you through using the high-level Aws\S3\Model\MultipartUpload\UploadBuilder
class from the AWS SDK for PHP for multipart file uploads.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly
installed.

High-Level Multipart File Upload Process
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
   method.
2. Create an instance of the UploadBuilder using the Amazon S3
   Aws\S3\Model\MultipartUpload\UploadBuilder class newInstance() method, which is inherited
   from the Aws\Common\Model\MultipartUpload\AbstractUploadBuilder class. For the
   UploadBuilder object, set the client, the bucket name, and the key name using the
   setClient(), setBucket(), and setKey() methods. Set the path and name of the file you
   want to upload with the setSource() method.
3. Execute the UploadBuilder object's build() method to build the appropriate uploader
   transfer object based on the builder options you set. (The transfer object is of a subclass
   of the Aws\S3\Model\MultipartUpload\AbstractTransfer class.)
4. Execute the upload() method of the built transfer object to perform the upload.
The following PHP code sample demonstrates how to upload a file using the high-level
UploadBuilder object.

use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Prepare the upload parameters.
$uploader = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource('/path/to/large/file.mov')
    ->setBucket($bucket)
    ->setKey($keyname)
    ->build();

// Perform the upload. Abort the upload if something goes wrong.
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
    echo $e->getMessage() . "\n";
}
Example of a Multipart Upload of a File to an Amazon S3 Bucket Using the High-Level
UploadBuilder

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how
to set advanced options for the UploadBuilder object. For example, you can use the setMinPartSize()
method to set the part size you want to use for the multipart upload and the setOption() method to set
optional file metadata or an access control list (ACL).

The example also demonstrates how to upload file parts in parallel by setting the concurrency option
using the setConcurrency() method for the UploadBuilder object. The example creates a transfer object
that will attempt to upload three parts in parallel until the entire file has been uploaded. For information
about running the PHP examples in this guide, go to Running PHP Examples (p. 567).

// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Prepare the upload parameters.
$uploader = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource('/path/to/large/file.mov')
    ->setBucket($bucket)
    ->setKey($keyname)
    ->setMinPartSize(25 * 1024 * 1024)
    ->setOption('Metadata', array(
        'param1' => 'value1',
        'param2' => 'value2'
    ))
    ->setOption('ACL', 'public-read')
    ->setConcurrency(3)
    ->build();

// Perform the upload. Abort the upload if something goes wrong.
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
    echo $e->getMessage() . "\n";
}
Related Resources
• AWS SDK for PHP Aws\Common\Model\MultipartUpload\AbstractUploadBuilder Class
• AWS SDK for PHP Aws\Common\Model\MultipartUpload\AbstractUploadBuilder::newInstance() Method
• AWS SDK for PHP Aws\Common\Model\MultipartUpload\AbstractUploadBuilder::setSource() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\Model\MultipartUpload\UploadBuilder Class
• AWS SDK for PHP for Amazon S3 Aws\S3\Model\MultipartUpload\UploadBuilder::build() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\Model\MultipartUpload\UploadBuilder::setMinPartSize() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\Model\MultipartUpload\UploadBuilder::setOption() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\Model\MultipartUpload\UploadBuilder::setConcurrency() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Uploading Large Files Using Multipart Uploads
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Using the AWS PHP SDK for Multipart Upload (Low-Level API)
Topics
• Upload a File in Multiple Parts Using the PHP SDK Low-Level API (p. 200)
• List Multipart Uploads Using the Low-Level AWS SDK for PHP API (p. 203)
• Abort a Multipart Upload (p. 203)

The AWS SDK for PHP exposes a low-level API that closely resembles the Amazon S3 REST API for
multipart upload (see Using the REST API for Multipart Upload (p. 205)). Use the low-level API when
you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know the
size of the data in advance. Use the AWS SDK for PHP high-level abstractions (see Using the AWS
PHP SDK for Multipart Upload (High-Level API) (p. 196)) whenever you don't have these requirements.
Upload a File in Multiple Parts Using the PHP SDK Low-Level API
This topic guides you through using low-level multipart upload classes from the AWS SDK for PHP to
upload a file in multiple parts.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly
installed.

PHP SDK Low-Level API Multipart File Upload Process
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
   method.
2. Initiate the multipart upload by executing the Aws\S3\S3Client::createMultipartUpload()
   method. You must provide a bucket name and a key name in the array parameter's
   required keys, Bucket and Key.
   Retrieve and save the UploadId from the response body. The UploadId is used in
   each subsequent multipart upload operation.
3. Upload the file in parts by executing the Aws\S3\S3Client::uploadPart() method for
   each file part until the end of the file is reached. The required array parameter keys for
   uploadPart() are Bucket, Key, UploadId, and PartNumber. You must increment
   the value passed as the argument for the PartNumber key for each subsequent call to
   uploadPart() to upload each successive file part.
   Save the response of each of the uploadPart() method calls in an array. Each
   response includes the ETag value you will later need to complete the multipart upload.
4. Execute the Aws\S3\S3Client::completeMultipartUpload() method to complete the
   multipart upload. The required array parameters for completeMultipartUpload()
   are Bucket, Key, and UploadId.
The following PHP code example demonstrates uploading a file in multiple parts using the PHP SDK
low-level API.

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

// 1. Instantiate the client.
$s3 = S3Client::factory();

// 2. Create a new multipart upload and get the upload ID.
$response = $s3->createMultipartUpload(array(
    'Bucket' => $bucket,
    'Key'    => $keyname
));
$uploadId = $response['UploadId'];

// 3. Upload the file in parts.
$file = fopen($filename, 'r');
$parts = array();
$partNumber = 1;
while (!feof($file)) {
    $result = $s3->uploadPart(array(
        'Bucket'     => $bucket,
        'Key'        => $keyname,
        'UploadId'   => $uploadId,
        'PartNumber' => $partNumber,
        'Body'       => fread($file, 5 * 1024 * 1024),
    ));
    $parts[] = array(
        'PartNumber' => $partNumber++,
        'ETag'       => $result['ETag'],
    );
}

// 4. Complete multipart upload.
$result = $s3->completeMultipartUpload(array(
    'Bucket'   => $bucket,
    'Key'      => $keyname,
    'UploadId' => $uploadId,
    'Parts'    => $parts,
));
$url = $result['Location'];

fclose($file);
Example of Uploading a File to an Amazon S3 Bucket Using the Low-Level Multipart
Upload PHP SDK API

The following PHP code example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, go to Running PHP
Examples (p. 567).

// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

// 1. Instantiate the client.
$s3 = S3Client::factory();

// 2. Create a new multipart upload and get the upload ID.
$result = $s3->createMultipartUpload(array(
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'ACL'          => 'public-read',
    'Metadata'     => array(
        'param1' => 'value 1',
        'param2' => 'value 2',
        'param3' => 'value 3'
    )
));
$uploadId = $result['UploadId'];

// 3. Upload the file in parts.
try {
    $file = fopen($filename, 'r');
    $parts = array();
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart(array(
            'Bucket'     => $bucket,
            'Key'        => $keyname,
            'UploadId'   => $uploadId,
            'PartNumber' => $partNumber,
            'Body'       => fread($file, 5 * 1024 * 1024),
        ));
        $parts[] = array(
            'PartNumber' => $partNumber++,
            'ETag'       => $result['ETag'],
        );

        echo "Uploading part {$partNumber} of {$filename}.\n";
    }
    fclose($file);
} catch (S3Exception $e) {
    $result = $s3->abortMultipartUpload(array(
        'Bucket'   => $bucket,
        'Key'      => $keyname,
        'UploadId' => $uploadId
    ));

    echo "Upload of {$filename} failed.\n";
}

// 4. Complete multipart upload.
$result = $s3->completeMultipartUpload(array(
    'Bucket'   => $bucket,
    'Key'      => $keyname,
    'UploadId' => $uploadId,
    'Parts'    => $parts,
));
$url = $result['Location'];

echo "Uploaded {$filename} to {$url}.\n";
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::createMultipartUpload() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::uploadPart() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::completeMultipartUpload() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
List Multipart Uploads Using the Low-Level AWS SDK for PHP API
This topic guides you through using the low-level API classes from the AWS SDK for PHP to list all
in-progress multipart uploads on a bucket.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly
installed.

PHP SDK Low-Level API Multipart Uploads Listing Process
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
   method.
2. Execute the Aws\S3\S3Client::listMultipartUploads() method by providing a bucket
   name. The method returns all of the in-progress multipart uploads on the specified
   bucket.

The following PHP code sample demonstrates listing all in-progress multipart uploads on a bucket.

use Aws\S3\S3Client;

$s3 = S3Client::factory();

$bucket = '*** Your Bucket Name ***';

$result = $s3->listMultipartUploads(array('Bucket' => $bucket));

print_r($result->toArray());
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listMultipartUploads() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Abort a Multipart Upload
This topic describes how to use a class from the AWS SDK for PHP to abort a multipart upload that is
in progress.

Note
This topic assumes that you are already following the instructions for Using the AWS SDK
for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly
installed.

Aborting a Multipart Upload
1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
   method.
2. Execute the Aws\S3\S3Client::abortMultipartUpload() method. You must provide a bucket
   name, a key name, and the upload ID in the array parameter's required keys, Bucket,
   Key, and UploadId.
   The abortMultipartUpload() method deletes any parts that were uploaded to
   Amazon S3 and frees up the resources.
Example of Aborting a Multipart Upload

The following PHP code example demonstrates how you can abort a multipart upload in progress. The
example illustrates the use of the abortMultipartUpload() method. For information about running
the PHP examples in this guide, go to Running PHP Examples (p. 567).

// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Abort the multipart upload.
$s3->abortMultipartUpload(array(
    'Bucket'   => $bucket,
    'Key'      => $keyname,
    'UploadId' =>
        'VXBsb2FkIElExampleBlbHZpbmcncyBtExamplepZS5tMnRzIHVwbG9hZ',
));
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::abortMultipartUpload() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Using the AWS SDK for Ruby for Multipart Upload
The AWS SDK for Ruby supports Amazon S3 multipart uploads by using the
Aws::S3::MultipartUpload class. For more information about using the AWS SDK for Ruby with
Amazon S3, go to Using the AWS SDK for Ruby Version 2 (p. 568).
Using the REST API for Multipart Upload
The following sections in the Amazon Simple Storage Service API Reference describe the REST API
for multipart upload:
• Initiate Multipart Upload
• Upload Part
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide.
For more information about the SDKs, see API Support for Multipart Upload (p. 169).
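As a rough illustration of how these operations fit together, the following abbreviated request
sequence initiates an upload, uploads one part, and completes the upload. The bucket name, object
key, and upload ID shown are placeholders, and the authentication, date, and content headers are
omitted; see the API reference topics listed above for the complete request and response syntax.

POST /example-object?uploads HTTP/1.1
Host: example-bucket.s3.amazonaws.com

(The response body returns an UploadId for the new multipart upload.)

PUT /example-object?partNumber=1&uploadId=EXAMPLE-UPLOAD-ID HTTP/1.1
Host: example-bucket.s3.amazonaws.com

(Repeat the Upload Part request for each part; each response returns an ETag.)

POST /example-object?uploadId=EXAMPLE-UPLOAD-ID HTTP/1.1
Host: example-bucket.s3.amazonaws.com

<CompleteMultipartUpload>
  <Part>
    <PartNumber>1</PartNumber>
    <ETag>"etag-returned-by-upload-part"</ETag>
  </Part>
</CompleteMultipartUpload>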
Uploading Objects Using Pre-Signed URLs
Topics
• Upload an Object Using a Pre-Signed URL (AWS SDK for Java) (p. 206)
• Upload an Object Using a Pre-Signed URL (AWS SDK for .NET) (p. 209)
• Upload an Object Using a Pre-Signed URL (AWS SDK for Ruby) (p. 211)

A pre-signed URL gives you access to the object identified in the URL, provided that the creator
of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed
URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the
necessary permissions to upload that object.

All objects and buckets are private by default. Pre-signed URLs are useful if you want your user or
customer to be able to upload a specific object to your bucket, but you don't require them to have AWS
security credentials or permissions. When you create a pre-signed URL, you must provide your
security credentials and specify a bucket name, an object key, an HTTP method (PUT for uploading
objects), and an expiration date and time. Pre-signed URLs are valid only for the specified
duration.

You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS
SDK for .NET. If you are using Visual Studio, you can also use AWS Explorer to generate a pre-signed
object URL without writing any code. Anyone who receives a valid pre-signed URL can then
programmatically upload an object.

For more information, go to Using Amazon S3 from AWS Explorer.

For instructions about how to install AWS Explorer, see Using the AWS SDKs, CLI, and
Explorers (p. 560).

Note
Anyone with valid security credentials can create a pre-signed URL. However, in order to
successfully upload an object, the pre-signed URL must be created by someone who has
permission to perform the operation that the pre-signed URL is based upon.
    Upload an Object Using a PreSigned URL (AWS SDK for Java)
The following tasks guide you through using the Java classes to upload an object using a pre-signed URL.
Uploading Objects
1 Create an instance of the AmazonS3 class.
2 Generate a pre-signed URL by executing the AmazonS3.generatePresignedUrl method.
You provide a bucket name, an object key, and an expiration date by creating an instance of the GeneratePresignedUrlRequest class. You must specify the HTTP verb PUT when creating this URL if you want to use it to upload an object.
3 Anyone with the pre-signed URL can upload an object.
The upload creates an object or replaces any existing object with the same key that is specified in the pre-signed URL.
The following Java code sample demonstrates the preceding tasks.

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

java.util.Date expiration = new java.util.Date();
long msec = expiration.getTime();
msec += 1000 * 60 * 60; // Add 1 hour.
expiration.setTime(msec);

GeneratePresignedUrlRequest generatePresignedUrlRequest =
    new GeneratePresignedUrlRequest(bucketName, objectKey);
generatePresignedUrlRequest.setMethod(HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);

URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);

// Use the pre-signed URL to upload an object.
    Example
The following Java code example generates a pre-signed URL. The example code then uses the pre-signed URL to upload sample data as an object. For instructions about how to create and test a working sample, see Testing the Java Code Examples (p 564).
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePresignedUrlAndUploadObject {
    private static String bucketName = "*** bucket name ***";
    private static String objectKey  = "*** object key ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Generating pre-signed URL.");
            java.util.Date expiration = new java.util.Date();
            long milliSeconds = expiration.getTime();
            milliSeconds += 1000 * 60 * 60; // Add 1 hour.
            expiration.setTime(milliSeconds);

            GeneratePresignedUrlRequest generatePresignedUrlRequest =
                new GeneratePresignedUrlRequest(bucketName, objectKey);
            generatePresignedUrlRequest.setMethod(HttpMethod.PUT);
            generatePresignedUrlRequest.setExpiration(expiration);

            URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
            UploadObject(url);

            System.out.println("Pre-Signed URL = " + url.toString());
        } catch (AmazonServiceException exception) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:  " + exception.getMessage());
            System.out.println("HTTP Code:      " + exception.getStatusCode());
            System.out.println("AWS Error Code: " + exception.getErrorCode());
            System.out.println("Error Type:     " + exception.getErrorType());
            System.out.println("Request ID:     " + exception.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to communicate " +
                    "with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    public static void UploadObject(URL url) throws IOException
    {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        OutputStreamWriter out = new OutputStreamWriter(
                connection.getOutputStream());
        out.write("This text uploaded as object.");
        out.close();
        int responseCode = connection.getResponseCode();
        System.out.println("Service returned response code " + responseCode);
    }
}
    Upload an Object Using a PreSigned URL (AWS SDK for NET)
The following tasks guide you through using the .NET classes to upload an object using a pre-signed URL.
Uploading Objects
1 Create an instance of the AmazonS3 class. These credentials are used in creating a signature for authentication when you generate a pre-signed URL.
2 Generate a pre-signed URL by executing the AmazonS3.GetPreSignedURL method.
You provide a bucket name, an object key, and an expiration date by creating an instance of the GetPreSignedUrlRequest class. You must specify the HTTP verb PUT when creating this URL if you plan to use it to upload an object.
3 Anyone with the pre-signed URL can upload an object. You can create an instance of the HttpWebRequest class by providing the pre-signed URL and uploading the object.
The following C# code sample demonstrates the preceding tasks.

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Generate a pre-signed URL.
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = objectKey,
    Verb = HttpVerb.PUT,
    Expires = DateTime.Now.AddMinutes(5)
};
string url = null;
url = client.GetPreSignedURL(request);

// Upload a file using the pre-signed URL.
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
    // Upload object.
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
    Example
The following C# code example generates a pre-signed URL for a specific object and uses it to upload a file. For instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p 566).
    using System
    using SystemIO
    using SystemNet
    using AmazonS3
    using AmazonS3Model
    namespace s3amazoncomdocsamples
    {
    class UploadObjcetUsingPresignedURL
    {
    static IAmazonS3 s3Client
    File to upload
    static string filePath *** Specify file to upload ***
    Information to generate presigned object URL
    static string bucketName *** Provide bucket name ***
    static string objectKey *** Provide object key for the new object
    ***
    public static void Main(string[] args)
    {
    try
    {
    using (s3Client new
    AmazonS3Client(AmazonRegionEndpointUSEast1))
    {
    string url GeneratePreSignedURL()
    UploadObject(url)
    }
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
    if (amazonS3ExceptionErrorCode null &&
    (amazonS3ExceptionErrorCodeEquals(InvalidAccessKeyId)
    ||
    amazonS3ExceptionErrorCodeEquals(InvalidSecurity)))
    {
    ConsoleWriteLine(Check the provided AWS Credentials)
    ConsoleWriteLine(
    To sign up for service go to httpawsamazoncom
    s3)
    }
    else
    {
    ConsoleWriteLine(
    Error occurred Message'{0}' when listing objects
    amazonS3ExceptionMessage)
    }
    }
    catch (Exception e)
    {
    ConsoleWriteLine(eMessage)
    }
    ConsoleWriteLine(Press any key to continue)
    ConsoleReadKey()
    }
    static void UploadObject(string url)
    {
    HttpWebRequest httpRequest WebRequestCreate(url) as
    HttpWebRequest
    httpRequestMethod PUT
    using (Stream dataStream httpRequestGetRequestStream())
    {
    byte[] buffer new byte[8000]
    using (FileStream fileStream new FileStream(filePath
    FileModeOpen FileAccessRead))
    {
    int bytesRead 0
    while ((bytesRead fileStreamRead(buffer 0
    bufferLength)) > 0)
    {
    dataStreamWrite(buffer 0 bytesRead)
    }
    }
    }
    HttpWebResponse response httpRequestGetResponse() as
    HttpWebResponse
    }
    static string GeneratePreSignedURL()
    {
    GetPreSignedUrlRequest request new GetPreSignedUrlRequest
    {
    BucketName bucketName
    Key objectKey
    Verb HttpVerbPUT
    Expires DateTimeNowAddMinutes(5)
    }

    string url null
    url s3ClientGetPreSignedURL(request)
    return url
    }
    }
    }
    Upload an Object Using a PreSigned URL (AWS SDK for Ruby)
The following tasks guide you through using a Ruby script to upload an object using a pre-signed URL, for either version of the SDK for Ruby.
    Topics
    • Using AWS SDK for Ruby Version 2 (p 211)
    • Using AWS SDK for Ruby Version 1 (p 212)
    Using AWS SDK for Ruby Version 2
The following tasks guide you through using a Ruby script to upload an object using a pre-signed URL for SDK for Ruby Version 2.
Uploading Objects - SDK for Ruby Version 2
1 Create an instance of the Aws::S3::Resource class.
2 You provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of your Aws::S3::Resource class instance.
Generate a pre-signed URL by creating an instance of the URI class, and use it to parse the .presigned_url method of your Aws::S3::Resource class instance. You must specify :put as an argument to presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you want to upload an object.
3 Anyone with the pre-signed URL can upload an object.
The upload creates an object or replaces any existing object with the same key that is specified in the pre-signed URL.
The following Ruby code sample demonstrates the preceding tasks for SDK for Ruby Version 2.

# Uploading an object using a pre-signed URL for SDK for Ruby - Version 2.

require 'aws-sdk-resources'
require 'net/http'

s3 = Aws::S3::Resource.new(region: 'us-west-2')

obj = s3.bucket('BucketName').object('KeyName')
# Replace BucketName with the name of your bucket.
# Replace KeyName with the name of the object you are creating or replacing.

url = URI.parse(obj.presigned_url(:put))

body = "Hello World"
# This is the contents of your object. In this case, it's a simple string.

Net::HTTP.start(url.host) do |http|
  http.send_request("PUT", url.request_uri, body, {
    # This is required, or Net::HTTP will add a default unsigned content-type.
    "content-type" => "",
  })
end

puts obj.get.body.read
    # This will print out the contents of your object to the terminal window
    Using AWS SDK for Ruby Version 1
Uploading Objects - SDK for Ruby Version 1
1 Create an instance of the AWS::S3 class.
2 You provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of your AWS::S3::S3Object class instance.
Generate a pre-signed URL by calling the url_for method of your AWS::S3 class instance. You must specify :put as an argument to url_for, and you must specify PUT to Net::HTTP::Session#send_request if you want to upload an object.
3 Anyone with the pre-signed URL can upload an object.
The upload creates an object or replaces any existing object with the same key that is specified in the pre-signed URL.
The following Ruby code sample demonstrates the preceding tasks for AWS SDK for Ruby Version 1.

# Uploading an object using a pre-signed URL for SDK for Ruby - Version 1.

require 'aws-sdk-v1'
require 'net/http'

s3 = AWS::S3.new(region: 'us-west-2')

obj = s3.buckets['BucketName'].objects['KeyName']
# Replace BucketName with the name of your bucket.
# Replace KeyName with the name of the object you are creating or replacing.

url = obj.url_for(:write, :content_type => "text/plain")

body = "Hello World"
# This is the contents of your object. In this case, it's a simple string.

Net::HTTP.start(url.host) do |http|
  http.send_request("PUT", url.request_uri, body, { "content-type" => "text/plain" })
  # The content-type must be specified in the pre-signed URL.
end

puts obj.read
# This will print out the contents of your object to the terminal window.

puts obj.content_type
# This will print out the content type of your object to the terminal window.
    Copying Objects
    Topics
    • Related Resources (p 213)
    • Copying Objects in a Single Operation (p 213)
    • Copying Objects Using the Multipart Upload API (p 223)
The copy operation creates a copy of an object that is already stored in Amazon S3. You can create a copy of your object up to 5 GB in a single atomic operation. However, to copy an object that is greater than 5 GB, you must use the multipart upload API. Using the copy operation, you can:
• Create additional copies of objects
• Rename objects by copying them and deleting the original ones
• Move objects across Amazon S3 locations (e.g., us-west-1 and EU)
• Change object metadata
Each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata. In the copy operation you set the same object as the source and target.
Each object has metadata. Some of it is system metadata and other user-defined. Users control some of the system metadata, such as the storage class configuration to use for the object, and can configure server-side encryption. When you copy an object, user-controlled system metadata and user-defined metadata are also copied. Amazon S3 resets the system-controlled metadata; for example, when you copy an object, Amazon S3 resets the creation date of the copied object. You don't need to set any of these values in your copy request.
When copying an object, you might decide to update some of the metadata values. For example, if your source object is configured to use standard storage, you might choose to use reduced redundancy storage for the object copy. You might also decide to alter some of the user-defined metadata values present on the source object. Note that if you choose to update any of the object's user-configurable metadata (system or user-defined) during the copy, then you must explicitly specify all of the user-configurable metadata present on the source object in your request, even if you are changing only one of the metadata values.
For more information about the object metadata, see Object Key and Metadata (p 99).
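For example, the following minimal sketch (not one of this guide's numbered examples) shows how a copy request made with the AWS SDK for Java might replace both the storage class and the user-configurable metadata of the copy. The s3client instance, the bucket and key variables, the content type, and the user-defined metadata key shown here are placeholders you would supply.

// Minimal sketch: copy an object and replace its user-configurable metadata.
// Assumes an existing AmazonS3 client (s3client) and source/destination
// bucket and key variables; the metadata values are placeholders.
ObjectMetadata newMetadata = new ObjectMetadata();
newMetadata.setContentType("text/plain");         // system metadata you control
newMetadata.addUserMetadata("reviewed", "true");  // user-defined metadata

CopyObjectRequest copyRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, destinationBucketName, destinationKey)
    .withStorageClass(StorageClass.ReducedRedundancy)  // change the storage class
    .withNewObjectMetadata(newMetadata);               // replaces ALL user-configurable metadata

s3client.copyObject(copyRequest);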
Note
Copying objects across locations incurs bandwidth charges.
Note
If the source object is archived in Amazon Glacier (the storage class of the object is GLACIER), you must first restore a temporary copy before you can copy the object to another bucket. For information about archiving objects, see GLACIER Storage Class: Additional Lifecycle Configuration Considerations (p 124).
When copying objects, you can request Amazon S3 to save the target object encrypted using an AWS Key Management Service (KMS) encryption key, an Amazon S3-managed encryption key, or a customer-provided encryption key. Accordingly, you must specify encryption information in your request. If the copy source is an object that is stored in Amazon S3 using server-side encryption with a customer-provided key, you will need to provide encryption information in your request so Amazon S3 can decrypt the object for copying. For more information, see Protecting Data Using Encryption (p 380).
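As a rough illustration (again, not one of this guide's numbered examples), a copy request made with the AWS SDK for Java could ask Amazon S3 to encrypt the target object with an AWS KMS key. The s3client instance, the bucket and key variables, and the KMS key ID below are placeholders you would supply.

// Minimal sketch: request SSE-KMS encryption for the target object of a copy.
// s3client, the bucket/key variables, and the KMS key ID are assumed placeholders.
CopyObjectRequest copyRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, targetBucketName, targetKey)
    .withSSEAwsKeyManagementParams(
        new SSEAwsKeyManagementParams("your-kms-key-id"));

s3client.copyObject(copyRequest);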
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Copying Objects in a Single Operation
    Topics
    • Copy an Object Using the AWS SDK for Java (p 214)
    • Copy an Object Using the AWS SDK for NET (p 215)
    • Copy an Object Using the AWS SDK for PHP (p 218)
    • Copy an Object Using the AWS SDK for Ruby (p 221)
    • Copy an Object Using the REST API (p 221)
The examples in this section show how to copy objects up to 5 GB in a single operation. For copying objects greater than 5 GB, you must use the multipart upload API. For more information, see Copying Objects Using the Multipart Upload API (p 223).
    Copy an Object Using the AWS SDK for Java
The following tasks guide you through using the Java classes to copy an object in Amazon S3.
Copying Objects
1 Create an instance of the AmazonS3Client class.
2 Execute one of the AmazonS3Client.copyObject methods. You need to provide the request information, such as source bucket name, source key name, destination bucket name, and destination key. You provide this information by creating an instance of the CopyObjectRequest class or, optionally, by providing this information directly with the AmazonS3Client.copyObject method.
The following Java code sample demonstrates the preceding tasks.
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
s3client.copyObject(sourceBucketName, sourceKey,
                    destinationBucketName, destinationKey);
    Example
The following Java code example makes a copy of an object. The copied object with a different key is saved in the same source bucket. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p 564).
import java.io.IOException;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;

public class CopyObjectSingleOperation {
    private static String bucketName     = "*** Provide bucket name ***";
    private static String key            = "*** Provide key ***";
    private static String destinationKey = "*** Provide dest key ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            // Copying object
            CopyObjectRequest copyObjRequest = new CopyObjectRequest(
                    bucketName, key, bucketName, destinationKey);
            System.out.println("Copying object.");
            s3client.copyObject(copyObjRequest);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error " +
                    "response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Copy an Object Using the AWS SDK for NET
The following tasks guide you through using the .NET classes to copy an object within Amazon S3, from one bucket to another or within the same bucket.
    Copying Objects
1 Create an instance of the AmazonS3 class.
2 Execute one of the AmazonS3.CopyObject methods. You need to provide information such as the source bucket, source key name, target bucket, and target key name. You provide this information by creating an instance of the CopyObjectRequest class.
The following C# code sample demonstrates the preceding tasks.

static IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

CopyObjectRequest request = new CopyObjectRequest()
{
    SourceBucket = bucketName,
    SourceKey = objectKey,
    DestinationBucket = bucketName,
    DestinationKey = destObjectKey
};
CopyObjectResponse response = client.CopyObject(request);
    Example
The following C# code example makes a copy of an object. You will need to update the code and provide your bucket names and object keys. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p 566).
    using System
    using AmazonS3
    using AmazonS3Model
    namespace s3amazoncomdocsamples
    {
    class CopyObject
    {
    static string sourceBucket *** Bucket on which to enable
    logging ***
    static string destinationBucket *** Bucket where you want logs
    stored ***
    static string objectKey *** Provide key name ***
    static string destObjectKey *** Provide destination key name
    ***
    static IAmazonS3 client
    public static void Main(string[] args)
    {
    using (client new
    AmazonS3Client(AmazonRegionEndpointUSEast1))
    {
    ConsoleWriteLine(Copying an object)
    CopyingObject()
    }
    ConsoleWriteLine(Press any key to continue)
    ConsoleReadKey()
    }
    static void CopyingObject()
    {
    try
    {
    CopyObjectRequest request new CopyObjectRequest
    {
    SourceBucket sourceBucket
    SourceKey objectKey
    DestinationBucket destinationBucket
    DestinationKey destObjectKey
    }
    CopyObjectResponse response clientCopyObject(request)
    }
    catch (AmazonS3Exception s3Exception)
    {
    ConsoleWriteLine(s3ExceptionMessage
    s3ExceptionInnerException)
    }
    }
    }
    }
    Copy an Object Using the AWS SDK for PHP
    This topic guides you through using classes from the AWS SDK for PHP to copy a single object and
    multiple objects within Amazon S3 from one bucket to another or within the same bucket
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
The following tasks guide you through using PHP SDK classes to copy an object that is already stored in Amazon S3.
Copying an Object
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2 To copy an object, execute the Aws\S3\S3Client::copyObject() method. You need to provide information such as source bucket, source key name, target bucket, and target key name.
The following PHP code sample demonstrates using the copyObject() method to copy an object that is already stored in Amazon S3.
use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Key Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Copy an object.
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
The following tasks guide you through using PHP classes to make multiple copies of an object within Amazon S3.
Copying Objects
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2 To make multiple copies of an object, you execute a batch of calls to the Amazon S3 client getCommand() method, which is inherited from the Guzzle\Service\Client class. You provide the CopyObject command as the first argument and an array containing the source bucket, source key name, target bucket, and target key name as the second argument.
The following PHP code sample demonstrates making multiple copies of an object that is stored in Amazon S3.
use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Key Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Perform a batch of CopyObject operations.
$batch = array();
for ($i = 1; $i <= 3; $i++) {
    $batch[] = $s3->getCommand('CopyObject', array(
        'Bucket'     => $targetBucket,
        'Key'        => "{$targetKeyname}-{$i}",
        'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
    ));
}
try {
    $successful = $s3->execute($batch);
    $failed = array();
} catch (\Guzzle\Service\Exception\CommandTransferException $e) {
    $successful = $e->getSuccessfulCommands();
    $failed = $e->getFailedCommands();
}
    Example of Copying Objects within Amazon S3
The following PHP example illustrates the use of the copyObject() method to copy a single object within Amazon S3, and the use of a batch of calls to CopyObject, via the getCommand() method, to make multiple copies of an object.
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Copy an object.
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => "{$sourceKeyname}-copy",
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));

// Perform a batch of CopyObject operations.
$batch = array();
for ($i = 1; $i <= 3; $i++) {
    $batch[] = $s3->getCommand('CopyObject', array(
        'Bucket'     => $targetBucket,
        'Key'        => "{$sourceKeyname}-copy-{$i}",
        'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
    ));
}
try {
    $successful = $s3->execute($batch);
    $failed = array();
} catch (\Guzzle\Service\Exception\CommandTransferException $e) {
    $successful = $e->getSuccessfulCommands();
    $failed = $e->getFailedCommands();
}
    Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::copyObject() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Guzzle\Service\Client Class
• AWS SDK for PHP for Amazon S3 Guzzle\Service\Client::getCommand() Method
    • AWS SDK for PHP for Amazon S3
    • AWS SDK for PHP Documentation
    Copy an Object Using the AWS SDK for Ruby
The following tasks guide you through using the Ruby classes to copy an object in Amazon S3, from one bucket to another or within the same bucket.
Copying Objects
1 Create an instance of the AWS::S3 class by providing your AWS credentials.
2 Execute either the AWS::S3::S3Object#copy_to or AWS::S3::S3Object#copy_from method. You need to provide the request information, such as source bucket name, source key name, destination bucket name, and destination key.
The following Ruby code sample demonstrates the preceding tasks using the #copy_to method to copy an object from one bucket to another.
s3 = AWS::S3.new

bucket1 = s3.buckets[source_bucket]
bucket2 = s3.buckets[target_bucket]
obj1 = bucket1.objects[source_key]
obj2 = bucket2.objects[target_key]
obj1.copy_to(obj2)
    Example
The following Ruby script example makes a copy of an object using the #copy_from method. The copied object with a different key is saved in the same source bucket. For instructions about how to create and test a working sample, see Using the AWS SDK for Ruby Version 2 (p 568).
#!/usr/bin/env ruby

require 'rubygems'
require 'aws-sdk'

bucket_name = '*** Provide bucket name ***'
source_key  = '*** Provide source key ***'
target_key  = '*** Provide target key ***'

# Get an instance of the S3 interface.
s3 = AWS::S3.new

# Copy the object.
s3.buckets[bucket_name].objects[target_key].copy_from(source_key)

puts "Copying file #{source_key} to #{target_key}"
    Copy an Object Using the REST API
This example describes how to copy an object using REST. For more information about the REST API, go to PUT Object (Copy).
This example copies the flotsam object from the pacific bucket to the jetsam object of the atlantic bucket, preserving its metadata.
PUT /jetsam HTTP/1.1
Host: atlantic.s3.amazonaws.com
x-amz-copy-source: /pacific/flotsam
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ENoSbxYByFA0UGLZUqJN5EUnLDg=
Date: Wed, 20 Feb 2008 22:12:21 +0000

The signature was generated from the following information:

PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n

x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam
Amazon S3 returns the following response that specifies the ETag of the object and when it was last modified.

HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000
Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
   <LastModified>2008-02-20T22:13:01</LastModified>
   <ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>
    Copying Objects Using the Multipart Upload API
    Topics
    • Copy an Object Using the AWS SDK for Java Multipart Upload API (p 223)
    • Copy an Object Using the AWS SDK for NET Multipart Upload API (p 226)
    • Copy Object Using the REST Multipart Upload API (p 229)
The examples in this section show you how to copy objects greater than 5 GB using the multipart upload API. You can copy objects less than 5 GB in a single operation. For more information, see Copying Objects in a Single Operation (p 213).
    Copy an Object Using the AWS SDK for Java Multipart Upload API
The following task guides you through using the Java SDK to copy an Amazon S3 object from one source location to another, such as from one bucket to another. You can use the code demonstrated here to copy objects greater than 5 GB. For objects less than 5 GB, use the single-operation copy described in Copy an Object Using the AWS SDK for Java (p 214).
Copying Objects
1 Create an instance of the AmazonS3Client class by providing your AWS credentials.
2 Initiate a multipart copy by executing the AmazonS3Client.initiateMultipartUpload method. Create an instance of InitiateMultipartUploadRequest. You will need to provide a bucket name and a key name.
3 Save the upload ID from the response object that the AmazonS3Client.initiateMultipartUpload method returns. You will need to provide this upload ID for each subsequent multipart upload operation.
4 Copy all the parts. For each part copy, create a new instance of the CopyPartRequest class and provide the part information, including the source bucket, destination bucket, object key, upload ID, first byte of the part, last byte of the part, and the part number.
5 Save the response of the CopyPartRequest method in a list. The response includes the ETag value and the part number. You will need the part number to complete the multipart upload.
6 Repeat tasks 4 and 5 for each part.
7 Execute the AmazonS3Client.completeMultipartUpload method to complete the copy.
The following Java code sample demonstrates the preceding tasks.

// Step 1: Create instance and provide credentials.
AmazonS3Client s3Client = new AmazonS3Client(new PropertiesCredentials(
        LowLevel_LargeObjectCopy.class.getResourceAsStream(
                "AwsCredentials.properties")));

// Create lists to hold copy responses.
List<CopyPartResult> copyResponses =
    new ArrayList<CopyPartResult>();

// Step 2: Initialize.
InitiateMultipartUploadRequest initiateRequest =
    new InitiateMultipartUploadRequest(targetBucketName, targetObjectKey);

InitiateMultipartUploadResult initResult =
    s3Client.initiateMultipartUpload(initiateRequest);

// Step 3: Save upload Id.
String uploadId = initResult.getUploadId();

try {
    // Get object size.
    GetObjectMetadataRequest metadataRequest =
        new GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);

    ObjectMetadata metadataResult = s3Client.getObjectMetadata(metadataRequest);
    long objectSize = metadataResult.getContentLength(); // in bytes

    // Step 4: Copy parts.
    long partSize = 5 * (long)Math.pow(2.0, 20.0); // 5 MB
    long bytePosition = 0;
    for (int i = 1; bytePosition < objectSize; i++)
    {
        // Step 5: Save copy response.
        CopyPartRequest copyRequest = new CopyPartRequest()
           .withDestinationBucketName(targetBucketName)
           .withDestinationKey(targetObjectKey)
           .withSourceBucketName(sourceBucketName)
           .withSourceKey(sourceObjectKey)
           .withUploadId(initResult.getUploadId())
           .withFirstByte(bytePosition)
           .withLastByte(bytePosition + partSize - 1 >= objectSize ?
                   objectSize - 1 : bytePosition + partSize - 1)
           .withPartNumber(i);

        copyResponses.add(s3Client.copyPart(copyRequest));
        bytePosition += partSize;
    }

    // Step 7: Complete copy operation.
    // (GetETags is the helper method shown in the complete example that follows.)
    CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest(
            targetBucketName,
            targetObjectKey,
            initResult.getUploadId(),
            GetETags(copyResponses));

    CompleteMultipartUploadResult completeUploadResponse =
        s3Client.completeMultipartUpload(completeRequest);
} catch (Exception e) {
    System.out.println(e.getMessage());
}
    Example
The following Java code example copies an object from one Amazon S3 bucket to another. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p 564).
    import javaioIOException
    import javautilArrayList
    import javautilList
    import comamazonawsauthPropertiesCredentials
    import comamazonawsservicess3*
    import comamazonawsservicess3model*
    public class LowLevel_LargeObjectCopy {
    public static void main(String[] args) throws IOException {
    String sourceBucketName *** SourceBucketName ***
    String targetBucketName *** TargetBucketName ***
    String sourceObjectKey *** SourceObjectKey ***
    String targetObjectKey *** TargetObjectKey ***
    AmazonS3Client s3Client new AmazonS3Client(new
    PropertiesCredentials(
    LowLevel_LargeObjectCopyclassgetResourceAsStream(
    AwsCredentialsproperties)))

    List to store copy part responses
    List copyResponses
    new ArrayList()

    InitiateMultipartUploadRequest initiateRequest
    new InitiateMultipartUploadRequest(targetBucketName
    targetObjectKey)

    InitiateMultipartUploadResult initResult
    s3ClientinitiateMultipartUpload(initiateRequest)
    try {
    Get object size
    GetObjectMetadataRequest metadataRequest
    new GetObjectMetadataRequest(sourceBucketName sourceObjectKey)
    ObjectMetadata metadataResult
    s3ClientgetObjectMetadata(metadataRequest)
    long objectSize metadataResultgetContentLength() in bytes
    Copy parts
    long partSize 5 * (long)Mathpow(20 200) 5 MB
    long bytePosition 0
    for (int i 1 bytePosition < objectSize i++)
    {
    CopyPartRequest copyRequest new CopyPartRequest()
    withDestinationBucketName(targetBucketName)
    withDestinationKey(targetObjectKey)
    withSourceBucketName(sourceBucketName)
    withSourceKey(sourceObjectKey)
    withUploadId(initResultgetUploadId())
    withFirstByte(bytePosition)
    withLastByte(bytePosition + partSize 1 > objectSize
    objectSize 1 bytePosition + partSize 1)
    withPartNumber(i)
    copyResponsesadd(s3ClientcopyPart(copyRequest))
    bytePosition + partSize
    }
    CompleteMultipartUploadRequest completeRequest new
    CompleteMultipartUploadRequest(
    targetBucketName
    targetObjectKey
    initResultgetUploadId()
    GetETags(copyResponses))
    CompleteMultipartUploadResult completeUploadResponse
    s3ClientcompleteMultipartUpload(completeRequest)
    } catch (Exception e) {
    Systemoutprintln(egetMessage())
    }
    }

    Helper function that constructs ETags
    static List GetETags(List responses)
    {
    List etags new ArrayList()
    for (CopyPartResult response responses)
    {
    etagsadd(new PartETag(responsegetPartNumber()
    responsegetETag()))
    }
    return etags
    }
    }
    Copy an Object Using the AWS SDK for NET Multipart Upload API
The following task guides you through using the .NET SDK to copy an Amazon S3 object from one source location to another, such as from one bucket to another. You can use the code demonstrated here to copy objects that are greater than 5 GB. For objects less than 5 GB, use the single-operation copy described in Copy an Object Using the AWS SDK for .NET (p 215).
Copying Objects
1 Create an instance of the AmazonS3Client class by providing your AWS credentials.
2 Initiate a multipart copy by executing the AmazonS3Client.InitiateMultipartUpload method. Create an instance of the InitiateMultipartUploadRequest. You will need to provide a bucket name and key name.
3 Save the upload ID from the response object that the AmazonS3Client.InitiateMultipartUpload method returns. You will need to provide this upload ID for each subsequent multipart upload operation.
4 Copy all the parts. For each part copy, create a new instance of the CopyPartRequest class and provide the part information, including the source bucket, destination bucket, object key, upload ID, first byte of the part, last byte of the part, and the part number.
5 Save the response of the CopyPartRequest method in a list. The response includes the ETag value and the part number you will need to complete the multipart upload.
6 Repeat tasks 4 and 5 for each part.
7 Execute the AmazonS3Client.CompleteMultipartUpload method to complete the copy.
The following C# code sample demonstrates the preceding tasks.
    Step 1 Create instance and provide credentials
    IAmazonS3 s3Client new AmazonS3Client(AmazonRegionEndpointUSEast1)
    List to store upload part responses
    List uploadResponses new List()
    List copyResponses new List()
    InitiateMultipartUploadRequest initiateRequest
    new InitiateMultipartUploadRequest
    {
    BucketName targetBucket
    Key targetObjectKey
    }
    Step 2 Initialize
    InitiateMultipartUploadResponse initResponse
    s3ClientInitiateMultipartUpload(initiateRequest)
    Step 3 Save Upload Id
    String uploadId initResponseUploadId
    try
    {
    Get object size
    GetObjectMetadataRequest metadataRequest new GetObjectMetadataRequest
    {
    BucketName sourceBucket
    Key sourceObjectKey
    }
    GetObjectMetadataResponse metadataResponse
    s3ClientGetObjectMetadata(metadataRequest)
    long objectSize metadataResponseContentLength in bytes
    Copy parts
    long partSize 5 * (long)MathPow(2 20) 5 MB
    long bytePosition 0
    for (int i 1 bytePosition < objectSize i++)
    {
    CopyPartRequest copyRequest new CopyPartRequest
    {
    DestinationBucket targetBucket
    DestinationKey targetObjectKey
    SourceBucket sourceBucket
    SourceKey sourceObjectKey
    UploadId uploadId
    FirstByte bytePosition
    LastByte bytePosition + partSize 1 > objectSize
    objectSize 1 bytePosition + partSize 1
    PartNumber i
    }
    copyResponsesAdd(s3ClientCopyPart(copyRequest))
    bytePosition + partSize
    }
    CompleteMultipartUploadRequest completeRequest
    new CompleteMultipartUploadRequest
    {
    BucketName targetBucket
    Key targetObjectKey
    UploadId initResponseUploadId
    }
    completeRequestAddPartETags(copyResponses)
    CompleteMultipartUploadResponse completeUploadResponse
    s3ClientCompleteMultipartUpload(completeRequest)
    }
    catch (Exception e) {
    ConsoleWriteLine(eMessage)
    }
    Example
    The following C# code example copies an object from one Amazon S3 bucket to another For
    instructions on how to create and test a working sample see Running the Amazon S3 NET Code
    Examples (p 566)
    using System
    using SystemCollectionsGeneric
    using AmazonS3
    using AmazonS3Model
    namespace s3amazoncomdocsamples
    {
    class CopyObjectUsingMPUapi
    {
    static string sourceBucket *** Source bucket name ***
    static string targetBucket *** Target bucket name ***
    static string sourceObjectKey *** Source object key ***
    static string targetObjectKey *** Target object key ***
    static void Main(string[] args)
    {
    IAmazonS3 s3Client new
    AmazonS3Client(AmazonRegionEndpointUSEast1)
    List to store upload part responses
    List uploadResponses new
    List()
    List copyResponses new
    List()
    InitiateMultipartUploadRequest initiateRequest
    new InitiateMultipartUploadRequest
    {
    BucketName targetBucket
    Key targetObjectKey
    }
    InitiateMultipartUploadResponse initResponse
    s3ClientInitiateMultipartUpload(initiateRequest)
    String uploadId initResponseUploadId
    try
    {
    Get object size
    GetObjectMetadataRequest metadataRequest new
    GetObjectMetadataRequest
    {
    BucketName sourceBucket
    Key sourceObjectKey
    }
    GetObjectMetadataResponse metadataResponse
    s3ClientGetObjectMetadata(metadataRequest)
    long objectSize metadataResponseContentLength in bytes
    Copy parts
    long partSize 5 * (long)MathPow(2 20) 5 MB
    long bytePosition 0
    for (int i 1 bytePosition < objectSize i++)
    {
    CopyPartRequest copyRequest new CopyPartRequest
    {
    DestinationBucket targetBucket
    DestinationKey targetObjectKey
    SourceBucket sourceBucket
    SourceKey sourceObjectKey
    UploadId uploadId
    FirstByte bytePosition
    LastByte bytePosition + partSize 1 >
    objectSize objectSize 1 bytePosition + partSize 1
    PartNumber i
    }
    copyResponsesAdd(s3ClientCopyPart(copyRequest))
    bytePosition + partSize
    }
    CompleteMultipartUploadRequest completeRequest
    new CompleteMultipartUploadRequest
    {
    BucketName targetBucket
    Key targetObjectKey
    UploadId initResponseUploadId
    }
    completeRequestAddPartETags(copyResponses)
    CompleteMultipartUploadResponse completeUploadResponse
    s3ClientCompleteMultipartUpload(completeRequest)
    }
    catch (Exception e)
    {
    ConsoleWriteLine(eMessage)
    }
    }
    Helper function that constructs ETags
    static List GetETags(List responses)
    {
    List etags new List()
    foreach (CopyPartResponse response in responses)
    {
    etagsAdd(new PartETag(responsePartNumber responseETag))
    }
    return etags
    }
    }
    }
    Copy Object Using the REST Multipart Upload API
The following sections in the Amazon Simple Storage Service API Reference describe the REST API for multipart upload. For copying an existing object, you use the Upload Part (Copy) API and specify the source object by adding the x-amz-copy-source request header in your request.
    • Initiate Multipart Upload
    • Upload Part
    • Upload Part (Copy)
    • Complete Multipart Upload
    • Abort Multipart Upload
    • List Parts
    • List Multipart Uploads
You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For more information about the SDKs, see API Support for Multipart Upload (p 169).
    Listing Object Keys
Keys can be listed by prefix. By choosing a common prefix for the names of related keys and marking these keys with a special character that delimits hierarchy, you can use the list operation to select and browse keys hierarchically. This is similar to how files are stored in directories within a file system.
Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that contains a key for every English word. You might make a call to list all the keys in that bucket that start with the letter "q". List results are always returned in UTF-8 binary order.
Both the SOAP and REST list operations return an XML document that contains the names of matching keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common prefix for the purposes of listing. This enables applications to organize and browse their keys hierarchically, much like how you would organize your files into directories in a file system. For example, to extend the dictionary bucket to contain more than just English words, you might form keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You could also browse the top-level list of available languages without having to iterate through all the lexicographically intervening keys.
For more information on this aspect of listing, see Listing Keys Hierarchically Using a Prefix and Delimiter (p 230).
List Implementation Efficiency
List performance is not substantially affected by the total number of keys in your bucket, nor by the presence or absence of the prefix, marker, maxkeys, or delimiter arguments. For information on improving overall bucket performance, including the list operation, see Request Rate and Performance Considerations (p 518).
Iterating Through Multi-Page Results
As buckets can contain a virtually unlimited number of keys, the complete results of a list query can be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them into multiple responses. Each list keys response returns a page of up to 1,000 keys with an indicator showing whether the response is truncated. You send a series of list keys requests until you have received all the keys. AWS SDK wrapper libraries provide the same pagination.
The following Java and .NET SDK examples show how to use pagination when listing keys in a bucket:
• Listing Keys Using the AWS SDK for Java (p 231)
• Listing Keys Using the AWS SDK for .NET (p 233)
    Related Resources
    • Using the AWS SDKs CLI and Explorers (p 560)
    Listing Keys Hierarchically Using a Prefix and Delimiter
The prefix and delimiter parameters limit the kind of results returned by a list operation. Prefix limits results to only those keys that begin with the specified prefix, and delimiter causes list to roll up all keys that share a common prefix into a single summary list result.
The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any of your anticipated key names. Next, construct your key names by concatenating all containing levels of the hierarchy, separating each level with the delimiter.
For example, if you were storing information about cities, you might naturally organize them by continent, then by country, then by province or state. Because these names don't usually contain punctuation, you might select slash (/) as the delimiter. The following examples use a slash (/) delimiter.
• Europe/France/Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle
If you stored data for every city in the world in this manner, it would become awkward to manage a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/' and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set Delimiter='/' and Prefix='North America/Canada/'.
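The following minimal sketch (not one of this guide's numbered examples) shows how these two parameters might be set with the AWS SDK for Java; the bucket name and the existing AmazonS3 client named s3client are assumptions for illustration only.

// Minimal sketch: list one level of the hierarchy using Prefix and Delimiter.
// "ExampleCityDataBucket" and s3client are assumed placeholders.
ListObjectsV2Request listRequest = new ListObjectsV2Request()
    .withBucketName("ExampleCityDataBucket")
    .withPrefix("North America/USA/")
    .withDelimiter("/");

ListObjectsV2Result listing = s3client.listObjectsV2(listRequest);

// Keys that sit directly under the prefix (none in this example layout).
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println("Key: " + summary.getKey());
}

// One rolled-up CommonPrefixes entry per state, e.g. "North America/USA/Washington/".
for (String commonPrefix : listing.getCommonPrefixes()) {
    System.out.println("Common prefix: " + commonPrefix);
}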
A list request with a delimiter lets you browse your hierarchy at just one level, skipping over and summarizing the (possibly millions of) keys nested at deeper levels. For example, assume you have a bucket (ExampleBucket) with the following keys:
sample.jpg
photos/2006/January/sample.jpg
photos/2006/February/sample2.jpg
photos/2006/February/sample3.jpg
photos/2006/February/sample4.jpg
The sample bucket has only the sample.jpg object at the root level. To list only the root level objects in the bucket, you send a GET request on the bucket with the slash (/) delimiter character. In response, Amazon S3 returns the sample.jpg object key because it does not contain the delimiter character. All other keys contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes element with prefix value photos/, which is a substring from the beginning of these keys to the first occurrence of the specified delimiter.

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>ExampleBucket</Name>
  <Prefix></Prefix>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>sample.jpg</Key>
    <LastModified>2011-07-24T19:39:30.000Z</LastModified>
    <ETag>"d1a7fb5eab1c16cb4f7cf341cf188c3d"</ETag>
    <Size>6</Size>
    <Owner>
      <ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
      <DisplayName>displayname</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
  <CommonPrefixes>
    <Prefix>photos/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
    Listing Keys Using the AWS SDK for Java
The following Java code example lists object keys in a bucket. If the response is truncated (<IsTruncated> is true in the response), the code loop continues. Each subsequent request specifies the continuation-token in the request and sets its value to the <NextContinuationToken> returned by Amazon S3 in the previous response.
    Example
For instructions on how to create and test a working sample, see Testing the Java Code Examples (p 564).

import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListKeys {
    private static String bucketName = "***bucket name***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Listing objects");
            final ListObjectsV2Request req =
                new ListObjectsV2Request().withBucketName(bucketName).withMaxKeys(2);
            ListObjectsV2Result result;
            do {
                result = s3client.listObjectsV2(req);

                for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
                    System.out.println(" - " + objectSummary.getKey() + "  " +
                            "(size = " + objectSummary.getSize() + ")");
                }
                System.out.println("Next Continuation Token : " +
                        result.getNextContinuationToken());
                req.setContinuationToken(result.getNextContinuationToken());
            } while (result.isTruncated() == true);

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, " +
                    "which means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, " +
                    "which means the client encountered " +
                    "an internal error while trying to communicate " +
                    "with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
    Listing Keys Using the AWS SDK for NET
The following C# code example lists object keys in a bucket. If the response is truncated (<IsTruncated> is true in the response), the code loop continues. Each subsequent request specifies the continuation-token in the request and sets its value to the <NextContinuationToken> returned by Amazon S3 in the previous response.
    Example
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p 566).
    using System
    using AmazonS3
    using AmazonS3Model

    namespace s3amazoncomdocsamples
    {
    class ListObjects
    {
    static string bucketName ***bucket name***
    static IAmazonS3 client

    public static void Main(string[] args)
    {
    using (client new
    AmazonS3Client(AmazonRegionEndpointUSEast1))
    {
    ConsoleWriteLine(Listing objects stored in a bucket)
    ListingObjects()
    }

    ConsoleWriteLine(Press any key to continue)
    ConsoleReadKey()
    }

    static void ListingObjects()
    {
    try
    {
    ListObjectsV2Request request new ListObjectsV2Request
    {
    BucketName bucketName
    MaxKeys 10
    }
    ListObjectsV2Response response
    do
    {
    response clientListObjectsV2(request)

    Process response
    foreach (S3Object entry in responseS3Objects)
    {
    ConsoleWriteLine(key {0} size {1}
    entryKey entrySize)
    }
    ConsoleWriteLine(Next Continuation Token {0}
    responseNextContinuationToken)
    requestContinuationToken
    responseNextContinuationToken
    } while (responseIsTruncated true)
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
    if (amazonS3ExceptionErrorCode null &&

    (amazonS3ExceptionErrorCodeEquals(InvalidAccessKeyId)
    ||
    amazonS3ExceptionErrorCodeEquals(InvalidSecurity)))
    {
    ConsoleWriteLine(Check the provided AWS
    Credentials)
    ConsoleWriteLine(
    To sign up for service go to httpawsamazoncom
    s3)
    }
    else
    {
    ConsoleWriteLine(
    Error occurred Message'{0}' when listing objects
    amazonS3ExceptionMessage)
    }
    }
    }
    }
    }

    Listing Keys Using the AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to list the object keys contained in an Amazon S3 bucket.
Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.
To list the object keys contained in a bucket using the AWS SDK for PHP, you first must list the objects contained in the bucket and then extract the key from each of the listed objects. When listing objects in a bucket, you have the option of using the low-level Aws\S3\S3Client::listObjects() method or the high-level Aws\S3\Iterator\ListObjects iterator.
The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects in the bucket, your response will be truncated and you will need to send another listObjects() request to retrieve the next set of 1,000 objects.
You can use the high-level ListObjects iterator to make your task of listing the objects contained in a bucket a bit easier. To use the ListObjects iterator to create a list of objects, you execute the Amazon S3 client getIterator() method that is inherited from the Guzzle\Service\Client class, with the ListObjects command as the first argument and an array to contain the returned objects from the specified bucket as the second argument. When used as a ListObjects iterator, the getIterator() method returns all the objects contained in the specified bucket. There is no 1,000 object limit, so you don't need to worry whether the response is truncated or not.
The following tasks guide you through using the PHP Amazon S3 client methods to list the objects contained in a bucket, from which you can list the object keys.
    Listing Object Keys
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2 Execute the high-level Amazon S3 client getIterator() method with the ListObjects command as the first argument and an array to contain the returned objects from the specified bucket as the second argument.
Or, you can execute the low-level Amazon S3 client listObjects() method with an array to contain the returned objects from the specified bucket as the argument.
3 Extract the object key from each object in the list of returned objects.
The following PHP code sample demonstrates how to list the objects contained in a bucket, from which you can list the object keys.
use Aws\S3\S3Client;

// Instantiate the client.
$s3 = S3Client::factory();

$bucket = '*** Bucket Name ***';

// Use the high-level iterators (returns ALL of your objects).
$objects = $s3->getIterator('ListObjects', array('Bucket' => $bucket));

echo "Keys retrieved\n";
foreach ($objects as $object) {
    echo $object['Key'] . "\n";
}

// Use the plain API (returns ONLY up to 1000 of your objects).
$result = $s3->listObjects(array('Bucket' => $bucket));

echo "Keys retrieved\n";
foreach ($result['Contents'] as $object) {
    echo $object['Key'] . "\n";
}
Example of Listing Object Keys
The following PHP example demonstrates how to list the keys from a specified bucket. It shows how to use the high-level getIterator() method to list the objects in a bucket and then how to extract the key from each of the objects in the list. It also shows how to use the low-level listObjects() method to list the objects in a bucket and then how to extract the key from each of the objects in the list returned. For information about running the PHP examples in this guide, go to Running PHP Examples (p. 567).
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Use the high-level iterators (returns ALL of your objects).
try {
    $objects = $s3->getIterator('ListObjects', array(
        'Bucket' => $bucket
    ));

    echo "Keys retrieved\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}

// Use the plain API (returns ONLY up to 1000 of your objects).
try {
    $result = $s3->listObjects(array('Bucket' => $bucket));

    echo "Keys retrieved\n";
    foreach ($result['Contents'] as $object) {
        echo $object['Key'] . "\n";
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\Iterator\ListObjects
• AWS SDK for PHP for Amazon S3 Guzzle\Service\Client Class
• AWS SDK for PHP for Amazon S3 Guzzle\Service\Client::getIterator() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listObjects() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Listing Keys Using the REST API
You can use the AWS SDK to list the object keys in a bucket. However, if your application requires it, you can send REST requests directly. You can send a GET request to return some or all of the objects in a bucket, or you can use selection criteria to return a subset of the objects in a bucket. For more information, go to GET Bucket (List Objects) Version 2.
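The following Java sketch illustrates the shape of such a direct request. It is a minimal example, assuming a hypothetical bucket named examplebucket whose contents can be listed anonymously; for private buckets the request must also carry an AWS Signature Version 4 authorization, which this sketch omits.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListKeysRestSketch {
    public static void main(String[] args) throws Exception {
        String bucketName = "examplebucket"; // hypothetical bucket name
        // list-type=2 selects version 2 of the List Objects API;
        // max-keys limits the page size, continuation-token fetches later pages.
        URL url = new URL("https://" + bucketName
                + ".s3.amazonaws.com/?list-type=2&max-keys=10");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Raw XML response containing <Key> elements for each object.
                System.out.println(line);
            }
        }
    }
}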
Deleting Objects
Topics
• Deleting Objects from a Version-Enabled Bucket (p. 237)
• Deleting Objects from an MFA-Enabled Bucket (p. 238)
• Related Resources (p. 238)
• Deleting One Object Per Request (p. 238)
• Deleting Multiple Objects Per Request (p. 246)
You can delete one or more objects directly from Amazon S3. You have the following options when deleting an object:
• Delete a single object—Amazon S3 provides the DELETE API that you can use to delete one object in a single HTTP request.
• Delete multiple objects—Amazon S3 also provides the Multi-Object Delete API that you can use to delete up to 1,000 objects in a single HTTP request.
When deleting objects from a bucket that is not version-enabled, you provide only the object key name; however, when deleting objects from a version-enabled bucket, you can optionally provide the version ID of the object to delete a specific version of the object. A brief sketch contrasting the two options follows.
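The following condensed sketch contrasts the two options using the AWS SDK for Java, which is the approach shown in the sections that follow. The bucket and key names are placeholders.

import java.util.Arrays;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

public class DeleteOptionsSketch {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Option 1: delete a single object (one HTTP request).
        s3Client.deleteObject("examplebucket", "photos/photo1.jpg");

        // Option 2: delete up to 1,000 objects in one Multi-Object Delete request.
        DeleteObjectsRequest multiDelete = new DeleteObjectsRequest("examplebucket")
                .withKeys(Arrays.asList(
                        new KeyVersion("photos/photo2.jpg"),
                        new KeyVersion("photos/photo3.jpg")));
        s3Client.deleteObjects(multiDelete);
    }
}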
Deleting Objects from a Version-Enabled Bucket
If your bucket is version-enabled, then multiple versions of the same object can exist in the bucket. When working with version-enabled buckets, the delete API enables the following options:
• Specify a non-versioned delete request—That is, you specify only the object's key, and not the version ID. In this case, Amazon S3 creates a delete marker and returns its version ID in the response. This makes your object disappear from the bucket. For information about object versioning and the delete marker concept, see Object Versioning (p. 106).
• Specify a versioned delete request—That is, you specify both the key and also a version ID. In this case the following two outcomes are possible:
• If the version ID maps to a specific object version, then Amazon S3 deletes the specific version of the object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker. This makes the object reappear in your bucket.
A minimal sketch of both request types follows this list.
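The sketch below, in the AWS SDK for Java, assumes a version-enabled bucket; the bucket name, key, and version ID are placeholders.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteVersionRequest;

public class VersionedDeleteSketch {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Non-versioned delete request: only the key is given, so Amazon S3
        // inserts a delete marker and the object disappears from listings.
        s3Client.deleteObject("examplebucket", "photos/photo1.jpg");

        // Versioned delete request: key plus version ID. If the ID names an
        // object version, that version is permanently removed; if it names the
        // delete marker, the marker is removed and the object reappears.
        s3Client.deleteVersion(new DeleteVersionRequest(
                "examplebucket", "photos/photo1.jpg", "*** version ID ***"));
    }
}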
Deleting Objects from an MFA-Enabled Bucket
When deleting objects from a Multi-Factor Authentication (MFA)-enabled bucket, note the following:
• If you provide an invalid MFA token, the request always fails.
• If you have an MFA-enabled bucket and you make a versioned delete request (you provide an object key and version ID), the request will fail if you don't provide a valid MFA token. In addition, when using the Multi-Object Delete API on an MFA-enabled bucket, if any of the deletes is a versioned delete request (that is, you specify object key and version ID), the entire request will fail if you don't provide an MFA token.
On the other hand, in the following cases the request succeeds:
• If you have an MFA-enabled bucket, you make a non-versioned delete request (you are not deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.
For information on MFA delete, see MFA Delete (p. 424). A short sketch of a versioned delete that carries an MFA token follows.
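The following is a minimal sketch, assuming the AWS SDK for Java's MultiFactorAuthentication class and the MFA-aware DeleteVersionRequest constructor. The serial number, token, bucket, key, and version ID values are placeholders; without a valid token the request fails, as noted above.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.MultiFactorAuthentication;

public class MfaDeleteSketch {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        MultiFactorAuthentication mfa = new MultiFactorAuthentication(
                "*** device serial number ***",
                "*** current token ***");

        // Versioned delete (key + version ID), so the MFA token is required.
        s3Client.deleteVersion(new DeleteVersionRequest(
                "examplebucket", "photos/photo1.jpg", "*** version ID ***", mfa));
    }
}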
Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 560)
Deleting One Object Per Request
Topics
• Deleting an Object Using the AWS SDK for Java (p. 238)
• Deleting an Object Using the AWS SDK for .NET (p. 242)
• Deleting an Object Using the AWS SDK for PHP (p. 245)
• Deleting an Object Using the REST API (p. 246)
Amazon S3 provides the DELETE API (see DELETE Object) for you to delete one object per request. To learn more about object deletion, see Deleting Objects (p. 237).
You can use the REST API directly, or use the wrapper libraries provided by the AWS SDKs, which can simplify your application development.
Deleting an Object Using the AWS SDK for Java
The following tasks guide you through using the AWS SDK for Java classes to delete an object.
Deleting an Object (Non-Versioned Bucket)
1 Create an instance of the AmazonS3Client class.
2 Execute one of the AmazonS3Client.deleteObject methods.
You can provide a bucket name and an object name as parameters, or provide the same information in a DeleteObjectRequest object and pass the object as a parameter.
If you have not enabled versioning on the bucket, the operation deletes the object. If you have enabled versioning, the operation adds a delete marker. For more information, see Deleting One Object Per Request (p. 238).
The following Java sample demonstrates the preceding steps. The sample uses the DeleteObjectRequest class to provide a bucket name and an object key.
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.deleteObject(new DeleteObjectRequest(bucketName, keyName));
Deleting a Specific Version of an Object (Version-Enabled Bucket)
1 Create an instance of the AmazonS3Client class.
2 Execute one of the AmazonS3Client.deleteVersion methods.
You can provide a bucket name and an object key directly as parameters, or use the DeleteVersionRequest to provide the same information.
The following Java sample demonstrates the preceding steps. The sample uses the DeleteVersionRequest class to provide a bucket name, an object key, and a version ID.
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId));
Example 1: Deleting an Object (Non-Versioned Bucket)
The following Java example deletes an object from a bucket. If you have not enabled versioning on the bucket, Amazon S3 deletes the object. If you have enabled versioning, Amazon S3 adds a delete marker and the object is not deleted. For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteObjectRequest;

public class DeleteAnObjectNonVersionedBucket {

    private static String bucketName = "*** Provide a Bucket Name ***";
    private static String keyName    = "*** Provide a Key Name ****";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            s3Client.deleteObject(new DeleteObjectRequest(bucketName, keyName));
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Example 2: Deleting an Object (Versioned Bucket)
The following Java example deletes a specific version of an object from a versioned bucket. The deleteVersion request removes the specific object version from the bucket.
To test the sample, you must provide a bucket name. The code sample performs the following tasks:
1. Enable versioning on the bucket.
2. Add a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly added object.
3. Delete the sample object using the deleteVersion method. The DeleteVersionRequest class specifies both an object key name and a version ID.
For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Random;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class DeleteAnObjectVersionEnabledBucket {

    static String bucketName = "*** Provide a Bucket Name ***";
    static String keyName    = "*** Provide a Key Name ****";
    static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {
        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            // Make the bucket version-enabled.
            enableVersioningOnBucket(s3Client, bucketName);

            // Add a sample object.
            String versionId = putAnObject(keyName);

            s3Client.deleteVersion(
                    new DeleteVersionRequest(
                            bucketName,
                            keyName,
                            versionId));

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    static void enableVersioningOnBucket(AmazonS3Client s3Client,
            String bucketName) {
        BucketVersioningConfiguration config = new BucketVersioningConfiguration()
                .withStatus(BucketVersioningConfiguration.ENABLED);
        SetBucketVersioningConfigurationRequest setBucketVersioningConfigurationRequest =
                new SetBucketVersioningConfigurationRequest(bucketName, config);

        s3Client.setBucketVersioningConfiguration(setBucketVersioningConfigurationRequest);
    }

    static String putAnObject(String keyName) {
        String content = "This is the content body";
        String key = "ObjectToDelete" + new Random().nextInt();
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setHeader("Subject", "Content-As-Object");
        metadata.setHeader("Content-Length", content.length());
        PutObjectRequest request = new PutObjectRequest(bucketName, key,
                new ByteArrayInputStream(content.getBytes()), metadata)
                .withCannedAcl(CannedAccessControlList.AuthenticatedRead);
        PutObjectResult response = s3Client.putObject(request);
        return response.getVersionId();
    }
}
Deleting an Object Using the AWS SDK for .NET
You can delete an object from a bucket. If you have versioning enabled on the bucket, you can also delete a specific version of an object.
The following tasks guide you through using the .NET classes to delete an object.
Deleting an Object (Non-Versioned Bucket)
1 Create an instance of the AmazonS3Client class by providing your AWS credentials.
2 Execute the AmazonS3.DeleteObject method by providing a bucket name and an object key in an instance of DeleteObjectRequest.
If you have not enabled versioning on the bucket, the operation deletes the object. If you have enabled versioning, the operation adds a delete marker. For more information, see Deleting One Object Per Request (p. 238).
The following C# code sample demonstrates the preceding steps.

static IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

DeleteObjectRequest deleteObjectRequest =
    new DeleteObjectRequest
    {
        BucketName = bucketName,
        Key = keyName
    };

using (client = Amazon.AWSClientFactory.CreateAmazonS3Client(
           accessKeyID, secretAccessKeyID))
{
    client.DeleteObject(deleteObjectRequest);
}
Deleting a Specific Version of an Object (Version-Enabled Bucket)
1 Create an instance of the AmazonS3Client class by providing your AWS credentials.
2 Execute the AmazonS3.DeleteObject method by providing a bucket name, an object key name, and an object version ID in an instance of DeleteObjectRequest.
The DeleteObject method deletes the specific version of the object.
The following C# code sample demonstrates the preceding steps.
IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

DeleteObjectRequest deleteObjectRequest = new DeleteObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    VersionId = versionID
};

using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
{
    client.DeleteObject(deleteObjectRequest);
    Console.WriteLine("Deleting an object");
}
Example 1: Deleting an Object (Non-Versioned Bucket)
The following C# code example deletes an object from a bucket. It does not provide a version ID in the delete request. If you have not enabled versioning on the bucket, Amazon S3 deletes the object. If you have enabled versioning, Amazon S3 adds a delete marker and the object is not deleted. For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class DeleteObjectNonVersionedBucket
    {
        static string bucketName = "*** Provide a bucket name ***";
        static string keyName    = "*** Provide a key name ****";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                DeleteObjectRequest deleteObjectRequest = new DeleteObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName
                };

                try
                {
                    client.DeleteObject(deleteObjectRequest);
                    Console.WriteLine("Deleting an object");
                }
                catch (AmazonS3Exception s3Exception)
                {
                    Console.WriteLine(s3Exception.Message,
                        s3Exception.InnerException);
                }
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }
    }
}
Example 2: Deleting an Object (Versioned Bucket)
The following C# code example deletes an object from a versioned bucket. The DeleteObjectRequest instance specifies an object key name and a version ID. The DeleteObject method removes the specific object version from the bucket.
To test the sample, you must provide a bucket name. The code sample performs the following tasks:
1. Enable versioning on the bucket.
2. Add a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly added object. You can also obtain version IDs of an object by sending a ListVersions request:
   var listResponse = client.ListVersions(new ListVersionsRequest { BucketName = bucketName, Prefix = keyName });
3. Delete the sample object using the DeleteObject method. The DeleteObjectRequest class specifies both an object key name and a version ID.
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class DeleteObjectVersion
    {
        static string bucketName = "*** Provide a Bucket Name ***";
        static string keyName    = "*** Provide a Key Name ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                try
                {
                    // Make the bucket version-enabled.
                    EnableVersioningOnBucket(bucketName);

                    // Add a sample object.
                    string versionID = PutAnObject(keyName);

                    // Delete the object by specifying an object key and a version ID.
                    DeleteObjectRequest request = new DeleteObjectRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        VersionId = versionID
                    };
                    Console.WriteLine("Deleting an object");
                    client.DeleteObject(request);
                }
                catch (AmazonS3Exception s3Exception)
                {
                    Console.WriteLine(s3Exception.Message,
                        s3Exception.InnerException);
                }
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableVersioningOnBucket(string bucketName)
        {
            PutBucketVersioningRequest setBucketVersioningRequest = new PutBucketVersioningRequest
            {
                BucketName = bucketName,
                VersioningConfig = new S3BucketVersioningConfig { Status = VersionStatus.Enabled }
            };
            client.PutBucketVersioning(setBucketVersioningRequest);
        }

        static string PutAnObject(string objectKey)
        {
            PutObjectRequest request = new PutObjectRequest
            {
                BucketName = bucketName,
                Key = objectKey,
                ContentBody = "This is the content body"
            };
            PutObjectResponse response = client.PutObject(request);
            return response.VersionId;
        }
    }
}
Deleting an Object Using the AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to delete an object from a non-versioned bucket. For information on deleting an object from a versioned bucket, see Deleting an Object Using the REST API (p. 246).
Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.
Deleting One Object (Non-Versioned Bucket)
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2 Execute the Aws\S3\S3Client::deleteObject() method. You must provide a bucket name and a key name in the array parameter's required keys, Bucket and Key.
If you have not enabled versioning on the bucket, the operation deletes the object. If you have enabled versioning, the operation adds a delete marker. For more information, see Deleting Objects (p. 237).
The following PHP code sample demonstrates how to delete an object from an Amazon S3 bucket using the deleteObject() method.

use Aws\S3\S3Client;

$s3 = S3Client::factory();

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$result = $s3->deleteObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname
));
Example: Deleting an Object from a Non-Versioned Bucket
The following PHP code example deletes an object from a bucket. It does not provide a version ID in the delete request. If you have not enabled versioning on the bucket, Amazon S3 deletes the object. If you have enabled versioning, Amazon S3 adds a delete marker and the object is not deleted. For information about running the PHP examples in this guide, go to Running PHP Examples (p. 567). For information on deleting an object from a versioned bucket, see Deleting an Object Using the REST API (p. 246).
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = S3Client::factory();

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$result = $s3->deleteObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname
));
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::deleteObject() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Deleting an Object Using the REST API
You can use the AWS SDKs to delete an object. However, if your application requires it, you can send REST requests directly. For more information, go to DELETE Object in the Amazon Simple Storage Service API Reference.
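One way to issue the DELETE request without hand-signing it is to pre-sign the URL with the SDK and then send it over plain HTTP. The following Java sketch illustrates this; the bucket and key names are placeholders, and the pre-signed URL uses the SDK's default expiration.

import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class DeleteObjectRestSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Pre-sign a DELETE Object URL for the placeholder bucket and key.
        URL presignedUrl = s3Client.generatePresignedUrl(
                new GeneratePresignedUrlRequest("examplebucket", "photos/photo1.jpg")
                        .withMethod(HttpMethod.DELETE));

        // Send the DELETE request directly; a successful delete returns 204.
        HttpURLConnection conn = (HttpURLConnection) presignedUrl.openConnection();
        conn.setRequestMethod("DELETE");
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}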
Deleting Multiple Objects Per Request
Topics
• Deleting Multiple Objects Using the AWS SDK for Java (p. 247)
• Deleting Multiple Objects Using the AWS SDK for .NET (p. 251)
• Deleting Multiple Objects Using the AWS SDK for PHP (p. 255)
• Deleting Multiple Objects Using the REST API (p. 259)
Amazon S3 provides the Multi-Object Delete API (see Delete - Multi-Object Delete) that enables you to delete multiple objects in a single request. The API supports two modes for the response: verbose and quiet. By default, the operation uses verbose mode, in which the response includes the result of the deletion of each key that was specified in your request. In quiet mode, the response includes only keys for which the delete operation encountered an error.
If all keys were successfully deleted when using quiet mode, Amazon S3 returns an empty response.
To learn more about object deletion, see Deleting Objects (p. 237).
You can use the REST API directly, or use the AWS SDKs. A short sketch of requesting quiet mode with the SDK for Java follows.
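The sketch below uses the withQuiet() setter that also appears in the examples that follow; the bucket name and keys are placeholders.

import java.util.Arrays;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

public class QuietModeDeleteSketch {
    public static void main(String[] args) {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        DeleteObjectsRequest request = new DeleteObjectsRequest("examplebucket")
                .withKeys(Arrays.asList(new KeyVersion("key1"), new KeyVersion("key2")))
                .withQuiet(true); // report only keys whose deletion failed

        DeleteObjectsResult result = s3Client.deleteObjects(request);

        // In quiet mode this list is empty when every delete succeeds.
        System.out.println("Keys reported in response: "
                + result.getDeletedObjects().size());
    }
}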
Deleting Multiple Objects Using the AWS SDK for Java
The following tasks guide you through using the AWS SDK for Java classes to delete multiple objects in a single HTTP request.
Deleting Multiple Objects (Non-Versioned Bucket)
1 Create an instance of the AmazonS3Client class.
2 Create an instance of the DeleteObjectsRequest class and provide a list of object keys you want to delete.
3 Execute the AmazonS3Client.deleteObjects method.
The following Java code sample demonstrates the preceding steps.

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest(bucketName);

List<KeyVersion> keys = new ArrayList<KeyVersion>();
keys.add(new KeyVersion(keyName1));
keys.add(new KeyVersion(keyName2));
keys.add(new KeyVersion(keyName3));

multiObjectDeleteRequest.setKeys(keys);

try {
    DeleteObjectsResult delObjRes = s3Client.deleteObjects(multiObjectDeleteRequest);
    System.out.format("Successfully deleted all the %s items.\n",
            delObjRes.getDeletedObjects().size());

} catch (MultiObjectDeleteException e) {
    // Process exception.
}

In the event of an exception, you can review the MultiObjectDeleteException to determine which objects failed to delete and why, as shown in the following Java example.

System.out.format("%s \n", e.getMessage());
System.out.format("No. of objects successfully deleted = %s\n",
        e.getDeletedObjects().size());
System.out.format("No. of objects failed to delete = %s\n",
        e.getErrors().size());
System.out.format("Printing error data...\n");
for (DeleteError deleteError : e.getErrors()) {
    System.out.format("Object Key: %s\t%s\t%s\n",
            deleteError.getKey(), deleteError.getCode(), deleteError.getMessage());
}
The following tasks guide you through deleting objects from a version-enabled bucket.
Deleting Multiple Objects (Version-Enabled Bucket)
1 Create an instance of the AmazonS3Client class.
2 Create an instance of the DeleteObjectsRequest class and provide a list of object keys and, optionally, the version IDs of the objects that you want to delete.
If you specify the version ID of the object that you want to delete, Amazon S3 deletes the specific object version. If you don't specify the version ID of the object that you want to delete, Amazon S3 adds a delete marker. For more information, see Deleting One Object Per Request (p. 238).
3 Execute the AmazonS3Client.deleteObjects method.
The following Java code sample demonstrates the preceding steps.

List<KeyVersion> keys = new ArrayList<KeyVersion>();
// Provide a list of object keys and versions.

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest(bucketName)
        .withKeys(keys);

try {
    DeleteObjectsResult delObjRes = s3Client.deleteObjects(multiObjectDeleteRequest);
    System.out.format("Successfully deleted all the %s items.\n",
            delObjRes.getDeletedObjects().size());

} catch (MultiObjectDeleteException e) {
    // Process exception.
}
Example 1: Multi-Object Delete (Non-Versioned Bucket)
The following Java code example uses the Multi-Object Delete API to delete objects from a non-versioned bucket. The example first uploads the sample objects to the bucket and then uses the deleteObjects method to delete the objects in a single request.
For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
    import javaioByteArrayInputStream
    import javaioIOException
    import javautilArrayList
    import javautilList
    import javautilRandom
    import comamazonawsAmazonClientException
    import comamazonawsAmazonServiceException
    import comamazonawsauthprofileProfileCredentialsProvider
    import comamazonawsservicess3AmazonS3Client
    import comamazonawsservicess3modelCannedAccessControlList
    import comamazonawsservicess3modelDeleteObjectsRequest
    import comamazonawsservicess3modelDeleteObjectsRequestKeyVersion
    import comamazonawsservicess3modelDeleteObjectsResult
    import comamazonawsservicess3modelMultiObjectDeleteException
    import
    comamazonawsservicess3modelMultiObjectDeleteExceptionDeleteError
    import comamazonawsservicess3modelObjectMetadata
    import comamazonawsservicess3modelPutObjectRequest
    import comamazonawsservicess3modelPutObjectResult
    public class DeleteMultipleObjectsNonVersionedBucket {
    static String bucketName *** Provide a bucket name ***
    static AmazonS3Client s3Client
    public static void main(String[] args) throws IOException {
    try {
    s3Client new AmazonS3Client(new ProfileCredentialsProvider())
    Upload sample objectsBecause the bucket is not version
    enabled
    the KeyVersions list returned will have null values for
    version IDs
    List keysAndVersions1 putObjects(3)
    Delete specific object versions
    multiObjectNonVersionedDelete(keysAndVersions1)
    } catch (AmazonServiceException ase) {
    Systemoutprintln(Caught an AmazonServiceException)
    Systemoutprintln(Error Message + asegetMessage())
    Systemoutprintln(HTTP Status Code + asegetStatusCode())
    Systemoutprintln(AWS Error Code + asegetErrorCode())
    Systemoutprintln(Error Type + asegetErrorType())
    Systemoutprintln(Request ID + asegetRequestId())
    } catch (AmazonClientException ace) {
    Systemoutprintln(Caught an AmazonClientException)
    Systemoutprintln(Error Message + acegetMessage())
    }
    }
    static List putObjects(int number) {
    List keys new ArrayList()
    String content This is the content body
    for (int i 0 i < number i++) {
    String key ObjectToDelete + new Random()nextInt()
    ObjectMetadata metadata new ObjectMetadata()
    metadatasetHeader(Subject ContentAsObject)
    metadatasetHeader(ContentLength (long)contentlength())
    PutObjectRequest request new PutObjectRequest(bucketName key
    new ByteArrayInputStream(contentgetBytes()) metadata)

    withCannedAcl(CannedAccessControlListAuthenticatedRead)
    PutObjectResult response s3ClientputObject(request)
    KeyVersion keyVersion new KeyVersion(key
    responsegetVersionId())
    keysadd(keyVersion)
    }
    return keys
    }
    static void multiObjectNonVersionedDelete(List keys) {
    Multiobject delete by specifying only keys (no version ID)
    DeleteObjectsRequest multiObjectDeleteRequest new
    DeleteObjectsRequest(
    bucketName)withQuiet(false)
    Create request that include only object key names
    List justKeys new ArrayList()
    for (KeyVersion key keys) {
    justKeysadd(new KeyVersion(keygetKey()))
    }
    multiObjectDeleteRequestsetKeys(justKeys)
    Execute DeleteObjects Amazon S3 add delete marker for each
    object
    deletion The objects no disappear from your bucket (verify)
    DeleteObjectsResult delObjRes null
    try {
    delObjRes s3ClientdeleteObjects(multiObjectDeleteRequest)
    Systemoutformat(Successfully deleted all the s items\n
    delObjResgetDeletedObjects()size())
    } catch (MultiObjectDeleteException mode) {
    printDeleteResults(mode)
    }
    }
    static void printDeleteResults(MultiObjectDeleteException mode) {
    Systemoutformat(s \n modegetMessage())
    Systemoutformat(No of objects successfully deleted s\n
    modegetDeletedObjects()size())
    Systemoutformat(No of objects failed to delete s\n
    modegetErrors()size())
    Systemoutformat(Printing error data\n)
    for (DeleteError deleteError modegetErrors()){
    Systemoutformat(Object Key s\ts\ts\n
    deleteErrorgetKey() deleteErrorgetCode()
    deleteErrorgetMessage())
    }
    }
    }
Example 2: Multi-Object Delete (Version-Enabled Bucket)
The following Java code example uses the Multi-Object Delete API to delete objects from a version-enabled bucket.
Before you can test the sample, you must create a sample bucket and provide the bucket name in the example. You can use the AWS Management Console to create a bucket.
The example performs the following actions:
1. Enable versioning on the bucket.
2. Perform a versioned delete.
The example first uploads the sample objects. In response, Amazon S3 returns the version IDs for each sample object that you uploaded. The example then deletes these objects using the Multi-Object Delete API. In the request, it specifies both the object keys and the version IDs (that is, versioned delete).
3. Perform a non-versioned delete.
The example uploads the new sample objects. Then, it deletes the objects using the Multi-Object API. However, in the request, it specifies only the object keys. In this case, Amazon S3 adds the delete markers and the objects disappear from your bucket.
4. Delete the delete markers.
To illustrate how the delete markers work, the sample deletes the delete markers. In the Multi-Object Delete request, it specifies the object keys and the version IDs of the delete markers it received in the response in the preceding step. This action makes the objects reappear in your bucket.
For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
    import javaioByteArrayInputStream
    import javaioIOException
    import javautilArrayList
    import javautilList
    import javautilRandom
    import comamazonawsAmazonClientException
    import comamazonawsAmazonServiceException
    import comamazonawsauthprofileProfileCredentialsProvider
    import comamazonawsservicess3AmazonS3Client
    import comamazonawsservicess3modelBucketVersioningConfiguration
    import comamazonawsservicess3modelCannedAccessControlList
    import comamazonawsservicess3modelDeleteObjectsRequest
    import comamazonawsservicess3modelDeleteObjectsRequestKeyVersion
    import comamazonawsservicess3modelDeleteObjectsResult
    import comamazonawsservicess3modelDeleteObjectsResultDeletedObject
    import comamazonawsservicess3modelMultiObjectDeleteException
    import
    comamazonawsservicess3modelMultiObjectDeleteExceptionDeleteError
    import comamazonawsservicess3modelObjectMetadata
    import comamazonawsservicess3modelPutObjectRequest
    import comamazonawsservicess3modelPutObjectResult
    import
    comamazonawsservicess3modelSetBucketVersioningConfigurationRequest
    public class DeleteMultipleObjectsVersionEnabledBucket {
    static String bucketName *** Provide a bucket name ***
    static AmazonS3Client s3Client
    public static void main(String[] args) throws IOException {
    try {
    s3Client new AmazonS3Client(new ProfileCredentialsProvider())
    1 Enable versioning on the bucket
    enableVersioningOnBucket(s3Client bucketName)
    2a Upload sample objects
    List keysAndVersions1 putObjects(3)
    2b Delete specific object versions
    multiObjectVersionedDelete(keysAndVersions1)
    3a Upload samples objects
    List keysAndVersions2 putObjects(3)
    3b Delete objects using only keys Amazon S3 creates a delete
    marker and
    returns its version Id in the response
    DeleteObjectsResult response
    multiObjectNonVersionedDelete(keysAndVersions2)
    3c Additional exercise using multiobject versioned delete
    remove the
    delete markers received in the preceding response This
    results in your objects
    reappear in your bucket
    multiObjectVersionedDeleteRemoveDeleteMarkers(response)

    } catch (AmazonServiceException ase) {
    Systemoutprintln(Caught an AmazonServiceException)
    Systemoutprintln(Error Message + asegetMessage())
    Systemoutprintln(HTTP Status Code + asegetStatusCode())
    Systemoutprintln(AWS Error Code + asegetErrorCode())
    Systemoutprintln(Error Type + asegetErrorType())
    Systemoutprintln(Request ID + asegetRequestId())
    } catch (AmazonClientException ace) {
    Systemoutprintln(Caught an AmazonClientException)
    Systemoutprintln(Error Message + acegetMessage())
    }
    }
    static void enableVersioningOnBucket(AmazonS3Client s3Client
    String bucketName) {
    BucketVersioningConfiguration config new
    BucketVersioningConfiguration()
    withStatus(BucketVersioningConfigurationENABLED)
    SetBucketVersioningConfigurationRequest
    setBucketVersioningConfigurationRequest new
    SetBucketVersioningConfigurationRequest(
    bucketName config)

    s3ClientsetBucketVersioningConfiguration(setBucketVersioningConfigurationRequest)
    }
    static List putObjects(int number) {
    List keys new ArrayList()
    String content This is the content body
    for (int i 0 i < number i++) {
    String key ObjectToDelete + new Random()nextInt()
    ObjectMetadata metadata new ObjectMetadata()
    metadatasetHeader(Subject ContentAsObject)
    metadatasetHeader(ContentLength (long)contentlength())
    PutObjectRequest request new PutObjectRequest(bucketName key
    new ByteArrayInputStream(contentgetBytes()) metadata)

    withCannedAcl(CannedAccessControlListAuthenticatedRead)
    PutObjectResult response s3ClientputObject(request)
    KeyVersion keyVersion new KeyVersion(key
    responsegetVersionId())
    keysadd(keyVersion)
    }
    return keys
    }
    static void multiObjectVersionedDelete(List keys) {
    DeleteObjectsRequest multiObjectDeleteRequest new
    DeleteObjectsRequest(
    bucketName)withKeys(keys)
    DeleteObjectsResult delObjRes null
    try {
    delObjRes s3ClientdeleteObjects(multiObjectDeleteRequest)
    Systemoutformat(Successfully deleted all the s items\n
    delObjResgetDeletedObjects()size())
    } catch(MultiObjectDeleteException mode) {
    printDeleteResults(mode)
    }
    }
    static DeleteObjectsResult multiObjectNonVersionedDelete(List
    keys) {
    Multiobject delete by specifying only keys (no version ID)
    DeleteObjectsRequest multiObjectDeleteRequest new
    DeleteObjectsRequest(
    bucketName)
    Create request that include only object key names
    List justKeys new ArrayList()
    for (KeyVersion key keys) {
    justKeysadd(new KeyVersion(keygetKey()))
    }
    multiObjectDeleteRequestsetKeys(justKeys)
    Execute DeleteObjects Amazon S3 add delete marker for each
    object
    deletion The objects no disappear from your bucket (verify)
    DeleteObjectsResult delObjRes null
    try {
    delObjRes s3ClientdeleteObjects(multiObjectDeleteRequest)
    Systemoutformat(Successfully deleted all the s items\n
    delObjResgetDeletedObjects()size())
    } catch (MultiObjectDeleteException mode) {
    printDeleteResults(mode)
    }
    return delObjRes
    }
    static void multiObjectVersionedDeleteRemoveDeleteMarkers(
    DeleteObjectsResult response) {
    List keyVersionList new ArrayList()
    for (DeletedObject deletedObject responsegetDeletedObjects()) {
    keyVersionListadd(new KeyVersion(deletedObjectgetKey()
    deletedObjectgetDeleteMarkerVersionId()))
    }
    Create a request to delete the delete markers
    DeleteObjectsRequest multiObjectDeleteRequest2 new
    DeleteObjectsRequest(
    bucketName)withKeys(keyVersionList)
    Now delete the delete marker bringing your objects back to the
    bucket
    DeleteObjectsResult delObjRes null
    try {
    delObjRes s3ClientdeleteObjects(multiObjectDeleteRequest2)
    Systemoutformat(Successfully deleted all the s items\n
    delObjResgetDeletedObjects()size())
    } catch (MultiObjectDeleteException mode) {
    printDeleteResults(mode)
    }
    }
    static void printDeleteResults(MultiObjectDeleteException mode) {
    Systemoutformat(s \n modegetMessage())
    Systemoutformat(No of objects successfully deleted s\n
    modegetDeletedObjects()size())
    Systemoutformat(No of objects failed to delete s\n
    modegetErrors()size())
    Systemoutformat(Printing error data\n)
    for (DeleteError deleteError modegetErrors()){
    Systemoutformat(Object Key s\ts\ts\n
    deleteErrorgetKey() deleteErrorgetCode()
    deleteErrorgetMessage())
    }
    }
    }
Deleting Multiple Objects Using the AWS SDK for .NET
The following tasks guide you through using the AWS SDK for .NET classes to delete multiple objects in a single HTTP request.
Deleting Multiple Objects (Non-Versioned Bucket)
1 Create an instance of the AmazonS3Client class.
2 Create an instance of the DeleteObjectsRequest class and provide a list of the object keys you want to delete.
3 Execute the AmazonS3Client.DeleteObjects method.
If one or more objects fail to delete, Amazon S3 throws a DeleteObjectsException.
The following C# code sample demonstrates the preceding steps.

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;

multiObjectDeleteRequest.AddKey("<object Key>", null); // version ID is null
multiObjectDeleteRequest.AddKey("<object Key>", null);
multiObjectDeleteRequest.AddKey("<object Key>", null);

try
{
    DeleteObjectsResponse response = client.DeleteObjects(multiObjectDeleteRequest);
    Console.WriteLine("Successfully deleted all the {0} items",
        response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
    // Process exception.
}

The DeleteObjectsRequest can also take a list of KeyVersion objects as a parameter. For a bucket without versioning, the version ID is null.

List<KeyVersion> keys = new List<KeyVersion>();
KeyVersion keyVersion = new KeyVersion
{
    Key = key,
    VersionId = null // For buckets without versioning.
};
keys.Add(keyVersion);
List<KeyVersion> keys = new List<KeyVersion>();

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
{
    BucketName = bucketName,
    Objects = keys // This includes the object keys and null version IDs.
};
In the event of an exception, you can review the DeleteObjectsException to determine which objects failed to delete and why, as shown in the following C# code example.

DeleteObjectsResponse errorResponse = e.Response;
Console.WriteLine("No. of objects successfully deleted = {0}",
    errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
    errorResponse.DeleteErrors.Count);
Console.WriteLine("Printing error data...");
foreach (DeleteError deleteError in errorResponse.DeleteErrors)
{
    Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
        deleteError.Code, deleteError.Message);
}
The following tasks guide you through deleting objects from a version-enabled bucket.
Deleting Multiple Objects (Version-Enabled Bucket)
1 Create an instance of the AmazonS3Client class.
2 Create an instance of the DeleteObjectsRequest class and provide a list of object keys and, optionally, the version IDs of the objects that you want to delete.
If you specify the version ID of the object you want to delete, Amazon S3 deletes the specific object version. If you don't specify the version ID of the object that you want to delete, Amazon S3 adds a delete marker. For more information, see Deleting One Object Per Request (p. 238).
3 Execute the AmazonS3Client.DeleteObjects method.
The following C# code sample demonstrates the preceding steps.

List<KeyVersion> keysAndVersions = new List<KeyVersion>();
// Provide a list of object keys and versions.

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
{
    BucketName = bucketName,
    Objects = keysAndVersions
};

try
{
    DeleteObjectsResponse response = client.DeleteObjects(multiObjectDeleteRequest);
    Console.WriteLine("Successfully deleted all the {0} items",
        response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
    // Process exception.
}
Example 1: Multi-Object Delete (Non-Versioned Bucket)
The following C# code example uses the Multi-Object API to delete objects from a bucket that is not version-enabled. The example first uploads the sample objects to the bucket and then uses the DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest, the example specifies only the object key names because the version IDs are null.
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
    using System
    using SystemCollectionsGeneric
    using AmazonS3
    using AmazonS3Model
    namespace s3amazoncomdocsamples
    {
    class DeleteMultipleObjects
    {
    static string bucketName *** Provide a bucket name ***
    static IAmazonS3 client
    public static void Main(string[] args)
    {
    using (client new
    AmazonS3Client(AmazonRegionEndpointUSEast1))
    {
    var keysAndVersions PutObjects(3)
    Delete the objects
    MultiObjectDelete(keysAndVersions)
    }
    ConsoleWriteLine(Click ENTER to continue)
    ConsoleReadLine()
    }
    static void MultiObjectDelete(List keys)
    {
    a multiobject delete by specifying the key names and version
    IDs
    DeleteObjectsRequest multiObjectDeleteRequest new
    DeleteObjectsRequest
    {
    BucketName bucketName
    Objects keys This includes the object keys and null
    version IDs
    }
    multiObjectDeleteRequestAddKey(AWSSDKcopy2dll null)
    try
    {
    DeleteObjectsResponse response
    clientDeleteObjects(multiObjectDeleteRequest)
    ConsoleWriteLine(Successfully deleted all the {0} items
    responseDeletedObjectsCount)
    }
    catch (DeleteObjectsException e)
    {
    PrintDeletionReport(e)
    }
    }
    private static void PrintDeletionReport(DeleteObjectsException e)
    {
    var errorResponse eErrorResponse
    DeleteObjectsResponse errorResponse eResponse
    ConsoleWriteLine(x {0} errorResponseDeletedObjectsCount)
    ConsoleWriteLine(No of objects successfully deleted {0}
    errorResponseDeletedObjectsCount)
    ConsoleWriteLine(No of objects failed to delete {0}
    errorResponseDeleteErrorsCount)
    ConsoleWriteLine(Printing error data)
    foreach (DeleteError deleteError in errorResponseDeleteErrors)
    {
    ConsoleWriteLine(Object Key {0}\t{1}\t{2}
    deleteErrorKey deleteErrorCode deleteErrorMessage)
    }
    }
    static List PutObjects(int number)
    {
    List keys new List()
    for (int i 0 i < number i++)
    {
    string key ExampleObject + new SystemRandom()Next()
    PutObjectRequest request new PutObjectRequest
    {
    BucketName bucketName
    Key key
    ContentBody This is the content body
    }
    PutObjectResponse response clientPutObject(request)
    KeyVersion keyVersion new KeyVersion
    {
    Key key
    For nonversioned bucket operations we only need
    object key
    VersionId responseVersionId
    }
    keysAdd(keyVersion)
    }
    return keys
    }
    }
    }
Example 2: Multi-Object Delete (Version-Enabled Bucket)
The following C# code example uses the Multi-Object API to delete objects from a version-enabled bucket. In addition to showing the DeleteObjects Multi-Object Delete API usage, it also illustrates how versioning works in a version-enabled bucket.
Before you can test the sample, you must create a sample bucket and provide the bucket name in the example. You can use the AWS Management Console to create a bucket.
The example performs the following actions:
1. Enable versioning on the bucket.
2. Perform a versioned delete.
The example first uploads the sample objects. In response, Amazon S3 returns the version IDs for each sample object that you uploaded. The example then deletes these objects using the Multi-Object Delete API. In the request, it specifies both the object keys and the version IDs (that is, versioned delete).
3. Perform a non-versioned delete.
The example uploads the new sample objects. Then, it deletes the objects using the Multi-Object API. However, in the request, it specifies only the object keys. In this case, Amazon S3 adds the delete markers and the objects disappear from your bucket.
4. Delete the delete markers.
To illustrate how the delete markers work, the sample deletes the delete markers. In the Multi-Object Delete request, it specifies the object keys and the version IDs of the delete markers it received in the response in the preceding step. This action makes the objects reappear in your bucket.
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
    using System
    using SystemCollectionsGeneric
    using AmazonS3
    using AmazonS3Model
    namespace s3amazoncomdocsamples
    {
    class DeleteMultipleObjectsVersionedBucket
    {
    static string bucketName *** Provide a bucket name ***
    static IAmazonS3 client
    public static void Main(string[] args)
    {
    using (client new
    AmazonS3Client(AmazonRegionEndpointUSEast1))
    {
    1 Enable versioning on the bucket
    EnableVersioningOnBucket(bucketName)
    2a Upload the sample objects
    var keysAndVersions1 PutObjects(3)
    2b Delete the specific object versions
    VersionedDelete(keysAndVersions1)
    3a Upload the sample objects
    var keysAndVersions2 PutObjects(3)
    3b Delete objects using only keys Amazon S3 creates a
    delete marker and
    returns its version Id in the response
    List deletedObjects
    NonVersionedDelete(keysAndVersions2)
    3c Additional exercise using a multiobject versioned
    delete remove the
    delete markers received in the preceding response This
    results in your objects
    reappearing in your bucket
    RemoveMarkers(deletedObjects)
    }
    ConsoleWriteLine(Click ENTER to continue)
    ConsoleReadLine()
    }
    private static void PrintDeletionReport(DeleteObjectsException e)
    {
    var errorResponse eResponse
            Console.WriteLine("No. of objects successfully deleted = {0}",
                errorResponse.DeletedObjects.Count);
            Console.WriteLine("No. of objects failed to delete = {0}",
                errorResponse.DeleteErrors.Count);
            Console.WriteLine("Printing error data...");
            foreach (DeleteError deleteError in errorResponse.DeleteErrors)
            {
                Console.WriteLine("Object Key: {0}\t{1}\t{2}",
                    deleteError.Key, deleteError.Code, deleteError.Message);
            }
        }

        static void EnableVersioningOnBucket(string bucketName)
        {
            PutBucketVersioningRequest setBucketVersioningRequest = new PutBucketVersioningRequest
            {
                BucketName = bucketName,
                VersioningConfig = new S3BucketVersioningConfig { Status = VersionStatus.Enabled }
            };
            client.PutBucketVersioning(setBucketVersioningRequest);
        }

        static void VersionedDelete(List<KeyVersion> keys)
        {
            // a. Perform a multi-object delete by specifying the key names and version IDs.
            DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
            {
                BucketName = bucketName,
                Objects = keys // This includes the object keys and specific version IDs.
            };
            try
            {
                Console.WriteLine("Executing VersionedDelete...");
                DeleteObjectsResponse response = client.DeleteObjects(multiObjectDeleteRequest);
                Console.WriteLine("Successfully deleted all the {0} items", response.DeletedObjects.Count);
            }
            catch (DeleteObjectsException e)
            {
                PrintDeletionReport(e);
            }
        }

        static List<DeletedObject> NonVersionedDelete(List<KeyVersion> keys)
        {
            // Create a request that includes only the object key names.
            DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
            multiObjectDeleteRequest.BucketName = bucketName;
            foreach (var key in keys)
            {
                multiObjectDeleteRequest.AddKey(key.Key);
            }
            // Execute DeleteObjects - Amazon S3 adds a delete marker for each
            // object deletion. The objects disappear from your bucket.
            // You can verify that using the Amazon S3 console.
            DeleteObjectsResponse response;
            try
            {
                Console.WriteLine("Executing NonVersionedDelete...");
                response = client.DeleteObjects(multiObjectDeleteRequest);
                Console.WriteLine("Successfully deleted all the {0} items", response.DeletedObjects.Count);
            }
            catch (DeleteObjectsException e)
            {
                PrintDeletionReport(e);
                throw; // Some deletes failed. Investigate before continuing.
            }
            // This response contains the DeletedObjects list which we use to delete the delete markers.
            return response.DeletedObjects;
        }

        private static void RemoveMarkers(List<DeletedObject> deletedObjects)
        {
            List<KeyVersion> keyVersionList = new List<KeyVersion>();
            foreach (var deletedObject in deletedObjects)
            {
                KeyVersion keyVersion = new KeyVersion
                {
                    Key = deletedObject.Key,
                    VersionId = deletedObject.DeleteMarkerVersionId
                };
                keyVersionList.Add(keyVersion);
            }
            // Create another request to delete the delete markers.
            var multiObjectDeleteRequest = new DeleteObjectsRequest
            {
                BucketName = bucketName,
                Objects = keyVersionList
            };
            // Now delete the delete markers to bring your objects back to the bucket.
            try
            {
                Console.WriteLine("Removing the delete markers .....");
                var deleteObjectResponse = client.DeleteObjects(multiObjectDeleteRequest);
                Console.WriteLine("Successfully deleted all the {0} delete markers",
                    deleteObjectResponse.DeletedObjects.Count);
            }
            catch (DeleteObjectsException e)
            {
                PrintDeletionReport(e);
            }
        }

        static List<KeyVersion> PutObjects(int number)
        {
            List<KeyVersion> keys = new List<KeyVersion>();
            for (int i = 0; i < number; i++)
            {
                string key = "ObjectToDelete-" + new System.Random().Next();
                PutObjectRequest request = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = key,
                    ContentBody = "This is the content body!"
                };
                PutObjectResponse response = client.PutObject(request);
                KeyVersion keyVersion = new KeyVersion
                {
                    Key = key,
                    VersionId = response.VersionId
                };
                keys.Add(keyVersion);
            }
            return keys;
        }
    }
}
    Deleting Multiple Objects Using the AWS SDK for PHP
    This topic guides you through using classes from the AWS SDK for PHP to delete multiple objects from
    versioned and nonversioned Amazon S3 buckets For more information about versioning see Using
    Versioning (p 423)
    Note
    This topic assumes that you are already following the instructions for Using the AWS SDK
    for PHP and Running PHP Examples (p 566) and have the AWS SDK for PHP properly
    installed
    The following tasks guide you through using the PHP SDK classes to delete multiple objects from a
    nonversioned bucket
    Deleting Multiple Objects (NonVersioned Bucket)
    1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory()
    method
2 Execute the Aws\S3\S3Client::deleteObjects() method. You need to provide a bucket name and an array of object keys as parameters. You can specify up to 1000 keys.
The following PHP code sample demonstrates deleting multiple objects from an Amazon S3 non-versioned bucket.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname1 = '*** Your Object Key1 ***';
$keyname2 = '*** Your Object Key2 ***';
$keyname3 = '*** Your Object Key3 ***';

$s3 = S3Client::factory();

// Delete objects from a bucket
$result = $s3->deleteObjects(array(
    'Bucket'  => $bucket,
    'Objects' => array(
        array('Key' => $keyname1),
        array('Key' => $keyname2),
        array('Key' => $keyname3),
    )
));
    The following tasks guide you through deleting multiple objects from an Amazon S3 versionenabled
    bucket
    Deleting Multiple Objects (VersionEnabled Bucket)
1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class factory() method.
2 Execute the Aws\S3\S3Client::deleteObjects() method and provide a list of object keys and, optionally, the version IDs of the objects that you want to delete.
If you specify the version ID of the object that you want to delete, Amazon S3 deletes the specific object version. If you don't specify the version ID of the object that you want to delete, Amazon S3 adds a delete marker. For more information, see Deleting One Object Per Request (p 238).
The following PHP code sample demonstrates deleting multiple objects from an Amazon S3 version-enabled bucket.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$versionId1 = '*** Your Object Key Version ID1 ***';
$versionId2 = '*** Your Object Key Version ID2 ***';
$versionId3 = '*** Your Object Key Version ID3 ***';

$s3 = S3Client::factory();

// Delete object versions from a versioning-enabled bucket.
$result = $s3->deleteObjects(array(
    'Bucket'  => $bucket,
    'Objects' => array(
        array('Key' => $keyname, 'VersionId' => $versionId1),
        array('Key' => $keyname, 'VersionId' => $versionId2),
        array('Key' => $keyname, 'VersionId' => $versionId3),
    )
));
    Amazon S3 returns a response that shows the objects that were deleted and objects it could not delete
    because of errors (for example permission errors)
    The following PHP code sample prints the object keys for objects that were deleted It also prints the
    object keys that were not deleted and the related error messages
echo "The following objects were deleted successfully:\n";
foreach ($result['Deleted'] as $object) {
    echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}\n";
}

echo "\nThe following objects could not be deleted:\n";
foreach ($result['Errors'] as $object) {
    echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}\n";
}
    Example 1 MultiObject Delete (NonVersioned Bucket)
    The following PHP code example uses the deleteObjects() method to delete multiple objects from
    a bucket that is not versionenabled
    The example performs the following actions
1 Creates a few objects by using the Aws\S3\S3Client::putObject() method.
2 Lists the objects and gets the keys of the created objects using the Aws\S3\S3Client::listObjects() method.
3 Performs a non-versioned delete by using the Aws\S3\S3Client::deleteObjects() method.
For information about running the PHP examples in this guide, go to Running PHP Examples (p 567).
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.
$s3 = S3Client::factory();

// 1. Create a few objects.
for ($i = 1; $i <= 3; $i++) {
    $s3->putObject(array(
        'Bucket' => $bucket,
        'Key'    => "key{$i}",
        'Body'   => "content {$i}",
    ));
}

// 2. List the objects and get the keys.
$keys = $s3->listObjects(array('Bucket' => $bucket))
    ->getPath('Contents/*/Key');

// 3. Delete the objects.
$result = $s3->deleteObjects(array(
    'Bucket'  => $bucket,
    'Objects' => array_map(function ($key) {
        return array('Key' => $key);
    }, $keys),
));
    Example 2 MultiObject Delete (VersionEnabled Bucket)
    The following PHP code example uses the deleteObjects() method to delete multiple objects from
    a versionenabled bucket
    The example performs the following actions
1 Enables versioning on the bucket by using the Aws\S3\S3Client::putBucketVersioning() method.
2 Creates a few versions of an object by using the Aws\S3\S3Client::putObject() method.
3 Lists the object versions and gets the keys and version IDs for the created object versions using the Aws\S3\S3Client::listObjectVersions() method.
4 Performs a versioned delete by using the Aws\S3\S3Client::deleteObjects() method with the retrieved keys and version IDs.
5 Disables versioning on the bucket by using the Aws\S3\S3Client::putBucketVersioning() method.
For information about running the PHP examples in this guide, go to Running PHP Examples (p 567).
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket  = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// 1. Enable object versioning for the bucket.
$s3->putBucketVersioning(array(
    'Bucket' => $bucket,
    'Status' => 'Enabled',
));

// 2. Create a few versions of an object.
for ($i = 1; $i <= 3; $i++) {
    $s3->putObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Body'   => "content {$i}",
    ));
}

// 3. List the objects versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(array('Bucket' => $bucket))
    ->getPath('Versions');

// 4. Delete the object versions.
$result = $s3->deleteObjects(array(
    'Bucket'  => $bucket,
    'Objects' => array_map(function ($version) {
        return array(
            'Key'       => $version['Key'],
            'VersionId' => $version['VersionId'],
        );
    }, $versions),
));

echo "The following objects were deleted successfully:\n";
foreach ($result['Deleted'] as $object) {
    echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}\n";
}

echo "\nThe following objects could not be deleted:\n";
foreach ($result['Errors'] as $object) {
    echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}\n";
}

// 5. Suspend object versioning for the bucket.
$s3->putBucketVersioning(array(
    'Bucket' => $bucket,
    'Status' => 'Suspended',
));
    Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::deleteObjects() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listObjects() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::listObjectVersions() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putObject() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putBucketVersioning() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
    Deleting Multiple Objects Using the REST API
    You can use the AWS SDKs to delete multiple objects using the MultiObject Delete API However if
    your application requires it you can send REST requests directly For more information go to Delete
    Multiple Objects in the Amazon Simple Storage Service API Reference
    Restoring Archived Objects
Objects archived to Amazon Glacier are not accessible in real time. You must first initiate a restore request and then wait until a temporary copy of the object is available for the duration that you specify in the request. Restore jobs typically complete in three to five hours, so it is important that you archive only objects that you will not need to access in real time. For more information about archiving objects to Amazon Glacier, see Transitioning to the GLACIER storage class (Object Archival) (p 111).
After you receive a temporary copy of the restored object, the object's storage class remains GLACIER (a GET or HEAD request will return GLACIER as the storage class). Note that when you restore an archive, you pay for both the archive (GLACIER rate) and the copy you restored temporarily (RRS rate). For information about pricing, see Amazon S3 Pricing.
    You can restore an archived object programmatically or by using the Amazon S3 console Amazon S3
    processes only one restore request at a time per object The following topics describe how to use both
    the console and the Amazon S3 API to check the restoration status and to find out when Amazon S3
    will delete the restored copy
    Topics
    • Restore an Archived Object Using the Amazon S3 Console (p 259)
    • Restore an Archived Object Using the AWS SDK for Java (p 261)
    • Restore an Archived Object Using the AWS SDK for NET (p 262)
    • Restore an Archived Object Using the REST API (p 265)
    Restore an Archived Object Using the Amazon S3 Console
You can use the Amazon S3 console to restore a copy of an object that has been archived to Amazon Glacier. In the console, you right-click the object and then choose Initiate Restore.
    You specify the number of days you want the object copy restored
It takes about three to five hours for Amazon S3 to complete the restoration. The object properties in the console show the restoration status.
When the object copy is restored, the object properties in the console show that the object is restored and when Amazon S3 will remove the restored copy. The console also gives you the option to modify the restoration period.
Note that when you restore an archive, you are paying for both the archive and the copy you restored temporarily. For information about pricing, see Amazon S3 Pricing.
Amazon S3 restores a temporary copy of the object only for the specified duration. After that, Amazon S3 deletes the restored object copy. You can modify the expiration period of a restored copy by reissuing a restore, in which case Amazon S3 updates the expiration period relative to the current time.
Amazon S3 calculates the expiration time of the restored object copy by adding the number of days specified in the restoration request to the current time and rounding the resulting time to the next day at midnight UTC. For example, if an object was created on 10/15/2012 10:30 AM UTC and the restoration period was specified as 3 days, then the restored copy expires on 10/19/2012 00:00 UTC, at which time Amazon S3 deletes the object copy.
You can restore an object copy for any number of days. However, you should restore objects only for the duration that you need because of the storage costs associated with the object copy. For pricing information, see Amazon S3 Pricing.
    Restore an Archived Object Using the AWS SDK for Java
The following tasks guide you through using the AWS SDK for Java to initiate a restoration of an archived object.
Downloading Objects
1 Create an instance of the AmazonS3Client class.
2 Create an instance of the RestoreObjectRequest class by providing the bucket name, the object key to restore, and the number of days for which you want the object copy restored.
3 Execute one of the AmazonS3.restoreObject methods to initiate the archive restoration.
    The following Java code sample demonstrates the preceding tasks
String bucketName = "examplebucket";
String objectkey = "examplekey";
AmazonS3Client s3Client = new AmazonS3Client();

RestoreObjectRequest request = new RestoreObjectRequest(bucketName, objectkey, 2);
s3Client.restoreObject(request);
    Amazon S3 maintains the restoration status in the object metadata You can retrieve object metadata
    and check the value of the RestoreInProgress property as shown in the following Java code
    snippet
String bucketName = "examplebucket";
String objectKey = "examplekey";
AmazonS3Client s3Client = new AmazonS3Client();

GetObjectMetadataRequest request = new GetObjectMetadataRequest(bucketName, objectKey);

ObjectMetadata response = s3Client.getObjectMetadata(request);

Boolean restoreFlag = response.getOngoingRestore();
System.out.format("Restoration status: %s.\n",
        (restoreFlag == true) ? "in progress" : "finished");
    Example
    The following Java code example initiates a restoration request for the specified archived object You
    must update the code and provide a bucket name and an archived object key name For instructions
    on how to create and test a working sample see Testing the Java Code Examples (p 564)
import java.io.IOException;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.RestoreObjectRequest;

public class RestoreArchivedObject {

    public static String bucketName = "*** Provide bucket name ***";
    public static String objectKey = "*** Provide object key name ***";
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {
        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        try {
            RestoreObjectRequest requestRestore = new RestoreObjectRequest(bucketName, objectKey, 2);
            s3Client.restoreObject(requestRestore);

            GetObjectMetadataRequest requestCheck = new GetObjectMetadataRequest(bucketName, objectKey);
            ObjectMetadata response = s3Client.getObjectMetadata(requestCheck);

            Boolean restoreFlag = response.getOngoingRestore();
            System.out.format("Restoration status: %s.\n",
                    (restoreFlag == true) ? "in progress" : "finished");

        } catch (AmazonS3Exception amazonS3Exception) {
            System.out.format("An Amazon S3 error occurred. Exception: %s", amazonS3Exception.toString());
        } catch (Exception ex) {
            System.out.format("Exception: %s", ex.toString());
        }
    }
}
    Restore an Archived Object Using the AWS SDK for NET
The following tasks guide you through using the AWS SDK for .NET to initiate a restoration of an archived object.
Downloading Objects
1 Create an instance of the AmazonS3 class.
2 Create an instance of the RestoreObjectRequest class by providing the bucket name, the object key to restore, and the number of days for which you want the object copy restored.
3 Execute one of the AmazonS3.RestoreObject methods to initiate the archive restoration.
    The following C# code sample demonstrates the preceding tasks
IAmazonS3 client;
string bucketName = "examplebucket";
string objectKey = "examplekey";

client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

RestoreObjectRequest restoreRequest = new RestoreObjectRequest()
{
    BucketName = bucketName,
    Key = objectKey,
    Days = 2
};
client.RestoreObject(restoreRequest);
    Amazon S3 maintains the restoration status in the object metadata You can retrieve object metadata
    and check the value of the RestoreInProgress property as shown in the following C# code snippet
IAmazonS3 client;
string bucketName = "examplebucket";
string objectKey = "examplekey";

client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest()
{
    BucketName = bucketName,
    Key = objectKey
};

GetObjectMetadataResponse response = client.GetObjectMetadata(metadataRequest);
Console.WriteLine("Restoration status: {0}", response.RestoreInProgress);
if (response.RestoreInProgress == false)
    Console.WriteLine("Restored object copy expires on: {0}", response.RestoreExpiration);
    Example
    The following C# code example initiates a restoration request for the specified archived object
    You must update the code and provide a bucket name and an archived object key name For
    instructions on how to create and test a working sample see Running the Amazon S3 NET Code
    Examples (p 566)
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class RestoreArchivedObject
    {
        static string bucketName = "*** provide bucket name ***";
        static string objectKey = "*** archived object keyname ***";
        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            try
            {
                using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
                {
                    RestoreObject(client, bucketName, objectKey);
                    CheckRestorationStatus(client, bucketName, objectKey);
                }

                Console.WriteLine("Example complete. To continue, click Enter...");
                Console.ReadKey();
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("S3 error occurred. Exception: " + amazonS3Exception.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
            }
        }

        static void RestoreObject(IAmazonS3 client, string bucketName, string objectKey)
        {
            RestoreObjectRequest restoreRequest = new RestoreObjectRequest
            {
                BucketName = bucketName,
                Key = objectKey,
                Days = 2
            };
            RestoreObjectResponse response = client.RestoreObject(restoreRequest);
        }

        static void CheckRestorationStatus(IAmazonS3 client, string bucketName, string objectKey)
        {
            GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
            {
                BucketName = bucketName,
                Key = objectKey
            };
            GetObjectMetadataResponse response = client.GetObjectMetadata(metadataRequest);
            Console.WriteLine("Restoration status: {0}", response.RestoreInProgress);
            if (response.RestoreInProgress == false)
                Console.WriteLine("Restored object copy expires on: {0}", response.RestoreExpiration);
        }
    }
}
    Restore an Archived Object Using the REST API
    Amazon S3 provides an API for you to initiate an archive restoration For more information go to POST
    Object restore in the Amazon Simple Storage Service API Reference
    Managing Access Permissions to
    Your Amazon S3 Resources
By default, all Amazon S3 resources—buckets, objects, and related subresources (for example, lifecycle configuration and website configuration)—are private: only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.
Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies that you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources. The introductory topics provide general guidelines for managing permissions.
    We recommend you first review the access control overview topics For more information see
    Introduction to Managing Access Permissions to Your Amazon S3 Resources (p 266) Then for more
    information about specific access policy options see the following topics
    • Using Bucket Policies and User Policies (p 308)
    • Managing Access with ACLs (p 364)
    Introduction to Managing Access Permissions to
    Your Amazon S3 Resources
    Topics
    • Overview of Managing Access (p 267)
    • How Amazon S3 Authorizes a Request (p 272)
    • Guidelines for Using the Available Access Policy Options (p 277)
    • Example Walkthroughs Managing Access to Your Amazon S3 Resources (p 280)
The topics in this section provide an overview of managing access permissions to your Amazon S3 resources and provide guidelines for when to use which access control method. They also provide introductory example walkthroughs. We recommend that you review these topics in order.
    Overview of Managing Access
    Topics
    • Amazon S3 Resources (p 267)
    • Resource Operations (p 268)
    • Managing Access to Resources (Access Policy Options) (p 268)
    • So Which Access Control Method Should I Use (p 271)
    • Related Topics (p 271)
When granting permissions, you decide who is getting them, which Amazon S3 resources they are getting permissions for, and the specific actions that you want to allow on those resources.
    Amazon S3 Resources
Buckets and objects are primary Amazon S3 resources, and both have associated subresources, each managed through its own operations (see the sketch following these lists). For example, bucket subresources include the following:
    • lifecycle – Stores lifecycle configuration information (see Object Lifecycle Management (p 109))
• website – Stores website configuration information if you configure your bucket for website hosting (see Hosting a Static Website on Amazon S3 (p 449)).
    • versioning – Stores versioning configuration (see PUT Bucket versioning)
    • policy and acl (Access Control List) – Store access permission information for the bucket
    • cors (CrossOrigin Resource Sharing) – Supports configuring your bucket to allow crossorigin
    requests (see CrossOrigin Resource Sharing (CORS) (p 131))
    • logging – Enables you to request Amazon S3 to save bucket access logs
    Object subresources include the following
    • acl – Stores a list of access permissions on the object This topic discusses how to use this
    subresource to manage object permissions (see Managing Access with ACLs (p 364))
    • restore – Supports temporarily restoring an archived object (see POST Object restore) An object
    in the Glacier storage class is an archived object To access the object you must first initiate a
    restore request which restores a copy of the archived object In the request you specify the number
    of days that you want the restored copy to exist For more information about archiving objects see
    Object Lifecycle Management (p 109)
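The guide does not include a code sample at this point, so the following is only a minimal sketch (assuming the AWS SDK for Java, a bucket named examplebucket that you own, and credentials in a local profile; the class name ShowBucketSubresources is hypothetical) showing that each bucket subresource is read through its own operation.
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;

public class ShowBucketSubresources {
    public static void main(String[] args) {
        String bucketName = "examplebucket"; // assumed bucket name

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // versioning subresource
        BucketVersioningConfiguration versioning =
            s3Client.getBucketVersioningConfiguration(bucketName);
        System.out.println("Versioning status: " + versioning.getStatus());

        // acl subresource
        AccessControlList acl = s3Client.getBucketAcl(bucketName);
        System.out.println("Number of ACL grants: " + acl.getGrantsAsList().size());

        // website subresource (null if the bucket has no website configuration)
        BucketWebsiteConfiguration website =
            s3Client.getBucketWebsiteConfiguration(bucketName);
        System.out.println("Website configuration present: " + (website != null));
    }
}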
    About the Resource Owner
    By default all Amazon S3 resources are private Only a resource owner can access the resource The
    resource owner refers to the AWS account that creates the resource For example
    • The AWS account that you use to create buckets and objects owns those resources
    • If you create an AWS Identity and Access Management (IAM) user in your AWS account your AWS
    account is the parent owner If the IAM user uploads an object the parent account to which the user
    belongs owns the object
    • A bucket owner can grant crossaccount permissions to another AWS account (or users in another
    account) to upload objects In this case the AWS account that uploads objects owns those objects
    The bucket owner does not have permissions on the objects that other accounts own with the
    following exceptions
    • The bucket owner pays the bills The bucket owner can deny access to any objects or delete any
    objects in the bucket regardless of who owns them
    • The bucket owner can archive any objects or restore archived objects regardless of who owns
    them Archival refers to the storage class used to store the objects For more information see
    Object Lifecycle Management (p 109)
    Important
    AWS recommends not using the root credentials of your AWS account to make requests
    Instead create an IAM user and grant that user full access We refer to these users as
    administrator users You can use the administrator user credentials instead of root credentials
    of your account to interact with AWS and perform tasks such as create a bucket create
    users and grant them permissions For more information go to Root Account Credentials vs
    IAM User Credentials in the AWS General Reference and IAM Best Practices in the IAM User
    Guide
    The following diagram shows an AWS account owning resources the IAM users buckets and objects
    Resource Operations
    Amazon S3 provides a set of operations to work with the Amazon S3 resources For a list of available
    operations go to Operations on Buckets and Operations on Objects in the Amazon Simple Storage
    Service API Reference
    Managing Access to Resources (Access Policy Options)
    Managing access refers to granting others (AWS accounts and users) permission to perform the
    resource operations by writing an access policy For example you can grant PUT Object permission
    to a user in an AWS account so the user can upload objects to your bucket In addition to granting
    permissions to individual users and accounts you can grant permissions to everyone (also referred
    as anonymous access) or to all authenticated users (users with AWS credentials) For example if you
    configure your bucket as a website you may want to make objects public by granting the GET Object
    permission to everyone
    Access policy describes who has access to what You can associate an access policy with a resource
    (bucket and object) or a user Accordingly you can categorize the available Amazon S3 access
    policies as follows
    • Resourcebased policies – Bucket policies and access control lists (ACLs) are resourcebased
    because you attach them to your Amazon S3 resources
• ACL – Each bucket and object has an ACL associated with it. An ACL is a list of grants identifying the grantee and the permission granted. You use ACLs to grant basic read/write permissions to other AWS accounts. ACLs use an Amazon S3–specific XML schema.
The following is an example bucket ACL. The grant in the ACL shows a bucket owner as having full control permission.



<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>*** Owner-Canonical-User-ID ***</ID>
    <DisplayName>owner-display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="CanonicalUser">
        <ID>*** Owner-Canonical-User-ID ***</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
    Both bucket and object ACLs use the same XML schema
    • Bucket Policy – For your bucket you can add a bucket policy to grant other AWS accounts or IAM
    users permissions for the bucket and the objects in it Any object permissions apply only to the
    objects that the bucket owner creates Bucket policies supplement and in many cases replace
    ACLbased access policies
The following is an example bucket policy. You express bucket policy (and user policy) using a JSON file. The policy grants anonymous read permission on all objects in a bucket. The bucket policy has one statement, which allows the s3:GetObject action (read permission) on objects in a bucket named examplebucket. By specifying the principal with a wildcard (*), the policy grants anonymous access. (A short sketch of attaching such a policy programmatically follows this list.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
    ]
}
    • User policies – You can use AWS Identity and Access Management (IAM) to manage access to
    your Amazon S3 resources Using IAM you can create IAM users groups and roles in your account
    and attach access policies to them granting them access to AWS resources including Amazon S3
    For more information about IAM go to AWS Identity and Access Management (IAM) product detail
    page
The following is an example of a user policy. You cannot grant anonymous permissions in an IAM user policy, because the policy is attached to a user. The example policy allows the user that it is attached to perform six different Amazon S3 actions on a bucket and the objects in it. You can attach this policy to a specific IAM user, group, or role.
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::examplebucket/*"
        }
    ]
}
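Elsewhere in this guide bucket policies are added through the console and the REST API; the following is only a rough sketch (assuming the AWS SDK for Java, a bucket named examplebucket that you own, and credentials in a local profile; the class name AttachBucketPolicy is hypothetical) of attaching the anonymous-read bucket policy shown earlier in this list.
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;

public class AttachBucketPolicy {
    public static void main(String[] args) {
        String bucketName = "examplebucket"; // assumed bucket name

        // The same anonymous-read policy shown above, as a JSON string.
        String policyText =
              "{"
            + "  \"Version\": \"2012-10-17\","
            + "  \"Statement\": [{"
            + "    \"Effect\": \"Allow\","
            + "    \"Principal\": \"*\","
            + "    \"Action\": [\"s3:GetObject\"],"
            + "    \"Resource\": [\"arn:aws:s3:::" + bucketName + "/*\"]"
            + "  }]"
            + "}";

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Attach the policy to the bucket (this replaces any existing bucket policy).
        s3Client.setBucketPolicy(bucketName, policyText);

        // Read the policy subresource back to confirm.
        System.out.println(s3Client.getBucketPolicy(bucketName).getPolicyText());
    }
}
A user policy such as the preceding example is attached through IAM rather than through Amazon S3 (for example, by using the IAM PutUserPolicy action).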
    When Amazon S3 receives a request it must evaluate all the access policies to determine whether to
    authorize or deny the request For more information about how Amazon S3 evaluates these policies
    see How Amazon S3 Authorizes a Request (p 272)
    So Which Access Control Method Should I Use
    With the options available to write an access policy the following questions arise
• When should I use which access control method? For example, to grant bucket permissions, should I use a bucket policy or a bucket ACL? I own a bucket and the objects in the bucket. Should I use a resource-based access policy or an IAM user policy? If I use a resource-based access policy, should I use a bucket policy or an object ACL to manage object permissions?
• I own a bucket, but I don't own all of the objects in it. How are access permissions managed for the objects that somebody else owns?
• If I grant access by using a combination of these access policy options, how does Amazon S3 determine if a user has permission to perform a requested operation?
The following sections explain these access control alternatives, how Amazon S3 evaluates access control mechanisms, and when to use which access control method. They also provide example walkthroughs:
    How Amazon S3 Authorizes a Request (p 272)
    Guidelines for Using the Available Access Policy Options (p 277)
    Example Walkthroughs Managing Access to Your Amazon S3 Resources (p 280)
    Related Topics
    We recommend that you first review the introductory topics that explain the options available for you
    to manage access to your Amazon S3 resources For more information see Introduction to Managing
    Access Permissions to Your Amazon S3 Resources (p 266) You can then use the following topics for
    more information about specific access policy options
    • Using Bucket Policies and User Policies (p 308)
    • Managing Access with ACLs (p 364)
    How Amazon S3 Authorizes a Request
    Topics
    • Related Topics (p 273)
    • How Amazon S3 Authorizes a Request for a Bucket Operation (p 273)
    • How Amazon S3 Authorizes a Request for an Object Operation (p 276)
When Amazon S3 receives a request—for example, a bucket or an object operation—it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, user policies, and resource-based policies (bucket policy, bucket ACL, object ACL) in deciding whether to authorize the request. The following are some example scenarios:
    • If the requester is an IAM user Amazon S3 must determine if the parent AWS account to which the
    user belongs has granted the user necessary permission to perform the operation In addition if the
    request is for a bucket operation such as a request to list the bucket content Amazon S3 must verify
    that the bucket owner has granted permission for the requester to perform the operation
    Note
    To perform a specific operation on a resource an IAM user needs permission from both the
    parent AWS account to which it belongs and the AWS account that owns the resource
    • If the request is for an operation on an object that the bucket owner does not own in addition to
    making sure the requester has permissions from the object owner Amazon S3 must also check the
    bucket policy to ensure the bucket owner has not set explicit deny on the object
    Note
    A bucket owner (who pays the bill) can explicitly deny access to objects in the bucket
    regardless of who owns it The bucket owner can also delete any object in the bucket
    In order to determine whether the requester has permission to perform the specific operation Amazon
    S3 does the following in order when it receives a request
    1 Converts all the relevant access policies (user policy bucket policy ACLs) at run time into a set of
    policies for evaluation
    2 Evaluates the resulting set of policies in the following steps In each step Amazon S3 evaluates a
    subset of policies in a specific context based on the context authority
    a User context – In the user context the parent account to which the user belongs is the context
    authority
    Amazon S3 evaluates a subset of policies owned by the parent account This subset includes
    the user policy that the parent attaches to the user If the parent also owns the resource in the
    request (bucket object) Amazon S3 also evaluates the corresponding resource policies (bucket
    policy bucket ACL and object ACL) at the same time
    A user must have permission from the parent account to perform the operation
    This step applies only if the request is made by a user in an AWS account If the request is made
    using root credentials of an AWS account Amazon S3 skips this step
    b Bucket context – In the bucket context Amazon S3 evaluates policies owned by the AWS
    account that owns the bucket
    If the request is for a bucket operation the requester must have permission from the bucket
    owner If the request is for an object Amazon S3 evaluates all the policies owned by the bucket
    owner to check if the bucket owner has not explicitly denied access to the object If there is an
    explicit deny set Amazon S3 does not authorize the request
    c Object context – If the request is for an object Amazon S3 evaluates the subset of policies
    owned by the object owner
The following sections describe this evaluation in detail and provide examples:
    • How Amazon S3 Authorizes a Request for a Bucket Operation (p 273)
    • How Amazon S3 Authorizes a Request for an Object Operation (p 276)
    Related Topics
    We recommend you first review the introductory topics that explain the options for managing access to
    your Amazon S3 resources For more information see Introduction to Managing Access Permissions
    to Your Amazon S3 Resources (p 266) You can then use the following topics for more information
    about specific access policy options
    • Using Bucket Policies and User Policies (p 308)
    • Managing Access with ACLs (p 364)
    How Amazon S3 Authorizes a Request for a Bucket Operation
    When Amazon S3 receives a request for a bucket operation Amazon S3 converts all the relevant
    permissions—resourcebased permissions (bucket policy bucket access control list (ACL)) and IAM
    user policy if the request is from a user—into a set of policies to evaluate at run time It then evaluates
    the resulting set of policies in a series of steps according to a specific context—user context or bucket
    context
    1 User context – If the requester is an IAM user the user must have permission from the parent
    AWS account to which it belongs In this step Amazon S3 evaluates a subset of policies owned by
    the parent account (also referred to as the context authority) This subset of policies includes the
    user policy that the parent account attaches to the user If the parent also owns the resource in the
    request (in this case the bucket) Amazon S3 also evaluates the corresponding resource policies
    (bucket policy and bucket ACL) at the same time Whenever a request for a bucket operation is
    made the server access logs record the canonical user ID of the requester For more information
    see Server Access Logging (p 546)
    2 Bucket context – The requester must have permissions from the bucket owner to perform a
    specific bucket operation In this step Amazon S3 evaluates a subset of policies owned by the AWS
    account that owns the bucket
    The bucket owner can grant permission by using a bucket policy or bucket ACL Note that if the
    AWS account that owns the bucket is also the parent account of an IAM user then it can configure
    bucket permissions in a user policy
    The following is a graphical illustration of the contextbased evaluation for bucket operation
    The following examples illustrate the evaluation logic
    Example 1 Bucket Operation Requested by Bucket Owner
    In this example the bucket owner sends a request for a bucket operation using the root credentials of
    the AWS account
    Amazon S3 performs the context evaluation as follows
    1 Because the request is made by using root credentials of an AWS account the user context is not
    evaluated
2 In the bucket context, Amazon S3 reviews the bucket policy to determine if the requester has permission to perform the operation. Amazon S3 authorizes the request.
    Example 2 Bucket Operation Requested by an AWS Account That Is Not the
    Bucket Owner
In this example, a request is made using root credentials of AWS account 111111111111 for an operation on a bucket owned by AWS account 222222222222. No IAM users are involved in this request.
    In this case Amazon S3 evaluates the context as follows
    1 Because the request is made using root credentials of an AWS account the user context is not
    evaluated
    2 In the bucket context Amazon S3 examines the bucket policy If the bucket owner (AWS account
    222222222222) has not authorized AWS account 111111111111 to perform the requested
    operation Amazon S3 denies the request Otherwise Amazon S3 grants the request and performs
    the operation
    Example 3 Bucket Operation Requested by an IAM User Whose Parent AWS
    Account Is Also the Bucket Owner
    In the example the request is sent by Jill an IAM user in AWS account 111111111111 which also
    owns the bucket
    Amazon S3 performs the following context evaluation
    1 Because the request is from an IAM user in the user context Amazon S3 evaluates all policies that
    belong to the parent AWS account to determine if Jill has permission to perform the operation
    In this example parent AWS account 111111111111 to which the user belongs is also the bucket
    owner As a result in addition to the user policy Amazon S3 also evaluates the bucket policy and
    bucket ACL in the same context because they belong to the same account
    2 Because Amazon S3 evaluated the bucket policy and bucket ACL as part of the user context it does
    not evaluate the bucket context
    Example 4 Bucket Operation Requested by an IAM User Whose Parent AWS
    Account Is Not the Bucket Owner
    In this example the request is sent by Jill an IAM user whose parent AWS account is
    111111111111 but the bucket is owned by another AWS account 222222222222
    Jill will need permissions from both the parent AWS account and the bucket owner Amazon S3
    evaluates the context as follows
    1 Because the request is from an IAM user Amazon S3 evaluates the user context by reviewing
    the policies authored by the account to verify that Jill has the necessary permissions If Jill has
    permission then Amazon S3 moves on to evaluate the bucket context if not it denies the request
    2 In the bucket context Amazon S3 verifies that bucket owner 222222222222 has granted Jill (or
    her parent AWS account) permission to perform the requested operation If she has that permission
    Amazon S3 grants the request and performs the operation otherwise Amazon S3 denies the
    request
    How Amazon S3 Authorizes a Request for an Object Operation
    When Amazon S3 receives a request for an object operation it converts all the relevant permissions
    —resourcebased permissions (object access control list (ACL) bucket policy bucket ACL) and IAM
    user policies—into a set of policies to be evaluated at run time It then evaluates the resulting set of
    policies in a series of steps In each step it evaluates a subset of policies in three specific contexts—
    user context bucket context and object context
    1 User context – If the requester is an IAM user the user must have permission from the parent
    AWS account to which it belongs In this step Amazon S3 evaluates a subset of policies owned
    by the parent account (also referred as the context authority) This subset of policies includes the
    user policy that the parent attaches to the user If the parent also owns the resource in the request
    (bucket object) Amazon S3 evaluates the corresponding resource policies (bucket policy bucket
    ACL and object ACL) at the same time
    Note
    If the parent AWS account owns the resource (bucket or object) it can grant resource
    permissions to its IAM user by using either the user policy or the resource policy
    2 Bucket context – In this context Amazon S3 evaluates policies owned by the AWS account that
    owns the bucket
If the AWS account that owns the object in the request is not the same as the bucket owner, then in the bucket context Amazon S3 checks the policies to see whether the bucket owner has explicitly denied access to the object. If there is an explicit deny set on the object, Amazon S3 does not authorize the request.
    3 Object context – The requester must have permissions from the object owner to perform a specific
    object operation In this step Amazon S3 evaluates the object ACL
    Note
    If bucket and object owners are the same access to the object can be granted in the
    bucket policy which is evaluated at the bucket context If the owners are different the
    object owners must use an object ACL to grant permissions If the AWS account that owns
    the object is also the parent account to which the IAM user belongs it can configure object
    permissions in a user policy which is evaluated at the user context For more information
    about using these access policy alternatives see Guidelines for Using the Available Access
    Policy Options (p 277)
    The following is an illustration of the contextbased evaluation for an object operation
    Example 1 Object Operation Request
    In this example IAM user Jill whose parent AWS account is 111111111111 sends an object
    operation request (for example Get object) for an object owned by AWS account 333333333333 in a
    bucket owned by AWS account 222222222222
    Jill will need permission from the parent AWS account the bucket owner and the object owner
    Amazon S3 evaluates the context as follows
    1 Because the request is from an IAM user Amazon S3 evaluates the user context to verify that the
    parent AWS account 111111111111 has given Jill permission to perform the requested operation
    If she has that permission Amazon S3 evaluates the bucket context Otherwise Amazon S3 denies
    the request
    2 In the bucket context the bucket owner AWS account 222222222222 is the context authority
    Amazon S3 evaluates the bucket policy to determine if the bucket owner has explicitly denied Jill
    access to the object
    3 In the object context the context authority is AWS account 333333333333 the object owner
    Amazon S3 evaluates the object ACL to determine if Jill has permission to access the object If she
    does Amazon S3 authorizes the request
    Guidelines for Using the Available Access Policy
    Options
    Amazon S3 supports resourcebased policies and user policies to manage access to your Amazon
    S3 resources (see Managing Access to Resources (Access Policy Options) (p 268)) Resource
    based policies include bucket policies bucket ACLs and object ACLs This section describes specific
    scenarios for using resourcebased access policies to manage access to your Amazon S3 resources
    When to Use an ACLbased Access Policy (Bucket and Object
    ACLs)
    Both buckets and objects have associated ACLs that you can use to grant permissions The following
    sections describe scenarios for using object ACLs and bucket ACLs
    When to Use an Object ACL
    In addition to an object ACL there are other ways an object owner can manage object permissions
    For example
    • If the AWS account that owns the object also owns the bucket then it can write a bucket policy to
    manage the object permissions
    • If the AWS account that owns the object wants to grant permission to a user in its account it can use
    a user policy
So when do you use object ACLs to manage object permissions? The following are the scenarios in which you would use object ACLs to manage object permissions.
    • An object ACL is the only way to manage access to objects not owned by the bucket owner
    – An AWS account that owns the bucket can grant another AWS account permission to upload
    objects The bucket owner does not own these objects The AWS account that created the object
    must grant permissions using object ACLs
    Note
    A bucket owner cannot grant permissions on objects it does not own For example a bucket
    policy granting object permissions applies only to objects owned by the bucket owner
    However the bucket owner who pays the bills can write a bucket policy to deny access to
    any objects in the bucket regardless of who owns it The bucket owner can also delete any
    objects in the bucket
    • Permissions vary by object and you need to manage permissions at the object level – You can
    write a single policy statement granting an AWS account read permission on millions of objects with
    a specific key name prefix For example grant read permission on objects starting with key name
    prefix logs However if your access permissions vary by object granting permissions to individual
    objects using a bucket policy may not be practical Also the bucket policies are limited to 20 KB in
    size
In this case, you may find object ACLs a suitable alternative. Note, however, that an object ACL is limited to a maximum of 100 grants (see Access Control List (ACL) Overview (p 364)).
• Object ACLs control only object-level permissions – There is a single bucket policy for the entire bucket, but object ACLs are specified per object.
An AWS account that owns a bucket can grant another AWS account permission to manage its access policy. Doing so allows that account to change anything in the policy. To better manage permissions, you may choose not to give such a broad permission and instead grant only the READ_ACP and WRITE_ACP permissions on a subset of objects. This limits the account to managing permissions only on specific objects by updating individual object ACLs (see the sketch following this list).
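The following is only a minimal sketch of that approach (assuming the AWS SDK for Java, placeholder bucket, key, and canonical user ID values, and credentials in a local profile; the class name GrantObjectAclPermissions is hypothetical).
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.Permission;

public class GrantObjectAclPermissions {
    public static void main(String[] args) {
        // Placeholder values for illustration only.
        String bucketName = "examplebucket";
        String objectKey = "examplekey";
        String granteeCanonicalId = "*** Grantee-Canonical-User-ID ***";

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Read the object's current ACL, add READ_ACP and WRITE_ACP grants, and write it back.
        AccessControlList acl = s3Client.getObjectAcl(bucketName, objectKey);
        CanonicalGrantee grantee = new CanonicalGrantee(granteeCanonicalId);
        acl.grantPermission(grantee, Permission.ReadAcp);
        acl.grantPermission(grantee, Permission.WriteAcp);
        s3Client.setObjectAcl(bucketName, objectKey, acl);
    }
}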
    When to Use a Bucket ACL
    The only recommended use case for the bucket ACL is to grant write permission to the Amazon S3
    Log Delivery group to write access log objects to your bucket (see Server Access Logging (p 546))
If you want Amazon S3 to deliver access logs to your bucket, you will need to grant write permission on the bucket to the Log Delivery group. The only way you can grant the necessary permissions to the Log Delivery group is via a bucket ACL, as shown in the following bucket ACL fragment.
<Grant>
    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
    </Grantee>
    <Permission>WRITE</Permission>
</Grant>
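If you prefer to apply the grant programmatically, the following is only a minimal sketch (assuming the AWS SDK for Java, a bucket named examplebucket that you own, and credentials in a local profile; the class name GrantLogDeliveryWrite is hypothetical) that adds the same WRITE grant for the Log Delivery group to the bucket ACL.
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.GroupGrantee;
import com.amazonaws.services.s3.model.Permission;

public class GrantLogDeliveryWrite {
    public static void main(String[] args) {
        String bucketName = "examplebucket"; // assumed bucket name

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Read the bucket's current ACL, add a WRITE grant for the Log Delivery group,
        // and write the updated ACL back to the bucket.
        AccessControlList acl = s3Client.getBucketAcl(bucketName);
        acl.grantPermission(GroupGrantee.LogDelivery, Permission.Write);
        s3Client.setBucketAcl(bucketName, acl);
    }
}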
    When to Use a Bucket Policy
    If an AWS account that owns a bucket wants to grant permission to users in its account it can use
    either a bucket policy or a user policy But in the following scenarios you will need to use a bucket
    policy
• You want to manage cross-account permissions for all Amazon S3 permissions – You can use ACLs to grant cross-account permissions to other accounts, but ACLs support only a finite set of permissions (see What Permissions Can I Grant? (p 366)); these don't include all Amazon S3 permissions. For example, you cannot grant permissions on bucket subresources (see Managing Access Permissions to Your Amazon S3 Resources (p 266)) using an ACL.
    Although both bucket and user policies support granting permission for all Amazon S3 operations
    (see Specifying Permissions in a Policy (p 312)) the user policies are for managing permissions
    for users in your account For crossaccount permissions to other AWS accounts or users in another
    account you must use a bucket policy
    When to Use a User Policy
    In general you can use either a user policy or a bucket policy to manage permissions You may
    choose to manage permissions by creating users and managing permissions individually by attaching
    policies to users (or user groups) or you may find that resourcebased policies such as a bucket
    policy work better for your scenario
    Note that AWS Identity and Access Management (IAM) enables you to create multiple users
    within your AWS account and manage their permissions via user policies An IAM user must have
    permissions from the parent account to which it belongs and from the AWS account that owns the
    resource the user wants to access The permissions can be granted as follows
    • Permission from the parent account – The parent account can grant permissions to its user by
    attaching a user policy
    • Permission from the resource owner – The resource owner can grant permission to either the
    IAM user (using a bucket policy) or the parent account (using a bucket policy bucket ACL or object
    ACL)
    This is akin to a child who wants to play with a toy that belongs to someone else In this case the child
    must get permission from a parent to play with the toy and permission from the toy owner
    Permission Delegation
    If an AWS account owns a resource it can grant those permissions to another AWS account That
    account can then delegate those permissions or a subset of them to users in the account This is
    referred to as permission delegation But an account that receives permissions from another account
    cannot delegate permission crossaccount to another AWS account
    Related Topics
    We recommend you first review all introductory topics that explain how you manage access to your
    Amazon S3 resources and related guidelines For more information see Introduction to Managing
    Access Permissions to Your Amazon S3 Resources (p 266) You can then use the following topics for
    more information about specific access policy options
    • Using Bucket Policies and User Policies (p 308)
    • Managing Access with ACLs (p 364)
    Example Walkthroughs Managing Access to Your
    Amazon S3 Resources
    This topic provides the following introductory walkthrough examples for granting access to Amazon S3
    resources These examples use the AWS Management Console to create resources (buckets objects
    users) and grant them permissions The examples then show you how to verify permissions using the
    command line tools so you don't have to write any code We provide commands using both the AWS
    Command Line Interface (CLI) and the AWS Tools for Windows PowerShell
    • Example 1 Bucket Owner Granting Its Users Bucket Permissions (p 284)
    The IAM users you create in your account have no permissions by default In this exercise you grant
    a user permission to perform bucket and object operations
    • Example 2 Bucket Owner Granting CrossAccount Bucket Permissions (p 289)
    In this exercise a bucket owner Account A grants crossaccount permissions to another AWS
    account Account B Account B then delegates those permissions to users in its account
    • Managing object permissions when the object and bucket owners are not the same
    The example scenarios in this case are about a bucket owner granting object permissions to others
    but not all objects in the bucket are owned by the bucket owner What permissions does the bucket
    owner need and how can it delegate those permissions
    The AWS account that creates a bucket is called the bucket owner The owner can grant other AWS
    accounts permission to upload objects and the AWS accounts that create objects own them The
    bucket owner has no permissions on those objects created by other AWS accounts If the bucket
    owner writes a bucket policy granting access to objects the policy does not apply to objects that are
    owned by other accounts
    In this case the object owner must first grant permissions to the bucket owner using an object ACL
    The bucket owner can then delegate those object permissions to others to users in its own account
    or to another AWS account as illustrated by the following examples
    • Example 3 Bucket Owner Granting Its Users Permissions to Objects It Does Not Own (p 295)
    In this exercise the bucket owner first gets permissions from the object owner The bucket owner
    then delegates those permissions to users in its own account
    • Example 4 Bucket Owner Granting Crossaccount Permission to Objects It Does Not
    Own (p 299)
    After receiving permissions from the object owner the bucket owner cannot delegate permission
    to other AWS accounts because crossaccount delegation is not supported (see Permission
    Delegation (p 279)) Instead the bucket owner can create an IAM role with permissions to
    perform specific operations (such as get object) and allow another AWS account to assume that
    role Anyone who assumes the role can then access objects This example shows how a bucket
    owner can use an IAM role to enable this crossaccount delegation
Before You Try the Example Walkthroughs
These examples use the AWS Management Console to create resources and grant permissions. To
test permissions, the examples use the command line tools, the AWS Command Line Interface (CLI)
and the AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions,
you will need to set up one of these tools. For more information, see Setting Up the Tools for the
Example Walkthroughs (p 281).
In addition, when creating resources these examples don't use root credentials of an AWS account.
Instead, you create an administrator user in these accounts to perform these tasks.
About Using an Administrator User to Create Resources and Grant
Permissions
AWS Identity and Access Management (IAM) recommends not using the root credentials of your AWS
account to make requests. Instead, create an IAM user, grant that user full access, and then use
that user's credentials to interact with AWS. We refer to this user as an administrator user. For more
information, go to Root Account Credentials vs IAM User Credentials in the AWS General Reference
and IAM Best Practices in the IAM User Guide.
All example walkthroughs in this section use the administrator user credentials. If you have not created
an administrator user for your AWS account, the topics show you how.
Note that to sign in to the AWS Management Console using the user credentials, you will need to use
the IAM user sign-in URL. The IAM console provides this URL for your AWS account. The topics show
you how to get the URL.
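The walkthroughs create the administrator users in the IAM console. If you prefer the command line, a
minimal sketch like the following creates an equivalent administrator user for CLI use; the user name and
policy ARN shown are examples, and a console password would still be set separately (for example, with
aws iam create-login-profile).
# Create the administrator user (example user name; any name works)
aws iam create-user --user-name AccountAadmin
# Attach the AWS managed AdministratorAccess policy to grant full access
aws iam attach-user-policy --user-name AccountAadmin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Create access keys so the user can be used with the CLI
aws iam create-access-key --user-name AccountAadmin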
Setting Up the Tools for the Example Walkthroughs
The introductory examples (see Example Walkthroughs: Managing Access to Your Amazon S3
Resources (p 280)) use the AWS Management Console to create resources and grant permissions.
To test permissions, the examples use the command line tools, the AWS Command Line Interface
(CLI) and the AWS Tools for Windows PowerShell, so you don't need to write any code. To test
permissions, you must set up one of these tools.
    To set up the AWS CLI
    1 Download and configure the AWS CLI For instructions see the following topics in the AWS
    Command Line Interface User Guide
    Getting Set Up with the AWS Command Line Interface
    Installing the AWS Command Line Interface
    Configuring the AWS Command Line Interface
2 Set the default profile.
You will store user credentials in the AWS CLI config file. Create a default profile in the config file
using your AWS account credentials.
[default]
aws_access_key_id = access key ID
aws_secret_access_key = secret access key
region = us-west-2
3 Verify the setup by entering the following commands at the command prompt. Because these
commands don't provide credentials explicitly, the credentials of the default profile are used.
• Try the help command:
aws help
• Use the aws s3 ls command to get a list of buckets on the configured account:
aws s3 ls
As you go through the example walkthroughs, you will create users, and you will save user credentials
in the config file by creating profiles, as the following example shows. Note that these profiles have
names (AccountAadmin and AccountBadmin).
[profile AccountAadmin]
aws_access_key_id = User AccountAadmin access key ID
aws_secret_access_key = User AccountAadmin secret access key
region = us-west-2

[profile AccountBadmin]
aws_access_key_id = Account B access key ID
aws_secret_access_key = Account B secret access key
region = us-east-1
To execute a command using these user credentials, you add the --profile parameter specifying
the profile name. The following AWS CLI command retrieves a listing of objects in examplebucket
and specifies the AccountBadmin profile.
aws s3 ls s3://examplebucket --profile AccountBadmin
Alternatively, you can configure one set of user credentials as the default profile by changing the
AWS_DEFAULT_PROFILE environment variable from the command prompt. Once you've done this,
whenever you execute AWS CLI commands without the --profile parameter, the AWS CLI will use
the profile you set in the environment variable as the default profile.
export AWS_DEFAULT_PROFILE=AccountAadmin
    To set up AWS Tools for Windows PowerShell
    1 Download and configure the AWS Tools for Windows PowerShell For instructions go to
    Download and Install the AWS Tools for Windows PowerShell in the AWS Tools for Windows
    PowerShell User Guide
    Note
    In order to load the AWS Tools for Windows PowerShell module you need to enable
    PowerShell script execution For more information go to Enable Script Execution in the
    AWS Tools for Windows PowerShell User Guide
2 For these exercises, you will specify AWS credentials per session using the Set-AWSCredentials
command. The command saves the credentials to a persistent store (-StoreAs parameter).
Set-AWSCredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -StoreAs string
3 Verify the setup.
• Execute the Get-Command command to retrieve a list of available commands you can use for
Amazon S3 operations:
Get-Command -Module AWSPowerShell -Noun S3*
• Execute the Get-S3Object command to retrieve a list of objects in a bucket:
Get-S3Object -BucketName bucket-name -StoredCredentials string
    For a list of commands go to Amazon Simple Storage Service Cmdlets
    Now you are ready to try the exercises Follow the links provided at the beginning of the section
Example 1: Bucket Owner Granting Its Users Bucket
Permissions
    Topics
    • Step 0 Preparing for the Walkthrough (p 285)
    • Step 1 Create Resources (a Bucket and an IAM User) in Account A and Grant
    Permissions (p 285)
    • Step 2 Test Permissions (p 287)
In this exercise, an AWS account owns a bucket, and it has an IAM user in the account. The
user by default has no permissions. The parent account must grant permissions to the user to
perform any tasks. Both the bucket owner and the parent account to which the user belongs are the
same. Therefore, the AWS account can use a bucket policy, a user policy, or both to grant its user
permissions on the bucket. You will grant some permissions using a bucket policy and grant other
permissions using a user policy.
    The following steps summarize the walkthrough
    1 Account administrator creates a bucket policy granting a set of permissions to the user
    2 Account administrator attaches a user policy to the user granting additional permissions
    3 User then tries permissions granted via both the bucket policy and the user policy
    For this example you will need an AWS account Instead of using the root credentials of the account
    you will create an administrator user (see About Using an Administrator User to Create Resources and
    Grant Permissions (p 281)) We refer to the AWS account and the administrator user as follows
    Account ID Account Referred To As Administrator User in the
    Account
    111111111111 Account A AccountAadmin
All the tasks of creating users and granting permissions are done in the AWS Management Console.
To verify permissions, the walkthrough uses the command line tools, the AWS Command Line Interface
(CLI) and the AWS Tools for Windows PowerShell, so you don't need to write any code.
    Step 0 Preparing for the Walkthrough
    1 Make sure you have an AWS account and that it has a user with administrator privileges
    a Sign up for an account if needed We refer to this account as Account A
i Go to http://aws.amazon.com/s3 and click Sign Up.
    ii Follow the onscreen instructions
    AWS will notify you by email when your account is active and available for you to use
    b In Account A create an administrator user AccountAadmin Using Account A credentials sign
    in to the IAM console and do the following
    i Create user AccountAadmin and note down the user security credentials
    For instructions see Creating an IAM User in Your AWS Account in the IAM User Guide
    ii Grant AccountAadmin administrator privileges by attaching a user policy giving full
    access
    For instructions see Working with Policies in the IAM User Guide
iii Note down the IAM user sign-in URL for AccountAadmin. You will need to use this URL
when signing in to the AWS Management Console. For more information about where to
find it, see How Users Sign in to Your Account in the IAM User Guide. Note down the URL for
each of the accounts.
    2 Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell
    Make sure you save administrator user credentials as follows
    • If using the AWS CLI create two profiles AccountAadmin and AccountBadmin in the config file
    • If using the AWS Tools for Windows PowerShell make sure you store credentials for the
    session as AccountAadmin and AccountBadmin
    For instructions see Setting Up the Tools for the Example Walkthroughs (p 281)
    Step 1 Create Resources (a Bucket and an IAM User) in Account A and Grant
    Permissions
    Using the credentials of user AccountAadmin in Account A and the special IAM user signin URL sign
    in to the AWS Management Console and do the following
    1 Create Resources (a bucket and an IAM user)
    a In the Amazon S3 console create a bucket Note down the AWS region in which you created
    it For instructions go to Creating a Bucket in the Amazon Simple Storage Service Console
    User Guide
    b In the IAM console do the following
    i Create a user Dave
    For instructions see Creating IAM Users (AWS Management Console) in the IAM User
    Guide
    ii Note down the UserDave credentials
    iii Note down the Amazon Resource Name (ARN) for user Dave In the IAM console select
    the user and the Summary tab provides the user ARN
    2 Grant Permissions
Because the bucket owner and the parent account to which the user belongs are the same, the
AWS account can grant user permissions using a bucket policy, a user policy, or both. In this
example, you do both. If the object is also owned by the same account, the bucket owner can
grant object permissions in the bucket policy (or an IAM policy).
a In the Amazon S3 console, attach the following bucket policy to examplebucket.
The policy has two statements.
• The first statement grants Dave the bucket operation permissions
s3:GetBucketLocation and s3:ListBucket.
• The second statement grants the s3:GetObject permission. Because Account A also
owns the object, the account administrator is able to grant the s3:GetObject permission.
In the Principal element, Dave is identified by his user ARN. For more information about
policy elements, see Access Policy Language Overview (p 308).
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      },
      {
         "Sid": "statement2",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": [
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ]
      }
   ]
}
b Create an inline policy for the user Dave by using the following policy. The policy grants Dave
the s3:PutObject permission. You need to update the policy by providing your bucket
name.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "PermissionForObjectOperations",
         "Effect": "Allow",
         "Action": [
            "s3:PutObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ]
      }
   ]
}
For instructions, see Working with Inline Policies in the IAM User Guide. Note that you need to
be signed in to the console using Account A credentials.
    Step 2 Test Permissions
    Using Dave's credentials verify that the permissions work You can use either of the following two
    procedures
    Test Using the AWS CLI
1 Update the AWS CLI config file by adding the following UserDaveAccountA profile. For more
information, see Setting Up the Tools for the Example Walkthroughs (p 281).
[profile UserDaveAccountA]
aws_access_key_id = access-key
aws_secret_access_key = secret-access-key
region = us-east-1
2 Verify that Dave can perform the operations as granted in the user policy. Upload a sample object
using the following AWS CLI put-object command.
The --body parameter in the command identifies the source file to upload. For example, if the file
is in the root of the C: drive on a Windows machine, you specify c:\HappyFace.jpg. The --key
parameter provides the key name for the object.
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body HappyFace.jpg --profile UserDaveAccountA
Execute the following AWS CLI command to get the object:
aws s3api get-object --bucket examplebucket --key HappyFace.jpg OutputFile.jpg --profile UserDaveAccountA
    Test Using the AWS Tools for Windows PowerShell
1 Store Dave's credentials as AccountADave. You then use these credentials to PUT and GET an
object.
Set-AWSCredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -StoreAs AccountADave
2 Upload a sample object using the AWS Tools for Windows PowerShell Write-S3Object
command, using user Dave's stored credentials.
Write-S3Object -BucketName examplebucket -Key HappyFace.jpg -File HappyFace.jpg -StoredCredentials AccountADave
Download the previously uploaded object:
Read-S3Object -BucketName examplebucket -Key HappyFace.jpg -File Output.jpg -StoredCredentials AccountADave
Example 2: Bucket Owner Granting Cross-Account Bucket
Permissions
    Topics
    • Step 0 Preparing for the Walkthrough (p 290)
    • Step 1 Do the Account A Tasks (p 291)
    • Step 2 Do the Account B Tasks (p 292)
    • Step 3 Extra Credit Try Explicit Deny (p 293)
    • Step 4 Clean Up (p 294)
An AWS account, Account A in this example, can grant another AWS account, Account B, permission
to access its resources such as buckets and objects. Account B can then delegate those permissions
to users in its account. In this example scenario, a bucket owner grants cross-account permission to
another account to perform specific bucket operations.
Note
Account A can also directly grant a user in Account B permissions using a bucket policy.
But the user will still need permission from the parent account, Account B, to which the user
belongs, even if Account B does not have permissions from Account A. As long as the user
has permission from both the resource owner and the parent account, the user will be able to
access the resource.
The following is a summary of the walkthrough steps:
1 Account A administrator user attaches a bucket policy granting cross-account permissions to
Account B to perform specific bucket operations.
Note that the administrator user in Account B automatically inherits these permissions.
2 Account B administrator user attaches a user policy to the user, delegating the permissions it received
from Account A.
3 User in Account B then verifies permissions by accessing an object in the bucket owned by Account
A.
    For this example you need two accounts The following table shows how we refer to these accounts
    and the administrator users in them Per IAM guidelines (see About Using an Administrator User to
    Create Resources and Grant Permissions (p 281)) we do not use the account root credentials in this
    walkthrough Instead you create an administrator user in each account and use those credentials in
    creating resources and granting them permissions
    AWS Account ID Account Referred To As Administrator User in the
    Account
    111111111111 Account A AccountAadmin
    222222222222 Account B AccountBadmin
    All the tasks of creating users and granting permissions are done in the AWS Management Console
    To verify permissions the walkthrough uses the command line tools AWS Command Line Interface
    (CLI) and AWS Tools for Windows PowerShell so you don't need to write any code
    Step 0 Preparing for the Walkthrough
    1 Make sure you have two AWS accounts and that each account has one administrator user as
    shown in the table in the preceding section
    a Sign up for an AWS account if needed
i Go to http://aws.amazon.com/s3 and click Create an AWS Account.
    ii Follow the onscreen instructions
    AWS will notify you by email when your account is active and available for you to use
    b Using Account A credentials sign in to the IAM console to create the administrator user
    i Create user AccountAadmin and note down the security credentials For instructions see
    Creating an IAM User in Your AWS Account in the IAM User Guide
    ii Grant AccountAadmin administrator privileges by attaching a user policy giving full
    access For instructions see Working with Policies in the IAM User Guide
c While you are in the IAM console, note down the IAM user sign-in URL on the Dashboard.
All users in the account must use this URL when signing in to the AWS Management Console.
For more information, see How Users Sign in to Your Account in the IAM User Guide.
    d Repeat the preceding step using Account B credentials and create administrator user
    AccountBadmin
    2 Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell
    Make sure you save administrator user credentials as follows
    • If using the AWS CLI create two profiles AccountAadmin and AccountBadmin in the config file
    • If using the AWS Tools for Windows PowerShell make sure you store credentials for the
    session as AccountAadmin and AccountBadmin
    For instructions see Setting Up the Tools for the Example Walkthroughs (p 281)
    3 Save the administrator user credentials also referred to as profiles You can use the profile name
    instead of specifying credentials for each command you enter For more information see Setting
    Up the Tools for the Example Walkthroughs (p 281)
a Add profiles in the AWS CLI config file for each of the administrator users in the two accounts:
[profile AccountAadmin]
aws_access_key_id = access-key-ID
aws_secret_access_key = secret-access-key
region = us-east-1

[profile AccountBadmin]
aws_access_key_id = access-key-ID
aws_secret_access_key = secret-access-key
region = us-east-1
b If you are using the AWS Tools for Windows PowerShell:
Set-AWSCredentials -AccessKey AcctA-access-key-ID -SecretKey AcctA-secret-access-key -StoreAs AccountAadmin
Set-AWSCredentials -AccessKey AcctB-access-key-ID -SecretKey AcctB-secret-access-key -StoreAs AccountBadmin
Step 1: Do the Account A Tasks
Step 1.1: Sign In to the AWS Management Console
Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as the
AccountAadmin user. This user will create a bucket and attach a policy to it.
Step 1.2: Create a Bucket
1 In the Amazon S3 console, create a bucket. This exercise assumes the bucket is created in the
US East (N. Virginia) region and is named examplebucket.
For instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User
Guide.
    2 Upload a sample object to the bucket
    For instructions go to Add an Object to a Bucket in the Amazon Simple Storage Service Getting
    Started Guide
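If you prefer to do this step from the command line instead of the console, a rough AWS CLI equivalent
looks like the following; the bucket name, file name, and profile are the same placeholders used
throughout this walkthrough.
# Create the bucket in the US East (N. Virginia) region
aws s3 mb s3://examplebucket --region us-east-1 --profile AccountAadmin
# Upload a sample object
aws s3 cp HappyFace.jpg s3://examplebucket/ --profile AccountAadmin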
Step 1.3: Attach a Bucket Policy to Grant Cross-Account Permissions to Account B
The bucket policy grants the s3:GetBucketLocation and s3:ListBucket permissions to Account
B. It is assumed you are still signed in to the console using AccountAadmin user credentials.
1 Attach the following bucket policy to examplebucket. The policy grants Account B permission for
the s3:GetBucketLocation and s3:ListBucket actions.
For instructions on editing bucket permissions, go to Editing Bucket Permissions in the Amazon
Simple Storage Service Console User Guide. Follow these steps to add a bucket policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "Example permissions",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:root"
         },
         "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      }
   ]
}
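The console is the path this walkthrough takes, but the same policy can also be attached from the
command line. A sketch, assuming the policy above is saved locally as policy.json:
# Attach the bucket policy from a local JSON file
aws s3api put-bucket-policy --bucket examplebucket --policy file://policy.json --profile AccountAadmin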
2 Verify that Account B (and thus its administrator user) can perform the operations.
• Using the AWS CLI:
aws s3 ls s3://examplebucket --profile AccountBadmin
aws s3api get-bucket-location --bucket examplebucket --profile AccountBadmin
• Using the AWS Tools for Windows PowerShell:
Get-S3Object -BucketName examplebucket -StoredCredentials AccountBadmin
Get-S3BucketLocation -BucketName examplebucket -StoredCredentials AccountBadmin
Step 2: Do the Account B Tasks
Now the Account B administrator creates a user Dave and delegates to Dave the permissions received
from Account A.
Step 2.1: Sign In to the AWS Management Console
Using the IAM user sign-in URL for Account B, first sign in to the AWS Management Console as the
AccountBadmin user.
Step 2.2: Create User Dave in Account B
1 In the IAM console, create a user Dave.
For instructions, see Creating IAM Users (AWS Management Console) in the IAM User Guide.
2 Note down the user Dave credentials.
Step 2.3: Delegate Permissions to User Dave
• Create an inline policy for the user Dave by using the following policy. You will need to update the
policy by providing your bucket name.
It is assumed you are signed in to the console using AccountBadmin user credentials.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "Example",
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      }
   ]
}
    For instructions see Working with Inline Policies in the IAM User Guide
Step 2.4: Test Permissions
Now Dave in Account B can list the contents of examplebucket owned by Account A. You can verify
the permissions using either of the following procedures.
    Test Using the AWS CLI
1 Add the UserDave profile to the AWS CLI config file. For more information about the config file,
see Setting Up the Tools for the Example Walkthroughs (p 281).
[profile UserDave]
aws_access_key_id = access-key
aws_secret_access_key = secret-access-key
region = us-east-1
2 At the command prompt, enter the following AWS CLI command to verify that Dave can now get
an object list from the examplebucket owned by Account A. Note that the command specifies the
UserDave profile.
aws s3 ls s3://examplebucket --profile UserDave
Dave does not have any other permissions. So if he tries any other operation, for example the
following get-bucket-location, Amazon S3 returns permission denied.
aws s3api get-bucket-location --bucket examplebucket --profile UserDave
    Test Using AWS Tools for Windows PowerShell
1 Store Dave's credentials as AccountBDave.
Set-AWSCredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -StoreAs AccountBDave
2 Try the list bucket command:
Get-S3Object -BucketName examplebucket -StoredCredentials AccountBDave
Dave does not have any other permissions. So if he tries any other operation, for example
get-bucket-location, Amazon S3 returns permission denied.
Get-S3BucketLocation -BucketName examplebucket -StoredCredentials AccountBDave
Step 3: Extra Credit: Try Explicit Deny
You can have permissions granted via an ACL, a bucket policy, and a user policy. But if there is
an explicit deny set via either a bucket policy or a user policy, the explicit deny takes precedence
over any other permissions. For testing, let's update the bucket policy and explicitly deny Account B
the s3:ListBucket permission. The policy also grants s3:ListBucket permission, but explicit
deny takes precedence, and Account B or users in Account B will not be able to list objects in
examplebucket.
1 Using credentials of user AccountAadmin in Account A, replace the bucket policy with the following.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "Example permissions",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:root"
         },
         "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      },
      {
         "Sid": "Deny permission",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:root"
         },
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      }
   ]
}
2 Now if you try to get a bucket list using AccountBadmin credentials, you will get access denied.
• Using the AWS CLI:
aws s3 ls s3://examplebucket --profile AccountBadmin
• Using the AWS Tools for Windows PowerShell:
Get-S3Object -BucketName examplebucket -StoredCredentials AccountBDave
Step 4: Clean Up
1 After you are done testing, you can do the following to clean up.
• Sign in to the AWS Management Console using Account A credentials and do the following:
• In the Amazon S3 console, remove the bucket policy attached to examplebucket. In the
bucket Properties, delete the policy in the Permissions section.
• If the bucket was created for this exercise, in the Amazon S3 console delete the objects and
then delete the bucket.
• In the IAM console, remove the AccountAadmin user.
2 Sign in to the AWS Management Console using Account B credentials. In the IAM console, delete
user AccountBadmin.
Example 3: Bucket Owner Granting Its Users Permissions to
Objects It Does Not Own
    Topics
    • Step 0 Preparing for the Walkthrough (p 296)
    • Step 1 Do the Account A Tasks (p 297)
    • Step 2 Do the Account B Tasks (p 298)
    • Step 3 Test Permissions (p 298)
    • Step 4 Clean Up (p 299)
The scenario for this example is that a bucket owner wants to grant permission to access objects, but
not all objects in the bucket are owned by the bucket owner. How can a bucket owner grant permission
on objects it does not own? For this example, the bucket owner is trying to grant permission to users in
its own account.
A bucket owner can enable other AWS accounts to upload objects. These objects are owned by the
accounts that created them. The bucket owner does not own objects that were not created by the
bucket owner. Therefore, for the bucket owner to grant access to these objects, the object owner must
first grant permission to the bucket owner using an object ACL. The bucket owner can then delegate
those permissions via a bucket policy. In this example, the bucket owner delegates permission to users
in its own account.
The following is a summary of the walkthrough steps:
1 Account A administrator user attaches a bucket policy with two statements.
• Allow cross-account permission to Account B to upload objects.
• Allow a user in its own account to access objects in the bucket.
2 Account B administrator user uploads objects to the bucket owned by Account A.
3 Account B administrator updates the object ACL, adding a grant that gives the bucket owner full
control permission on the object.
4 User in Account A verifies by accessing objects in the bucket, regardless of who owns them.
    For this example you need two accounts The following table shows how we refer to these accounts
    and the administrator users in these accounts Per IAM guidelines (see About Using an Administrator
    User to Create Resources and Grant Permissions (p 281)) we do not use the account root credentials
    in this walkthrough Instead you create an administrator user in each account and use those
    credentials in creating resources and granting them permissions
    AWS Account ID Account Referred To As Administrator User in the
    Account
    111111111111 Account A AccountAadmin
    222222222222 Account B AccountBadmin
    All the tasks of creating users and granting permissions are done in the AWS Management Console
    To verify permissions the walkthrough uses the command line tools AWS Command Line Interface
    (CLI) and AWS Tools for Windows PowerShell so you don't need to write any code
    Step 0 Preparing for the Walkthrough
    1 Make sure you have two AWS accounts and each account has one administrator user as shown in
    the table in the preceding section
    a Sign up for an AWS account if needed
i Go to http://aws.amazon.com/s3 and click Create an AWS Account.
    ii Follow the onscreen instructions AWS will notify you by email when your account is
    active and available for you to use
    b Using Account A credentials sign in to the IAM console and do the following to create an
    administrator user
    • Create user AccountAadmin and note down security credentials For more information
    about adding users see Creating an IAM User in Your AWS Account in the IAM User
    Guide
    • Grant AccountAadmin administrator privileges by attaching a user policy giving full access
    For instructions see Working with Policies in the IAM User Guide
• In the IAM console Dashboard, note down the IAM user sign-in URL. Users in this
account must use this URL when signing in to the AWS Management Console. For more
information, see How Users Sign in to Your Account in the IAM User Guide.
    c Repeat the preceding step using Account B credentials and create administrator user
    AccountBadmin
    2 Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell
    Make sure you save administrator user credentials as follows
    • If using the AWS CLI create two profiles AccountAadmin and AccountBadmin in the config file
    • If using the AWS Tools for Windows PowerShell make sure you store credentials for the
    session as AccountAadmin and AccountBadmin
    For instructions see Setting Up the Tools for the Example Walkthroughs (p 281)
Step 1: Do the Account A Tasks
Step 1.1: Sign In to the AWS Management Console
Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as the
AccountAadmin user. This user will create a bucket and attach a policy to it.
Step 1.2: Create a Bucket, a User, and Add a Bucket Policy Granting User Permissions
    1 In the Amazon S3 console create a bucket This exercise assumes the bucket is created in the
    US East (N Virginia) region and the name is examplebucket
    For instructions go to Creating a Bucket in the Amazon Simple Storage Service Console User
    Guide
    2 In the IAM console create a user Dave
    For instructions see Creating IAM Users (AWS Management Console) in the IAM User Guide
3 Note down the Dave credentials.
4 In the Amazon S3 console, attach the following bucket policy to the examplebucket bucket. For
instructions, go to Editing Bucket Permissions in the Amazon Simple Storage Service Console
User Guide. Follow the steps to add a bucket policy.
The policy grants Account B the s3:PutObject permission. The policy also grants user Dave the
s3:GetObject permission.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "Statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:root"
         },
         "Action": [
            "s3:PutObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ]
      },
      {
         "Sid": "Statement3",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": [
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ]
      }
   ]
}
Step 2: Do the Account B Tasks
Now that Account B has permissions to perform operations on Account A's bucket, the Account B
administrator will do the following:
• Upload an object to Account A's bucket.
• Add a grant in the object ACL to allow the bucket owner, Account A, full control.
Using the AWS CLI
1 Using the put-object AWS CLI command, upload an object. The --body parameter in the
command identifies the source file to upload. For example, if the file is on the C: drive of a Windows
machine, you would specify c:\HappyFace.jpg. The --key parameter provides the key name
for the object.
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body HappyFace.jpg --profile AccountBadmin
2 Add a grant to the object ACL to allow the bucket owner full control of the object.
aws s3api put-object-acl --bucket examplebucket --key HappyFace.jpg --grant-full-control id="AccountA-CanonicalUserID" --profile AccountBadmin
Using the AWS Tools for Windows PowerShell
1 Using the Write-S3Object AWS Tools for Windows PowerShell command, upload an object.
Write-S3Object -BucketName examplebucket -Key HappyFace.jpg -File HappyFace.jpg -StoredCredentials AccountBadmin
2 Add a grant to the object ACL to allow the bucket owner full control of the object.
Set-S3ACL -BucketName examplebucket -Key HappyFace.jpg -CannedACLName "bucket-owner-full-control" -StoredCredentials AccountBadmin
Step 3: Test Permissions
Now verify that user Dave in Account A can access the object owned by Account B.
Using the AWS CLI
1 Add user Dave credentials to the AWS CLI config file and create a new profile,
UserDaveAccountA. For more information, see Setting Up the Tools for the Example
Walkthroughs (p 281).
[profile UserDaveAccountA]
aws_access_key_id = access-key
aws_secret_access_key = secret-access-key
region = us-east-1
2 Execute the get-object AWS CLI command to download HappyFace.jpg and save it locally.
You provide user Dave credentials by adding the --profile parameter.
aws s3api get-object --bucket examplebucket --key HappyFace.jpg OutputFile.jpg --profile UserDaveAccountA
Using the AWS Tools for Windows PowerShell
1 Store user Dave's AWS credentials as UserDaveAccountA in the persistent store.
Set-AWSCredentials -AccessKey UserDave-AccessKey -SecretKey UserDave-SecretAccessKey -StoreAs UserDaveAccountA
2 Execute the Read-S3Object command to download the HappyFace.jpg object and save it
locally. You provide user Dave credentials by adding the -StoredCredentials parameter.
Read-S3Object -BucketName examplebucket -Key HappyFace.jpg -File HappyFace.jpg -StoredCredentials UserDaveAccountA
Step 4: Clean Up
1 After you are done testing, you can do the following to clean up.
• Sign in to the AWS Management Console using Account A credentials and do the following:
• In the Amazon S3 console, remove the bucket policy attached to examplebucket. In the
bucket Properties, delete the policy in the Permissions section.
• If the bucket was created for this exercise, in the Amazon S3 console delete the objects and
then delete the bucket.
• In the IAM console, remove the AccountAadmin user.
2 Sign in to the AWS Management Console using Account B credentials. In the IAM console, delete
user AccountBadmin.
Example 4: Bucket Owner Granting Cross-Account Permission
to Objects It Does Not Own
    Topics
    • Background CrossAccount Permissions and Using IAM Roles (p 300)
    • Step 0 Preparing for the Walkthrough (p 301)
    • Step 1 Do the Account A Tasks (p 302)
    • Step 2 Do the Account B Tasks (p 305)
    • Step 3 Do the Account C Tasks (p 305)
    • Step 4 Clean Up (p 307)
    • Related Resources (p 307)
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload
objects. That is, your bucket can have objects that other AWS accounts own.
Now, suppose as a bucket owner you need to grant cross-account permission on objects, regardless
of who the owner is, to a user in another account. For example, that user could be a billing application
that needs to access object metadata. There are two core issues:
    API Version 20060301
    299Amazon Simple Storage Service Developer Guide
    Example Walkthroughs Managing Access
• The bucket owner has no permissions on those objects created by other AWS accounts. So for the
bucket owner to grant permissions on objects it does not own, the object owner, the AWS account
that created the objects, must first grant permission to the bucket owner. The bucket owner can then
delegate those permissions.
• The bucket owner account can delegate permissions to users in its own account (see Example 3:
Bucket Owner Granting Its Users Permissions to Objects It Does Not Own (p 295)), but it cannot
delegate permissions to other AWS accounts because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role
with permission to access objects and grant another AWS account permission to assume the role
temporarily, enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access
is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to
temporarily delegate object access cross-account to users in another AWS account, Account C. Each
IAM role you create has two policies attached to it:
• A trust policy identifying another AWS account that can assume the role.
• An access policy defining what permissions (for example, s3:GetObject) are allowed when
someone assumes the role. For a list of permissions you can specify in a policy, see Specifying
Permissions in a Policy (p 312).
The AWS account identified in the trust policy then grants its user permission to assume the role. The
user can then do the following to access objects:
• Assume the role and, in response, get temporary security credentials.
• Using the temporary security credentials, access the objects in the bucket.
For more information about IAM roles, go to IAM Roles in the IAM User Guide.
    The following is a summary of the walkthrough steps
    1 Account A administrator user attaches a bucket policy granting Account B conditional permission to
    upload objects
2 Account A administrator creates an IAM role, establishing trust with Account C, so users in that
account can access Account A. The access policy attached to the role limits what the user in Account C
can do when the user accesses Account A.
3 Account B administrator uploads an object to the bucket owned by Account A, granting full-control
permission to the bucket owner.
4 Account C administrator creates a user and attaches a user policy that allows the user to assume
the role.
5 User in Account C first assumes the role, which returns the user temporary security credentials.
Using those temporary credentials, the user then accesses objects in the bucket.
    For this example you need three accounts The following table shows how we refer to these accounts
    and the administrator users in these accounts Per IAM guidelines (see About Using an Administrator
    User to Create Resources and Grant Permissions (p 281)) we do not use the account root credentials
    in this walkthrough Instead you create an administrator user in each account and use those
    credentials in creating resources and granting them permissions
    AWS Account ID Account Referred To As Administrator User in the
    Account
    111111111111 Account A AccountAadmin
    222222222222 Account B AccountBadmin
    333333333333 Account C AccountCadmin
Step 0: Preparing for the Walkthrough
Note
You may want to open a text editor and write down some of the information as you walk
through the steps. In particular, you will need account IDs, canonical user IDs, the IAM user sign-in
URL for each account to connect to the console, and the Amazon Resource Names (ARNs) of
the IAM users and roles.
    1 Make sure you have three AWS accounts and each account has one administrator user as shown
    in the table in the preceding section
    a Sign up for AWS accounts as needed We refer to these accounts as Account A Account B
    and Account C
i Go to http://aws.amazon.com/s3 and click Create an AWS Account.
    ii Follow the onscreen instructions
    AWS will notify you by email when your account is active and available for you to use
    b Using Account A credentials sign in to the IAM console and do the following to create an
    administrator user
    • Create user AccountAadmin and note down security credentials For more information
    about adding users see Creating an IAM User in Your AWS Account in the IAM User
    Guide
    • Grant AccountAadmin administrator privileges by attaching a user policy giving full access
    For instructions see Working with Policies in the IAM User Guide
• In the IAM Console Dashboard, note down the IAM user sign-in URL. Users in this
account must use this URL when signing in to the AWS Management Console. For more
information, go to How Users Sign In to Your Account in the IAM User Guide.
    c Repeat the preceding step to create administrator users in Account B and Account C
2 For Account C, note down the account ID.
When you create an IAM role in Account A, the trust policy grants Account C permission to
assume the role by specifying the account ID. You can find account information as follows:
a Go to http://aws.amazon.com and, from the My Account/Console drop-down menu, select
Security Credentials.
b Sign in using appropriate account credentials.
c Click Account Identifiers and note down the AWS Account ID and the Canonical User ID.
3 When creating a bucket policy, you will need the following information. Note down these values:
• Canonical user ID of Account A – When the Account A administrator grants conditional upload
object permission to the Account B administrator, the condition specifies the canonical user ID of
the Account A user that must get full control of the objects.
Note
The canonical user ID is an Amazon S3-only concept. It is a 64-character obfuscated
version of the account ID.
• User ARN for Account B administrator – You can find the user ARN in the IAM console. You
will need to select the user and find the user's ARN in the Summary tab.
In the bucket policy, you grant AccountBadmin permission to upload objects, and you specify the
user using the ARN. Here's an example ARN value:
arn:aws:iam::AccountB-ID:user/AccountBadmin
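If you'd rather look these values up from the command line, the following commands return the same
information; the profile names are the ones assumed earlier in this walkthrough.
# Canonical user ID of Account A (returned as the bucket owner's ID)
aws s3api list-buckets --query Owner.ID --output text --profile AccountAadmin
# ARN of the AccountBadmin IAM user
aws iam get-user --user-name AccountBadmin --profile AccountBadmin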
    4 Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell
    Make sure you save administrator user credentials as follows
    • If using the AWS CLI create profiles AccountAadmin and AccountBadmin in the config file
    • If using the AWS Tools for Windows PowerShell make sure you store credentials for the
    session as AccountAadmin and AccountBadmin
    For instructions see Setting Up the Tools for the Example Walkthroughs (p 281)
Step 1: Do the Account A Tasks
In this example, Account A is the bucket owner. So user AccountAadmin in Account A will create a
bucket, attach a bucket policy granting the Account B administrator permission to upload objects, and
create an IAM role granting Account C permission to assume the role so it can access objects in the
bucket.
Step 1.1: Sign In to the AWS Management Console
Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as the
AccountAadmin user. This user will create a bucket and attach a policy to it.
Step 1.2: Create a Bucket and Attach a Bucket Policy
In the Amazon S3 console, do the following:
1 Create a bucket. This exercise assumes the bucket name is examplebucket.
For instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User
Guide.
2 Attach the following bucket policy granting the Account B administrator conditional permission to
upload objects.
You need to update the policy by providing your own values for examplebucket, AccountB-ID,
and the CanonicalUserId-of-AWSaccountA-BucketOwner.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "111",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/AccountBadmin"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*"
      },
      {
         "Sid": "112",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/AccountBadmin"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringNotEquals": {
               "s3:x-amz-grant-full-control": "id=CanonicalUserId-of-AWSaccountA-BucketOwner"
            }
         }
      }
   ]
}
Step 1.3: Create an IAM Role to Allow Account C Cross-Account Access in Account A
In the IAM console, create an IAM role (examplerole) that grants Account C permission to assume
the role. Make sure you are still signed in as the Account A administrator, because the role must be
created in Account A.
1 Before creating the role, prepare the managed policy that defines the permissions that the role
requires. You attach this policy to the role in a later step.
a In the navigation pane on the left, click Policies, and then click Create Policy.
b Next to Create Your Own Policy, click Select.
c Enter access-accountA-bucket in the Policy Name field.
d Copy the following access policy and paste it into the Policy Document field. The access
policy grants the role s3:GetObject permission, so when the Account C user assumes the role,
it can only perform the s3:GetObject operation.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::examplebucket/*"
      }
   ]
}
    e Click Create Policy
    The new policy appears in the list of managed policies
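As an aside, the same managed policy can be created from the command line. A sketch, assuming the
access policy above is saved locally as access-accountA-bucket.json:
# Create the managed policy from a local JSON file
aws iam create-policy --policy-name access-accountA-bucket --policy-document file://access-accountA-bucket.json --profile AccountAadmin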
2 In the navigation pane on the left, click Roles, and then click Create New Role.
3 Enter examplerole for the role name, and then click Next Step.
4 Under Select Role Type, select Role for Cross-Account Access, and then click the Select
button next to Provide access between AWS accounts you own.
5 Enter the Account C account ID.
For this walkthrough, you do not need to require users to have multi-factor authentication (MFA) to
assume the role, so leave that option unselected.
6 Click Next Step to set the permissions that will be associated with the role.
7 Select the box next to the access-accountA-bucket policy that you created, and then click Next
Step.
    The Review page appears so you can confirm the settings for the role before it's created One very
    important item to note on this page is the link that you can send to your users who need to use
    this role Users who click the link go straight to the Switch Role page with the Account ID and Role
    Name fields already filled in You can also see this link later on the Role Summary page for any
    crossaccount role
    8 After reviewing the role click Create Role
    The examplerole role is displayed in the list of roles
9 Click the role name examplerole.
10 Select the Trust Relationships tab.
11 Click Show policy document and verify that the trust policy shown matches the following policy.
The following trust policy establishes trust with Account C by allowing it the sts:AssumeRole
action. For more information, go to AssumeRole in the AWS Security Token Service API
Reference.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountC-ID:root"
         },
         "Action": "sts:AssumeRole"
      }
   ]
}
    12 Note down the Amazon Resource Name (ARN) of the examplerole role you created
    Later in the following steps you attach a user policy to allow an IAM user to assume this role and
    you identify the role by the ARN value
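If you want to script this step instead of using the console wizard, a rough CLI equivalent looks like the
following. It assumes the trust policy above is saved locally as trust-policy.json and that you noted the
ARN returned when the access-accountA-bucket policy was created; substitute your own policy ARN.
# Create the role with the trust policy that allows Account C to assume it
aws iam create-role --role-name examplerole --assume-role-policy-document file://trust-policy.json --profile AccountAadmin
# Attach the access policy created earlier (replace the policy ARN with your own)
aws iam attach-role-policy --role-name examplerole --policy-arn arn:aws:iam::AccountA-ID:policy/access-accountA-bucket --profile AccountAadmin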
Step 2: Do the Account B Tasks
The examplebucket owned by Account A needs objects owned by other accounts. In this step, the
Account B administrator uploads an object using the command line tools.
• Using the put-object AWS CLI command, upload an object to examplebucket.
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body HappyFace.jpg --grant-full-control id="canonicalUserId-ofTheBucketOwner" --profile AccountBadmin
Note the following:
• The --profile parameter specifies the AccountBadmin profile, so the object is owned by Account
B.
• The --grant-full-control parameter grants the bucket owner full-control permission on the
object, as required by the bucket policy.
• The --body parameter identifies the source file to upload. For example, if the file is on the C:
drive of a Windows computer, you specify c:\HappyFace.jpg.
Step 3: Do the Account C Tasks
In the preceding steps, Account A has already created a role, examplerole, establishing trust with
Account C. This allows users in Account C to access Account A. In this step, the Account C administrator
creates a user (Dave) and delegates to him the sts:AssumeRole permission it received from Account
A. This will allow Dave to assume the examplerole and temporarily gain access to Account A.
The access policy that Account A attached to the role will limit what Dave can do when he accesses
Account A, specifically getting objects in examplebucket.
Step 3.1: Create a User in Account C and Delegate Permission to Assume examplerole
1 Using the IAM user sign-in URL for Account C, first sign in to the AWS Management Console as
the AccountCadmin user.
2 In the IAM console, create a user Dave.
For instructions, see Creating IAM Users (AWS Management Console) in the IAM User Guide.
3 Note down the Dave credentials. Dave will need these credentials to assume the examplerole
role.
4 Create an inline policy for the Dave IAM user to delegate the sts:AssumeRole permission to
Dave on the examplerole role in Account A.
a In the navigation pane on the left, click Users.
b Click the user name Dave.
c On the user details page, select the Permissions tab and then expand the Inline Policies
section.
d Choose click here (or Create User Policy).
e Click Custom Policy, and then click Select.
f Enter a name for the policy in the Policy Name field.
g Copy the following policy into the Policy Document field.
You will need to update the policy by providing the Account A ID.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": ["sts:AssumeRole"],
         "Resource": "arn:aws:iam::AccountA-ID:role/examplerole"
      }
   ]
}
h Click Apply Policy.
5 Save Dave's credentials to the config file of the AWS CLI by adding another profile,
AccountCDave.
[profile AccountCDave]
aws_access_key_id = UserDaveAccessKeyID
aws_secret_access_key = UserDaveSecretAccessKey
region = us-west-2
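For reference, the user creation and inline policy steps above can also be scripted. A sketch, assuming
the policy shown earlier is saved locally as assume-examplerole.json and that an AccountCadmin CLI
profile exists:
# Create the IAM user Dave in Account C and generate access keys for him
aws iam create-user --user-name Dave --profile AccountCadmin
aws iam create-access-key --user-name Dave --profile AccountCadmin
# Attach the inline policy that lets Dave assume examplerole in Account A
aws iam put-user-policy --user-name Dave --policy-name assume-examplerole --policy-document file://assume-examplerole.json --profile AccountCadmin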
Step 3.2: Assume Role (examplerole) and Access Objects
Now Dave can access objects in the bucket owned by Account A as follows:
• Dave first assumes the examplerole using his own credentials. This will return temporary
credentials.
• Using the temporary credentials, Dave will then access objects in Account A's bucket.
1 At the command prompt, execute the following AWS CLI assume-role command using the
AccountCDave profile.
You will need to update the ARN value in the command by providing the Account A ID where
examplerole is defined.
aws sts assume-role --role-arn arn:aws:iam::accountA-ID:role/examplerole --profile AccountCDave --role-session-name test
In response, AWS Security Token Service (STS) returns temporary security credentials (access
key ID, secret access key, and a security token).
2 Save the temporary security credentials in the AWS CLI config file under the TempCred profile.
[profile TempCred]
aws_access_key_id = temp-access-key-ID
aws_secret_access_key = temp-secret-access-key
aws_security_token = security-token
region = us-west-2
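Copying the three values by hand is error prone; if it helps, the same assume-role call can print just the
credentials by adding a --query filter, for example:
# Print only the access key ID, secret access key, and session token
aws sts assume-role --role-arn arn:aws:iam::accountA-ID:role/examplerole --profile AccountCDave --role-session-name test --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text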
3 At the command prompt, execute the following AWS CLI command to access objects using the
temporary credentials. For example, the command uses the get-object API to retrieve the
HappyFace.jpg object and save it locally.
aws s3api get-object --bucket examplebucket --key HappyFace.jpg SaveFileAs.jpg --profile TempCred
Because the access policy attached to examplerole allows the action, Amazon S3 processes
the request. You can try the same action on any other object in the bucket.
If you try any other action, for example get-object-acl, you will get permission denied
because the role is not allowed that action.
aws s3api get-object-acl --bucket examplebucket --key HappyFace.jpg --profile TempCred
We used user Dave to assume the role and access the object using temporary credentials. It could
also be an application in Account C that accesses objects in examplebucket. The application can
obtain temporary security credentials, and Account C can delegate the application permission to
assume examplerole.
Step 4: Clean Up
1 After you are done testing, you can do the following to clean up.
• Sign in to the AWS Management Console using Account A credentials and do the following:
• In the Amazon S3 console, remove the bucket policy attached to examplebucket. In the
bucket Properties, delete the policy in the Permissions section.
• If the bucket was created for this exercise, in the Amazon S3 console delete the objects and
then delete the bucket.
• In the IAM console, remove the examplerole you created in Account A.
• In the IAM console, remove the AccountAadmin user.
2 Sign in to the AWS Management Console using Account B credentials. In the IAM console, delete
user AccountBadmin.
3 Sign in to the AWS Management Console using Account C credentials. In the IAM console, delete
user AccountCadmin and user Dave.
    Related Resources
    • Creating a Role to Delegate Permissions to an IAM User in the IAM User Guide
    • Tutorial Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide
    • Working with Policies in the IAM User Guide
    Using Bucket Policies and User Policies
    Topics
    • Access Policy Language Overview (p 308)
    • Bucket Policy Examples (p 334)
    • User Policy Examples (p 343)
Bucket policy and user policy are two of the access policy options available for you to grant permission
to your Amazon S3 resources. Both use a JSON-based access policy language. The topics in this
section describe the key policy language elements, with emphasis on Amazon S3-specific details, and
provide example bucket and user policies.
Important
We recommend you first review the introductory topics that explain the basic concepts
and options available for you to manage access to your Amazon S3 resources. For
more information, see Introduction to Managing Access Permissions to Your Amazon S3
Resources (p 266).
    Access Policy Language Overview
The topics in this section describe the basic elements used in bucket and user policies as used in
Amazon S3. For complete policy language information, see the Overview of IAM Policies and the AWS
IAM Policy Reference topics in the IAM User Guide.
Note
Bucket policies are limited to 20 KB in size.
Common Elements in an Access Policy
In its most basic sense, a policy contains the following elements:
• Resources – Buckets and objects are the Amazon S3 resources for which you can allow or deny
permissions. In a policy, you use the Amazon Resource Name (ARN) to identify the resource.
• Actions – For each resource, Amazon S3 supports a set of operations. You identify the resource
operations that you will allow (or deny) by using action keywords (see Specifying Permissions in a
Policy (p 312)).
For example, the s3:ListBucket permission allows the user to use the Amazon S3 GET
Bucket (List Objects) operation.
• Effect – What the effect will be when the user requests the specific action—this can be either allow
or deny.
If you do not explicitly grant access to (allow) a resource, access is implicitly denied. You can also
explicitly deny access to a resource, which you might do in order to make sure that a user cannot
access it, even if a different policy grants access.
• Principal – The account or user who is allowed access to the actions and resources in the
statement. You specify a principal only in a bucket policy. It is the user, account, service, or other
entity who is the recipient of this permission. In a user policy, the user to which the policy is attached
is the implicit principal.
The following example bucket policy shows the preceding common policy elements. The policy
allows Dave, a user in account Account-ID, the s3:GetBucketLocation, s3:ListBucket, and
s3:GetObject Amazon S3 permissions on the examplebucket bucket.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "ExampleStatement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::Account-ID:user/Dave"
         },
         "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ]
      }
   ]
}
    Because this is a bucket policy it includes the Principal element which specifies who gets the
    permission
    For more information about the access policy elements see the following topics
    • Specifying Resources in a Policy (p 309)
    • Specifying a Principal in a Policy (p 310)
    • Specifying Permissions in a Policy (p 312)
    • Specifying Conditions in a Policy (p 315)
    The following topics provide additional policy examples
    • Bucket Policy Examples (p 334)
    • User Policy Examples (p 343)
    Specifying Resources in a Policy
The following is the common Amazon Resource Name (ARN) format to identify any resources in AWS:
arn:partition:service:region:namespace:relative-id
For your Amazon S3 resources:
• aws is a common partition name. If your resources are in the China (Beijing) region, aws-cn is the
partition name.
• s3 is the service.
• You don't specify region and namespace.
• For Amazon S3, it can be a bucket-name or a bucket-name/object-key. You can use wild
cards.
Then the ARN format for Amazon S3 resources reduces to:
arn:aws:s3:::bucket_name
arn:aws:s3:::bucket_name/key_name
The following are examples of Amazon S3 resource ARNs.
• This ARN identifies the /developers/design_info.doc object in the examplebucket bucket.
arn:aws:s3:::examplebucket/developers/design_info.doc
• You can use wildcards as part of the resource ARN. You can use wildcard characters (* and ?) within
any ARN segment (the parts separated by colons). An asterisk (*) represents any combination of
zero or more characters, and a question mark (?) represents any single character. You can use
multiple * or ? characters in each segment, but a wildcard cannot span segments.
• This ARN uses the wildcard '*' in the relative-ID part of the ARN to identify all objects in the
examplebucket bucket.
arn:aws:s3:::examplebucket/*
This ARN uses '*' to indicate all Amazon S3 resources (all buckets and objects in your account).
arn:aws:s3:::*
• This ARN uses both wildcards, '*' and '?', in the relative-ID part. It identifies all objects in buckets
such as example1bucket, example2bucket, example3bucket, and so on.
arn:aws:s3:::example?bucket/*
• You can use policy variables in Amazon S3 ARNs. At policy evaluation time, these predefined
variables are replaced by their corresponding values. Suppose you organize your bucket as a
collection of folders, one folder for each of your users. The folder name is the same as the user
name. To grant users permission to their folders, you can specify a policy variable in the resource
ARN:
arn:aws:s3:::bucket_name/developers/${aws:username}/
At run time, when the policy is evaluated, the variable ${aws:username} in the resource ARN is
substituted with the user name making the request.
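For instance, a user policy along the following lines is one way to use the variable. This is a sketch for illustration only; the bucket name, folder layout, and actions are assumptions, and the policy must use Version 2012-10-17 for policy variables to be interpreted.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AllowUserFolder",
         "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": "arn:aws:s3:::examplebucket/developers/${aws:username}/*"
      }
   ]
}
When attached to each IAM user, such a policy would limit that user to reading and writing objects under the folder that matches his or her user name.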
    For more information see the following resources
    • Resource in the IAM User Guide
    • IAM Policy Variables Overview in the IAM User Guide
    • ARNs in the AWS General Reference
    For more information about other access policy language elements see Access Policy Language
    Overview (p 308)
    Specifying a Principal in a Policy
    The Principal element specifies the user account service or other entity that is allowed or denied
    access to a resource The Principal element is relevant only in a bucket policy you don't specify it in
    a user policy because you attach user policy directly to a specific user The following are examples of
    specifying Principal For more information see Principal in the IAM User Guide
• To grant permissions to an AWS account, identify the account using the following format:
"AWS":"account-ARN"
For example:
"Principal":{"AWS":"arn:aws:iam::AccountNumber-WithoutHyphens:root"}
Amazon S3 also supports a canonical user ID, an obfuscated form of the AWS account ID. You can
specify this ID using the following format:
"CanonicalUser":"64-digit-alphanumeric-value"
For example:
"Principal":{"CanonicalUser":"64-digit-alphanumeric-value"}
To find the canonical user ID associated with your AWS account:
1. Go to http://aws.amazon.com and, from the My Account/Console drop-down menu, select
Security Credentials.
2. Sign in using appropriate account credentials.
3. Click Account Identifiers.
(You can also retrieve the canonical user ID from the command line; see the example following this list.)
• To grant permission to an IAM user within your account, you must provide an "AWS":"user-ARN"
name-value pair:
"Principal":{"AWS":"arn:aws:iam::account-number-without-hyphens:user/username"}
• To grant permission to everyone, also referred to as anonymous access, you set the wildcard, "*",
as the Principal value. For example, if you configure your bucket as a website, you want all the
objects in the bucket to be publicly accessible. The following are equivalent:
"Principal":"*"
"Principal":{"AWS":"*"}
• You can require that your users access your Amazon S3 content by using CloudFront URLs (instead
of Amazon S3 URLs) by creating a CloudFront origin access identity and then changing the
permissions either on your bucket or on the objects in your bucket. The format for specifying the
origin access identity in a Principal statement is:
"Principal":{"CanonicalUser":"Amazon S3 Canonical User ID assigned to origin
access identity"}
For more information, see Using an Origin Access Identity to Restrict Access to Your Amazon S3
Content in the Amazon CloudFront Developer Guide.
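As an alternative to the console steps above, you can retrieve the canonical user ID of the account whose credentials the AWS CLI is configured with. The following is a sketch of one way to do it; the output formatting options are optional.
aws s3api list-buckets --query "Owner.ID" --output text
The list-buckets response includes an Owner element whose ID value is the canonical user ID of the requesting account.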
    For more information about other access policy language elements see Access Policy Language
Overview (p 308).
    Specifying Permissions in a Policy
    Amazon S3 defines a set of permissions that you can specify in a policy These are keywords each of
    which maps to specific Amazon S3 operations (see Operations on Buckets and Operations on Objects
    in the Amazon Simple Storage Service API Reference)
    Topics
    • Permissions for Object Operations (p 312)
    • Permissions Related to Bucket Operations (p 313)
    • Permissions Related to Bucket Subresource Operations (p 314)
    Permissions for Object Operations
    This section provides a list of the permissions for object operations that you can specify in a policy
    Amazon S3 Permissions for Object Operations
Permissions                      Amazon S3 Operations
s3:GetObject                     GET Object, HEAD Object, GET Object Torrent
                                 When you grant this permission on a version-enabled bucket, you always get
                                 the latest version data.
s3:GetObjectVersion              GET Object, HEAD Object, GET Object Torrent
                                 To grant permission for version-specific object data, you must grant this
                                 permission. That is, when you specify a version number when making any of
                                 these requests, you need this Amazon S3 permission.
s3:PutObject                     PUT Object, POST Object, Initiate Multipart Upload, Upload Part, Complete
                                 Multipart Upload, PUT Object - Copy
s3:GetObjectAcl                  GET Object ACL
s3:GetObjectVersionAcl           GET ACL (for a specific version of the object)
s3:PutObjectAcl                  PUT Object ACL
s3:PutObjectVersionAcl           PUT Object ACL (for a specific version of the object)
s3:DeleteObject                  DELETE Object
s3:DeleteObjectVersion           DELETE Object (a specific version of the object)
s3:ListMultipartUploadParts      List Parts
s3:AbortMultipartUpload          Abort Multipart Upload
s3:GetObjectTorrent              GET Object Torrent
s3:GetObjectVersionTorrent       GET Object Torrent versioning
s3:RestoreObject                 POST Object restore
The following example bucket policy grants the s3:PutObject and the s3:PutObjectAcl
permissions to a user (Dave). If you remove the Principal element, you can attach the policy
to a user. These are object operations, and accordingly the relative-id portion of the Resource
ARN identifies objects (examplebucket/*). For more information, see Specifying Resources in a
Policy (p 309).
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/Dave"
         },
         "Action": ["s3:PutObject", "s3:PutObjectAcl"],
         "Resource": "arn:aws:s3:::examplebucket/*"
      }
   ]
}
You can use a wildcard to grant permission for all Amazon S3 actions:
"Action": "*"
    Permissions Related to Bucket Operations
    This section provides a list of the permissions related to bucket operations that you can specify in a
    policy
Amazon S3 Permissions Related to Bucket Operations
Permission Keywords               Amazon S3 Operation(s) Covered
s3:CreateBucket                   PUT Bucket
s3:DeleteBucket                   DELETE Bucket
s3:ListBucket                     GET Bucket (List Objects), HEAD Bucket
s3:ListBucketVersions             GET Bucket Object versions
s3:ListAllMyBuckets               GET Service
s3:ListBucketMultipartUploads     List Multipart Uploads
The following example user policy grants the s3:CreateBucket, s3:ListAllMyBuckets, and
s3:GetBucketLocation permissions to a user. Note that for all these permissions, you set the
relative-id part of the Resource ARN to "*". For all other bucket actions, you must specify a bucket
name. For more information, see Specifying Resources in a Policy (p 309).
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": [
            "s3:CreateBucket",
            "s3:ListAllMyBuckets",
            "s3:GetBucketLocation"
         ],
         "Resource": [
            "arn:aws:s3:::*"
         ]
      }
   ]
}
Note that if your user is going to use the console to view buckets and see the content of any
of these buckets, the console will need the user to have the s3:ListAllMyBuckets and
s3:GetBucketLocation permissions. For an example walkthrough, see An Example Walkthrough:
Using user policies to control access to your bucket (p 348).
    Permissions Related to Bucket Subresource Operations
    This section provides a list of the permissions related to bucket subresource operations that you can
    specify in a policy
    Amazon S3 Permissions Related to Bucket Subresource Operations
Permissions                           Amazon S3 Operation(s) Covered
s3:GetAccelerateConfiguration         GET Bucket accelerate
s3:PutAccelerateConfiguration         PUT Bucket accelerate
s3:GetBucketAcl                       GET Bucket acl
s3:PutBucketAcl                       PUT Bucket acl
s3:GetBucketCORS                      GET Bucket cors
s3:PutBucketCORS                      PUT Bucket cors
s3:GetBucketVersioning                GET Bucket versioning
s3:PutBucketVersioning                PUT Bucket versioning
s3:GetBucketRequestPayment            GET Bucket requestPayment
s3:PutBucketRequestPayment            PUT Bucket requestPayment
s3:GetBucketLocation                  GET Bucket location
s3:GetBucketPolicy                    GET Bucket policy
s3:DeleteBucketPolicy                 DELETE Bucket policy
s3:PutBucketPolicy                    PUT Bucket policy
s3:GetBucketNotification              GET Bucket notification
s3:PutBucketNotification              PUT Bucket notification
s3:GetBucketLogging                   GET Bucket logging
s3:PutBucketLogging                   PUT Bucket logging
s3:GetBucketTagging                   GET Bucket tagging
s3:PutBucketTagging                   PUT Bucket tagging
s3:GetBucketWebsite                   GET Bucket website
s3:PutBucketWebsite                   PUT Bucket website
s3:DeleteBucketWebsite                DELETE Bucket website
s3:GetLifecycleConfiguration          GET Bucket lifecycle
s3:PutLifecycleConfiguration          PUT Bucket lifecycle
s3:PutReplicationConfiguration        PUT Bucket replication
s3:GetReplicationConfiguration        GET Bucket replication
s3:DeleteReplicationConfiguration     DELETE Bucket replication
The following user policy grants the s3:GetBucketAcl permission on the examplebucket bucket to
user Dave.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::Account-ID:user/Dave"
         },
         "Action": [
            "s3:GetObjectVersion",
            "s3:GetBucketAcl"
         ],
         "Resource": "arn:aws:s3:::examplebucket"
      }
   ]
}
You can delete objects either by explicitly calling the DELETE Object API or by configuring its lifecycle
(see Object Lifecycle Management (p 109)) so that Amazon S3 can remove the objects when their
lifetime expires. To explicitly block users or accounts from deleting objects, you must explicitly deny
them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration
permissions. Note that, by default, users have no permissions. But as you create users, add users to
groups, and grant them permissions, it is possible for users to get certain permissions that you did not
intend to give. That is where you can use explicit deny, which supersedes all other permissions a user
might have and denies the user permissions for specific actions.
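For example, a bucket policy statement along the following lines explicitly denies these permissions to a specific user, regardless of what other policies allow. This is a sketch for illustration; the account ID, user name, and bucket name are placeholders, and the bucket ARN is included because s3:PutLifecycleConfiguration applies to the bucket rather than to objects.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "DenyObjectDeletion",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::Account-ID:user/Dave"
         },
         "Action": [
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:PutLifecycleConfiguration"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket",
            "arn:aws:s3:::examplebucket/*"
         ]
      }
   ]
}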
    Specifying Conditions in a Policy
    The access policy language allows you to specify conditions when granting permissions
    The Condition element (or Condition block) lets you specify conditions for when a policy is in
    effect In the Condition element which is optional you build expressions in which you use Boolean
    operators (equal less than etc) to match your condition against values in the request For example
    when granting a user permission to upload an object the bucket owner can require the object be
    publicly readable by adding the StringEquals condition as shown here
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": [
            "s3:PutObject"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ],
         "Condition": {
            "StringEquals": {
               "s3:x-amz-acl": [
                  "public-read"
               ]
            }
         }
      }
   ]
}
The Condition block specifies the StringEquals condition that is applied to the specified key-value
pair, "s3:x-amz-acl":["public-read"]. There is a set of predefined keys you can use in
expressing a condition. The example uses the s3:x-amz-acl condition key. This condition requires the
user to include the x-amz-acl header with the value public-read in every PUT object request.
    For more information about specifying conditions in an access policy language see Condition in the
    IAM User Guide
    The following topics describe AWSwide and Amazon S3–specific condition keys and provide example
    policies
    Topics
    • Available Condition Keys (p 316)
    • Amazon S3 Condition Keys for Object Operations (p 318)
    • Amazon S3 Condition Keys for Bucket Operations (p 328)
    Available Condition Keys
    The predefined keys available for specifying conditions in an Amazon S3 access policy can be
    classified as follows
• AWS-wide keys – AWS provides a set of common keys that are supported by all AWS services that
support policies. These keys that are common to all services are called AWS-wide keys and use the
prefix aws:. For a list of AWS-wide keys, see Available Keys for Conditions in the IAM User Guide.
There are also keys that are specific to Amazon S3, which use the prefix s3:. Amazon S3–specific
keys are discussed in the next bulleted item.
The new condition keys aws:sourceVpce and aws:sourceVpc are used in bucket policies for
VPC endpoints. For examples of using these condition keys, see Example Bucket Policies for VPC
Endpoints for Amazon S3 (p 341).
The following example bucket policy allows authenticated users permission to use
the s3:GetObject action if the request originates from a specific range of IP addresses
(192.168.143.*), unless the IP address is 192.168.143.188. In the condition block, the IpAddress
and the NotIpAddress are conditions, and each condition is provided a key-value pair for
evaluation. Both the key-value pairs in this example use the aws:SourceIp AWS-wide key.
Note
The IPAddress and NotIpAddress key values specified in the condition use CIDR
notation, as described in RFC 4632. For more information, go to http://www.rfc-editor.org/rfc/rfc4632.txt.
{
   "Version": "2012-10-17",
   "Id": "S3PolicyId1",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "IpAddress": {
               "aws:SourceIp": "192.168.143.0/24"
            },
            "NotIpAddress": {
               "aws:SourceIp": "192.168.143.188/32"
            }
         }
      }
   ]
}
• Amazon S3–specific keys – In addition to the AWS-wide keys, there are a set of condition keys that
are applicable only in the context of granting Amazon S3–specific permissions. These Amazon S3–
specific keys use the prefix s3:. For a list of Amazon S3–specific keys, see Actions and Condition
Context Keys for Amazon S3 in the IAM User Guide.
For example, the following bucket policy allows the s3:PutObject permission for two AWS
accounts if the request includes the x-amz-acl header making the object publicly readable.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AddCannedAcl",
         "Effect": "Allow",
         "Principal": {
            "AWS": ["arn:aws:iam::account1-ID:root", "arn:aws:iam::account2-ID:root"]
         },
         "Action": ["s3:PutObject"],
         "Resource": ["arn:aws:s3:::examplebucket/*"],
         "Condition": {
            "StringEquals": {
               "s3:x-amz-acl": ["public-read"]
            }
         }
      }
   ]
}
The Condition block uses the StringEquals condition, and it is provided a key-value pair,
"s3:x-amz-acl":["public-read"], for evaluation. In the key-value pair, s3:x-amz-acl is
an Amazon S3–specific key, as indicated by the prefix s3:.
Important
Not all conditions make sense for all actions. For example, it makes sense to include an
s3:LocationConstraint condition on a policy that grants the s3:CreateBucket Amazon
S3 permission, but not for the s3:GetObject permission. Amazon S3 can test for semantic
errors of this type that involve Amazon S3–specific conditions. However, if you are creating a
policy for an IAM user and you include a semantically invalid Amazon S3 condition, no error is
reported, because IAM cannot validate Amazon S3 conditions.
    The following section describes the condition keys that can be used to grant conditional permission
    for bucket and object operations In addition there are condition keys related to Amazon S3 Signature
    Version 4 authentication For more information go to Amazon S3 Signature Version 4 Authentication
    Specific Policy Keys in the Amazon Simple Storage Service API Reference
    Amazon S3 Condition Keys for Object Operations
    The following table shows which Amazon S3 conditions you can use with which Amazon S3 actions
    Example policies are provided following the table Note the following about the Amazon S3–specific
    condition keys described in the following table
• The condition key names are preceded by the prefix s3:. For example, s3:x-amz-acl.
• Each condition key maps to the same-name request header allowed by the API on which the
condition can be set. That is, these condition keys dictate the behavior of the same-name request
headers. For example:
• The condition key s3:x-amz-acl, which you can use to grant conditional permission for the
s3:PutObject permission, defines the behavior of the x-amz-acl request header that the
PUT Object API supports.
• The condition key s3:VersionId, which you can use to grant conditional permission for the
s3:GetObjectVersion permission, defines the behavior of the versionId query parameter
that you set in a GET Object request.
Permission          Applicable Condition Keys (or keywords)          Description
s3:PutObject        • s3:x-amz-acl (for canned ACL permissions)
                    • s3:x-amz-grant-permission (for explicit permissions),
                      where permission can be:
                      read, write, read-acp, write-acp, full-control
    The PUT Object operation allows
    access control list (ACL)–specific
    headers that you can use to grant
    ACLbased permissions Using these
    keys the bucket owner can set a
    condition to require specific access
    permissions when the user uploads an
    object
    For an example policy see Example 1
    Granting s3PutObject permission with
    a condition requiring the bucket owner
    to get full control (p 323)
    For more information about ACLs
    see Access Control List (ACL)
    Overview (p 364)
s3:x-amz-copy-source          To copy an object, you use the PUT
                              Object API (see PUT Object) and
                              specify the source using the x-amz-copy-source
                              header. Using this key,
    the bucket owner can restrict the copy
    source to a specific bucket a specific
    folder in the bucket or a specific object
    in a bucket
    For a policy example see Example 3
    Granting s3PutObject permission to
    copy objects with a restriction on the
    copy source (p 325)
s3:x-amz-server-side-encryption
                              When you upload an object, you
                              can use the x-amz-server-side-encryption
                              header to request
    Amazon S3 to encrypt the object
    when it is saved using an envelope
    encryption key managed either by
    AWS Key Management Service
    (KMS) or by Amazon S3 (see
    Protecting Data Using ServerSide
    Encryption (p 381))
When granting the s3:PutObject
    permission the bucket owner can add
    a condition using this key to require
    the user to specify this header in the
    request A bucket owner can grant
    such conditional permission to ensure
    that objects the user uploads are
    encrypted when they are saved
    For a policy example see Example 1
    Granting s3PutObject permission with
    a condition requiring the bucket owner
    to get full control (p 323)
s3:x-amz-server-side-encryption-aws-kms-key-id
                              When you upload an object, you
                              can use the x-amz-server-side-encryption-aws-kms-key-id
    header to request Amazon S3 to
    encrypt the object using the specified
    AWS KMS key when it is saved (see
    Protecting Data Using ServerSide
    Encryption with AWS KMS–Managed
    Keys (SSEKMS) (p 381))
When granting the s3:PutObject
    permission the bucket owner can add
    a condition using this key to restrict
    the AWS KMS key ID used for object
    encryption to a specific value
    A bucket owner can grant such
    conditional permission to ensure that
    objects the user uploads are encrypted
    with a specific key when they are
    saved
The KMS key that you specify in the policy
must use the following format:
arn:aws:kms:region:acct-id:key/key-id
s3:x-amz-metadata-directive
                              When you copy an object using the
                              PUT Object API (see PUT Object),
                              you can optionally add the x-amz-metadata-directive header to
    specify whether you want the object
    metadata copied from the source
    object or replaced with metadata
    provided in the request
Using this key, a bucket owner can
add a condition to enforce certain
behavior when objects are uploaded.
Valid values: COPY | REPLACE. The
default is COPY.
s3:x-amz-storage-class        By default, s3:PutObject stores
                              objects using the STANDARD storage
                              class, but you can use the x-amz-storage-class
                              request header to
                              specify a different storage class.
                              When granting the s3:PutObject
                              permission, you can use the
                              s3:x-amz-storage-class condition
                              key to restrict which storage class to
    use when storing uploaded objects
    For more information about storage
    classes see Storage Classes
    For an example policy see Example 5
    Restrict object uploads to objects with
    a specific storage class (p 327)
    Valid Values STANDARD
    | STANDARD_IA |
    REDUCED_REDUNDANCY The default is
    STANDARD
s3:PutObjectAcl     • s3:x-amz-acl (for canned ACL permissions)
                    • s3:x-amz-grant-permission (for explicit permissions),
                      where permission can be:
                      read, write, read-acp, write-acp, full-control
    The PUT Object acl API (see
    PUT Object acl) sets the access
    control list (ACL) on the specified
    object The operation supports ACL
    related headers When granting this
    permission the bucket owner can
    add conditions using these keys to
    require certain permissions For more
    information about ACLs see Access
    Control List (ACL) Overview (p 364)
    For example the bucket owner may
    want to retain control of the object
    regardless of who owns the object
    To accomplish this the bucket owner
    can add a condition using one of these
    keys to require the user to include
    specific permissions to the bucket
    owner
s3:GetObjectVersion     s3:VersionId     This Amazon S3 permission
    enables the user to perform a set
    of Amazon S3 API operations (see
    Amazon S3 Permissions for Object
    Operations (p 312)) For a version
    enabled bucket you can specify the
    object version to retrieve data for
    By adding a condition using this key
    the bucket owner can restrict the user
    to accessing data only for a specific
    version of the object For an example
    policy see Example 4 Granting
    access to a specific version of an
    object (p 327)
s3:GetObjectVersionAcl  s3:VersionId     For a version-enabled bucket, this
    Amazon S3 permission allows a user
    to get the ACL for a specific version of
    the object
    The bucket owner can add a condition
    using the key to restrict the user to a
    specific version of the object
s3:VersionId     For a version-enabled bucket, you
    can specify the object version in the
    PUT Object acl request to set ACL
    on a specific object version Using
    this condition the bucket owner can
    restrict the user to setting an ACL only
    on a specific version of an object
s3:PutObjectVersionAcl  • s3:x-amz-acl (for canned ACL permissions)
                        • s3:x-amz-grant-permission (for explicit permissions),
                          where permission can be:
                          read, write, read-acp, write-acp, full-control
    For a versionenabled bucket this
    Amazon S3 permission allows you to
    set an ACL on a specific version of the
    object
    For a description of these condition
keys, see the s3:PutObjectAcl
    permission in this table
s3:DeleteObjectVersion  s3:VersionId     For a version-enabled bucket, this
    Amazon S3 permission allows the
    user to delete a specific version of the
    object
    The bucket owner can add a condition
    using this key to limit the user's ability
    to delete only a specific version of the
    object
For an example of using this condition
key, see Example 4: Granting
access to a specific version of an
object (p 327). The example is
about granting the
s3:GetObjectVersion
action, but the policy shows the use of
this condition key.
Example 1: Granting s3:PutObject permission with a condition requiring the bucket owner to
get full control
Suppose Account A owns a bucket, and the account administrator wants to grant Dave, a user in
Account B, permission to upload objects. By default, objects that Dave uploads are owned by Account
B, and Account A has no permissions on these objects. Because the bucket owner is paying the bills, it
wants full permissions on the objects that Dave uploads. The Account A administrator can accomplish
this by granting the s3:PutObject permission to Dave, with a condition that the request include ACL-specific
headers that either grant full permission explicitly or use a canned ACL (see PUT Object).
• Require the x-amz-grant-full-control header in the request, with full control permission to the bucket
owner.
The following bucket policy grants the s3:PutObject permission to user Dave with a condition
using the s3:x-amz-grant-full-control condition key, which requires the request to include
the x-amz-grant-full-control header.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/Dave"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringEquals": {
               "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
            }
         }
      }
   ]
}
Note
This example is about cross-account permission. However, if Dave, who is getting the
permission, belongs to the AWS account that owns the bucket, then this conditional
permission is not necessary, because the parent account to which Dave belongs owns
objects the user uploads.
The preceding bucket policy grants conditional permission to user Dave in Account B. While this
policy is in effect, it is possible for Dave to get the same permission without any condition via some
other policy. For example, Dave can belong to a group, and you grant the group s3:PutObject
permission without any condition. To avoid such permission loopholes, you can write a stricter
access policy by adding an explicit deny. In this example, we explicitly deny user Dave upload
permission if he does not include the necessary headers in the request granting full permissions to
the bucket owner. Explicit deny always supersedes any other permission granted. The following is
the revised access policy example.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/AccountBadmin"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringEquals": {
               "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
            }
         }
      },
      {
         "Sid": "statement2",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountB-ID:user/AccountBadmin"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringNotEquals": {
               "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
            }
         }
      }
   ]
}
If you have two AWS accounts, you can test the policy using the AWS CLI. You attach the policy
and, using Dave's credentials, test the permission using the following AWS CLI put-object
command. You provide Dave's credentials by adding the --profile parameter. You grant full
control permission to the bucket owner by adding the --grant-full-control parameter. For
more information about setting up and using the AWS CLI, see Setting Up the Tools for the Example
Walkthroughs (p 281).
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body c:\HappyFace.jpg --grant-full-control id="AccountA-CanonicalUserID" --profile AccountBUserProfile
• Require the x-amz-acl header with a canned ACL granting full control permission to the bucket
owner.
To require the x-amz-acl header in the request, you can replace the key-value pair in the
Condition block and specify the s3:x-amz-acl condition key, as shown below.
"Condition": {
   "StringNotEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
   }
}
To test the permission using the AWS CLI, you specify the --acl parameter. The AWS CLI then
adds the x-amz-acl header when it sends the request.
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body c:\HappyFace.jpg --acl "bucket-owner-full-control" --profile AccountBadmin
Example 2: Granting s3:PutObject permission requiring objects stored using server-side
encryption
Suppose Account A owns a bucket, and the account administrator wants to grant Jane, a user in
Account A, permission to upload objects with a condition that Jane always request server-side
encryption so that Amazon S3 saves objects encrypted. The Account A administrator can accomplish
this using the s3:x-amz-server-side-encryption condition key, as shown. The key-value pair in the
Condition block specifies the s3:x-amz-server-side-encryption key.
"Condition": {
   "StringNotEquals": {
      "s3:x-amz-server-side-encryption": "AES256"
   }
}
When testing the permission using the AWS CLI, you will need to add the required parameter using the
--server-side-encryption parameter.
aws s3api put-object --bucket example1bucket --key HappyFace.jpg --body c:\HappyFace.jpg --server-side-encryption "AES256" --profile AccountBadmin
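One way to put this condition into a complete policy is a bucket policy Deny statement such as the following sketch (the account ID and bucket name are placeholders); it rejects any upload from Jane that does not include the x-amz-server-side-encryption header with the value AES256, no matter what other policies allow.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "DenyUnencryptedUploads",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Jane"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringNotEquals": {
               "s3:x-amz-server-side-encryption": "AES256"
            }
         }
      }
   ]
}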
Example 3: Granting s3:PutObject permission to copy objects with a restriction on the copy
source
In the PUT Object request, when you specify a source object, it is a copy operation (see PUT Object -
Copy). Accordingly, the bucket owner can grant a user permission to copy objects with restrictions on
the source, for example:
• allow copying objects only from the sourcebucket bucket.
• allow copying objects from the sourcebucket bucket, and only the objects whose key name prefix
starts with public/, for example, sourcebucket/public/*.
• allow copying only a specific object from the sourcebucket, for example, sourcebucket/example.jpg.
The following bucket policy grants user Dave s3:PutObject permission that allows him to copy only
objects with a condition that the request include the s3:x-amz-copy-source header and the header
value specify the examplebucket/public/* key name prefix.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "cross-account permission to user in your own account",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": ["s3:PutObject"],
         "Resource": "arn:aws:s3:::examplebucket/*"
      },
      {
         "Sid": "Deny your user permission to upload object if copy source is not /bucket/folder",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::examplebucket/*",
         "Condition": {
            "StringNotLike": {
               "s3:x-amz-copy-source": "examplebucket/public/*"
            }
         }
      }
   ]
}
You can test the permission using the AWS CLI copy-object command. You specify the source
by adding the --copy-source parameter; the key name prefix must match the prefix allowed
in the policy. You will need to provide user Dave's credentials using the --profile parameter.
For more information about setting up the AWS CLI, see Setting Up the Tools for the Example
Walkthroughs (p 281).
aws s3api copy-object --bucket examplebucket --key HappyFace.jpg --copy-source examplebucket/public/PublicHappyFace1.jpg --profile AccountADave
Note that the preceding policy uses the StringNotLike condition. To grant permission to copy only a
specific object, you will need to change the condition from StringNotLike to StringNotEquals and
then specify the exact object key, as shown.
"Condition": {
   "StringNotEquals": {
      "s3:x-amz-copy-source": "examplebucket/public/PublicHappyFace1.jpg"
   }
}
Example 4: Granting access to a specific version of an object
Suppose Account A owns a version-enabled bucket. The bucket has several versions of the
HappyFace.jpg object. The account administrator now wants to grant its user (Dave) permission
to get only a specific version of the object. The account administrator can accomplish this by
granting Dave the s3:GetObjectVersion permission conditionally, as shown. The key-value pair in the
Condition block specifies the s3:VersionId condition key.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": ["s3:GetObjectVersion"],
         "Resource": "arn:aws:s3:::examplebucketversionenabled/HappyFace.jpg"
      },
      {
         "Sid": "statement2",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": ["s3:GetObjectVersion"],
         "Resource": "arn:aws:s3:::examplebucketversionenabled/HappyFace.jpg",
         "Condition": {
            "StringNotEquals": {
               "s3:VersionId": "AaaHbAQitwiL_h47_44lRO2DDfLlBO5e"
            }
         }
      }
   ]
}
In this case, Dave will need to know the exact object version ID to retrieve the object.
You can test the permissions using the AWS CLI get-object command with the --version-id
parameter identifying the specific object version. The command retrieves the object and saves it to the
OutputFile.jpg file.
aws s3api get-object --bucket examplebucketversionenabled --key HappyFace.jpg OutputFile.jpg --version-id AaaHbAQitwiL_h47_44lRO2DDfLlBO5e --profile AccountADave
Example 5: Restrict object uploads to objects with a specific storage class
Suppose Account A owns a bucket, and the account administrator wants to restrict Dave, a user in
Account A, to be able to only upload objects to the bucket that will be stored with the STANDARD_IA
storage class. The Account A administrator can accomplish this by using the s3:x-amz-storage-class
condition key, as shown in the following example bucket policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::AccountA-ID:user/Dave"
         },
         "Action": "s3:PutObject",
         "Resource": [
            "arn:aws:s3:::examplebucket/*"
         ],
         "Condition": {
            "StringEquals": {
               "s3:x-amz-storage-class": [
                  "STANDARD_IA"
               ]
            }
         }
      }
   ]
}
    Amazon S3 Condition Keys for Bucket Operations
The following table shows a list of bucket operation–specific permissions you can grant in policies and,
for each of the permissions, the available keys you can use in specifying a condition.
Permission            Applicable Condition Keys            Description
s3:CreateBucket       • s3:x-amz-acl (for canned ACL permissions)
                      • s3:x-amz-grant-permission (for explicit permissions),
                        where permission can be:
                        read, write, read-acp, write-acp, full-control
                      The Create Bucket API (see PUT Bucket) supports ACL-specific
                      headers. Using these condition keys, you can require a user to set these
                      headers in the request, granting specific permissions.
s3:CreateBucket       s3:LocationConstraint
                      Using this condition key, you can restrict the user to create a bucket in a
                      specific region. For a policy example, see Example 1: Allow a user to
                      create a bucket, but only in a specific region (p 331).
s3:ListBucket     s3:prefix     Using this condition key, you can limit
    the response of the Get Bucket (List
    Objects) API (see GET Bucket (List
    Objects)) to key names with specific
    prefix
    The Get Bucket (List Objects) API
    returns list of object keys in the
    specified bucket This API supports
    the prefix header to retrieve only the
    object keys with a specific prefix This
    condition key relates to the prefix
    header
For example, the Amazon S3
console supports the folder concept
using key name prefixes. So if you
have two objects with key names
public/object1.jpg and public/object2.jpg, the console shows the
objects under the public folder. If you
organize your object keys using such
prefixes, you can grant the
s3:ListBucket
permission with the condition that
will allow the user to get a list of key
names with a specific prefix.
    For a policy example see Example 2
    Allow a user to get a list of objects in
    a bucket according to a specific prefix
    (p 332)
s3:delimiter     If you organize your object key names
    using prefixes and delimiters you
    can use this condition key to require
    the user to specify the delimiter
    parameter in the Get Bucket (List
    Objects) request In this case the
    response Amazon S3 returns is a list
    of object keys with common prefixes
    grouped together For an example of
    using prefixes and delimiters go to
    Get Bucket (List Objects)
s3:max-keys     Using this condition, you can limit the
number of keys Amazon S3 returns
in response to the Get Bucket (List
Objects) request by requiring the user
to specify the max-keys parameter.
By default, the API returns up to 1000
key names.
For a list of numeric conditions you
can use, see Numeric Condition
Operators in the IAM User Guide; a brief example follows this table.
s3:prefix     If your bucket is version-enabled, you
can use the GET Bucket Object
versions API (see GET Bucket
Object versions) to retrieve metadata
of all of the versions of objects. For
this API, the bucket owner must grant
the
s3:ListBucketVersions
permission in the policy.
Using this condition key, you can limit
the response of the API to key names
with a specific prefix by requiring the
user to specify the prefix parameter
in the request with a specific value.
For example, the Amazon S3 console
supports the folder concept of
using key name prefixes. If you
have two objects with key names
public/object1.jpg and public/object2.jpg, the console shows the
objects under the public folder. If you
organize your object keys using such
prefixes, you can grant the
s3:ListBucket
permission with the condition that will
allow a user to get a list of key names
with a specific prefix.
    For a policy example see Example 2
    Allow a user to get a list of objects in
    a bucket according to a specific prefix
    (p 332)
s3:delimiter     If you organize your object key names
    using prefixes and delimiters you
    can use this condition key to require
    the user to specify the delimiter
    parameter in the GET Bucket Object
    versions request In this case the
    response Amazon S3 returns is a list
    of object keys with common prefixes
    grouped together
s3:ListBucketVersions     s3:max-keys     Using this condition, you can limit the
    number of keys Amazon S3 returns in
    response to the GET Bucket Object
    versions request by requiring the user
to specify the max-keys parameter.
    By default the API returns up to
    1000 key names For a list of numeric
    conditions you can use see Numeric
    Condition Operators in the IAM User
    Guide
s3:PutBucketAcl     • s3:x-amz-acl (for canned ACL permissions)
                    • s3:x-amz-grant-permission (for explicit permissions),
                      where permission can be:
                      read, write, read-acp, write-acp, full-control
                    The PUT Bucket acl API (see PUT Bucket acl) supports ACL-specific
                    headers. You can use these condition keys to require a user to set these
                    headers in the request.
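As a brief illustration of the numeric operators referred to in the s3:max-keys rows of the preceding table, a Condition fragment along the following lines would limit a List Objects request to at most 100 keys by requiring the max-keys parameter to be 100 or less. This is a sketch; the limit value is arbitrary and the fragment would be embedded in a full policy statement such as the examples in this section.
"Condition": {
   "NumericLessThanEquals": {
      "s3:max-keys": "100"
   }
}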
Example 1: Allow a user to create a bucket, but only in a specific region
Suppose an AWS account administrator wants to grant its user (Dave) permission to create a bucket
in the South America (São Paulo) region only. The account administrator can attach the following user
policy granting the s3:CreateBucket permission with a condition, as shown. The key-value pair in
the Condition block specifies the s3:LocationConstraint key and the sa-east-1 region as its
value.
    Note
    In this example the bucket owner is granting permission to one of its users so either a bucket
    policy or a user policy can be used This example shows a user policy
    For a list of Amazon S3 regions go to Regions and Endpoints in the Amazon Web Services General
    Reference
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": [
            "s3:CreateBucket"
         ],
         "Resource": [
            "arn:aws:s3:::*"
         ],
         "Condition": {
            "StringLike": {
               "s3:LocationConstraint": "sa-east-1"
            }
         }
      }
   ]
}
This policy restricts the user from creating a bucket in any region other than sa-east-1. However,
it is possible some other policy will grant this user permission to create buckets in another region.
For example, if the user belongs to a group, the group may have a policy attached to it allowing all
users in the group permission to create buckets in some other region. To ensure that the user does not get
permission to create buckets in any other region, you can add an explicit deny statement in this policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": [
            "s3:CreateBucket"
         ],
         "Resource": [
            "arn:aws:s3:::*"
         ],
         "Condition": {
            "StringLike": {
               "s3:LocationConstraint": "sa-east-1"
            }
         }
      },
      {
         "Sid": "statement2",
         "Effect": "Deny",
         "Action": [
            "s3:CreateBucket"
         ],
         "Resource": [
            "arn:aws:s3:::*"
         ],
         "Condition": {
            "StringNotLike": {
               "s3:LocationConstraint": "sa-east-1"
            }
         }
      }
   ]
}
The Deny statement uses the StringNotLike condition. That is, a create bucket request will be
denied if the location constraint is not sa-east-1. The explicit deny will not allow the user to create a
bucket in any other region, no matter what other permission the user gets.
You can test the policy using the following create-bucket AWS CLI command. This example uses
the bucketconfig.txt file to specify the location constraint. Note the Windows file path. You will
need to update the bucket name and path as appropriate. You must provide user credentials using the
--profile parameter. For more information about setting up and using the AWS CLI, see Setting Up
the Tools for the Example Walkthroughs (p 281).
aws s3api create-bucket --bucket examplebucket --profile AccountADave --create-bucket-configuration file://c:/Users/someUser/bucketconfig.txt
The bucketconfig.txt file specifies the configuration as follows:
{"LocationConstraint": "sa-east-1"}
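Alternatively, the AWS CLI accepts the location constraint as shorthand syntax directly on the command line, so a configuration file is not strictly required. The following sketch should be equivalent:
aws s3api create-bucket --bucket examplebucket --profile AccountADave --create-bucket-configuration LocationConstraint=sa-east-1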
Example 2: Allow a user to get a list of objects in a bucket according to a specific prefix
A bucket owner can restrict a user to list the contents of a specific folder in the bucket. This is useful if
objects in the bucket are organized by key name prefixes; the Amazon S3 console then uses the
prefixes to show a folder hierarchy (only the console supports the concept of folders; the Amazon S3
API supports only buckets and objects).
In this example, the bucket owner and the parent account to which the user belongs are the same. So
the bucket owner can use either a bucket policy or a user policy. First, we show a user policy.
The following user policy grants the s3:ListBucket permission (see GET Bucket (List Objects)) with
a condition that requires the user to specify the prefix in the request with the value projects.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ],
         "Condition": {
            "StringEquals": {
               "s3:prefix": "projects"
            }
         }
      },
      {
         "Sid": "statement2",
         "Effect": "Deny",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ],
         "Condition": {
            "StringNotEquals": {
               "s3:prefix": "projects"
            }
         }
      }
   ]
}
The condition restricts the user to listing object keys with the projects prefix. The added explicit
deny will deny the user's request for listing keys with any other prefix, no matter what other permissions the
user might have. For example, it is possible that the user gets permission to list object keys without
any restriction, either by updates to the preceding user policy or via a bucket policy. But
because explicit deny always supersedes, the user's request to list keys other than the projects prefix
will be denied.
The preceding policy is a user policy. If you add the Principal element to the policy, identifying the
user, you now have a bucket policy, as shown.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::BucketOwner-AccountID:user/username"
         },
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ],
         "Condition": {
            "StringEquals": {
               "s3:prefix": "examplefolder"
            }
         }
      },
      {
         "Sid": "statement2",
         "Effect": "Deny",
         "Principal": {
            "AWS": "arn:aws:iam::BucketOwner-AccountID:user/username"
         },
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::examplebucket"
         ],
         "Condition": {
            "StringNotEquals": {
               "s3:prefix": "examplefolder"
            }
         }
      }
   ]
}
You can test the policy using the following list-objects AWS CLI command. In the command, you
provide user credentials using the --profile parameter. For more information about setting up and
using the AWS CLI, see Setting Up the Tools for the Example Walkthroughs (p 281).
aws s3api list-objects --bucket examplebucket --prefix examplefolder --profile AccountADave
Now if the bucket is version-enabled, to list the objects in the bucket, instead of the s3:ListBucket
permission, you must grant the s3:ListBucketVersions permission in the preceding policy. This
permission also supports the s3:prefix condition key.
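To test the version-enabled variant, you can use the list-object-versions command in a similar way. The following is a sketch that reuses the placeholder bucket, prefix, and profile names from the example above:
aws s3api list-object-versions --bucket examplebucket --prefix examplefolder --profile AccountADave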
    Bucket Policy Examples
This section presents a few examples of typical use cases for bucket policies. The policies use
bucket and examplebucket strings in the resource value. To test these policies, you need to replace
these strings with your bucket name. For information about access policy language, see Access Policy
Language Overview (p 308).
You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. You can
then use the generated document to set your bucket policy by using the Amazon S3 console, by a
number of third-party tools, or via your application.
Note
When testing permissions using the Amazon S3 console, you will need to grant additional
permissions that the console requires—the s3:ListAllMyBuckets, s3:GetBucketLocation,
and s3:ListBucket permissions. For an example walkthrough that grants permissions to
users and tests them using the console, see An Example Walkthrough: Using user policies to
control access to your bucket (p 348).
    Topics
    • Granting Permissions to Multiple Accounts with Added Conditions (p 335)
    • Granting ReadOnly Permission to an Anonymous User (p 335)
    • Restricting Access to Specific IP Addresses (p 336)
    • Restricting Access to a Specific HTTP Referrer (p 337)
    • Granting Permission to an Amazon CloudFront Origin Identity (p 338)
    • Adding a Bucket Policy to Require MFA Authentication (p 339)
    • Granting CrossAccount Permissions to Upload Objects While Ensuring the Bucket Owner Has Full
    Control (p 340)
    • Example Bucket Policies for VPC Endpoints for Amazon S3 (p 341)
Granting Permissions to Multiple Accounts with Added
Conditions
The following example policy grants the s3:PutObject and s3:PutObjectAcl permissions to
multiple AWS accounts and requires that any request for these operations include the public-read
canned ACL. For more information, see Specifying Permissions in a Policy (p 312) and Specifying
Conditions in a Policy (p 315).
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AddCannedAcl",
         "Effect": "Allow",
         "Principal": {
            "AWS": ["arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root"]
         },
         "Action": ["s3:PutObject", "s3:PutObjectAcl"],
         "Resource": ["arn:aws:s3:::examplebucket/*"],
         "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}}
      }
   ]
}
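To verify the condition from one of the granted accounts, you can upload a test object with the required canned ACL using the AWS CLI. The following is a sketch; the key, local file, and profile names are placeholders for illustration.
aws s3api put-object --bucket examplebucket --key test.jpg --body test.jpg --acl public-read --profile Account1Admin
A request from the same principal that omits --acl public-read (or uses a different canned ACL) would fail the StringEquals condition and be denied.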
Granting Read-Only Permission to an Anonymous User
The following example policy grants the s3:GetObject permission to any public anonymous user.
(For a list of permissions and the operations they allow, see Specifying Permissions in a Policy (p 312).)
This permission allows anyone to read the object data, which is useful when you configure your
bucket as a website and want everyone to be able to read objects in the bucket.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AddPerm",
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:GetObject"],
         "Resource": ["arn:aws:s3:::examplebucket/*"]
      }
   ]
}
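Because the policy grants anonymous read access, any HTTP client can then fetch an object without AWS credentials. The following is a sketch; the object key is a placeholder, and the URL assumes the default virtual-hosted–style endpoint for the bucket.
curl https://examplebucket.s3.amazonaws.com/photo.jpg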
Restricting Access to Specific IP Addresses
The following example grants permissions to any user to perform any Amazon S3 operations on
objects in the specified bucket. However, the request must originate from the range of IP addresses
specified in the condition.
The condition in this statement identifies the 54.240.143.* range of allowed Internet Protocol version 4
(IPv4) IP addresses, with one exception: 54.240.143.188.
The Condition block uses the IpAddress and NotIpAddress conditions and the aws:SourceIp
condition key, which is an AWS-wide condition key. For more information about these condition keys,
see Specifying Conditions in a Policy (p 315). The aws:SourceIp IPv4 values use the standard
CIDR notation. For more information, see IP Address Condition Operators in the IAM User Guide.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}
Allowing IPv4 and IPv6 Addresses
When you start using IPv6 addresses, we recommend that you update all of your organization's policies with your IPv6 address ranges in addition to your existing IPv4 ranges to ensure that the policies continue to work as you make the transition to IPv6.
The following example bucket policy shows how to mix IPv4 and IPv6 address ranges to cover all of your organization's valid IP addresses. The example policy would allow access to the example IP addresses 54.240.143.1 and 2001:DB8:1234:5678::1 and would deny access to the addresses 54.240.143.129 and 2001:DB8:1234:5678:ABCD::1.
The IPv6 values for aws:SourceIp must be in standard CIDR format. For IPv6, we support using :: to represent a range of 0s (for example, 2001:DB8:1234:5678::/64). For more information, see IP Address Condition Operators in the IAM User Guide.
{
  "Id":"PolicyId2",
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AllowIPmix",
      "Effect":"Allow",
      "Principal":"*",
      "Action":"s3:*",
      "Resource":"arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "54.240.143.0/24",
            "2001:DB8:1234:5678::/64"
          ]
        },
        "NotIpAddress": {
          "aws:SourceIp": [
            "54.240.143.128/30",
            "2001:DB8:1234:5678:ABCD::/80"
          ]
        }
      }
    }
  ]
}
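The following sketch, using only the Python standard library, mirrors the evaluation that the IpAddress and NotIpAddress conditions above perform: a request is allowed when its source address falls inside an allowed range and outside every excluded range.

    import ipaddress

    allowed = [ipaddress.ip_network("54.240.143.0/24"),
               ipaddress.ip_network("2001:DB8:1234:5678::/64")]
    excluded = [ipaddress.ip_network("54.240.143.128/30"),
                ipaddress.ip_network("2001:DB8:1234:5678:ABCD::/80")]

    def is_allowed(source_ip):
        ip = ipaddress.ip_address(source_ip)
        in_allowed = any(ip in net for net in allowed if net.version == ip.version)
        in_excluded = any(ip in net for net in excluded if net.version == ip.version)
        return in_allowed and not in_excluded

    print(is_allowed("54.240.143.1"))           # True  (inside the allowed /24)
    print(is_allowed("54.240.143.129"))         # False (inside the excluded /30)
    print(is_allowed("2001:DB8:1234:5678::1"))  # True  (inside the allowed /64)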
Restricting Access to a Specific HTTP Referrer
Suppose you have a website with the domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket. By default, all the S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
  "Version":"2012-10-17",
  "Id":"http referer policy example",
  "Statement":[
    {
      "Sid":"Allow get requests originating from www.example.com and example.com.",
      "Effect":"Allow",
      "Principal":"*",
      "Action":"s3:GetObject",
      "Resource":"arn:aws:s3:::examplebucket/*",
      "Condition":{
        "StringLike":{"aws:Referer":["http://www.example.com/*","http://example.com/*"]}
      }
    }
  ]
}
Make sure the browsers that you use include the HTTP referer header in the request.
You can further secure access to objects in the examplebucket bucket by adding an explicit deny to the bucket policy, as shown in the following example. An explicit deny supersedes any permission you might grant to objects in the examplebucket bucket using other means, such as ACLs or user policies.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
      }
    }
  ]
}
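The following sketch (assuming the third-party requests package and a publicly reachable object URL) shows how the Referer header travels with a GET request; with the policy above in place, only requests whose Referer matches the listed patterns succeed.

    import requests

    url = "https://examplebucket.s3.amazonaws.com/photos/photo1.jpg"  # hypothetical object URL

    # Request sent from one of the allowed webpages carries a matching Referer header.
    allowed = requests.get(url, headers={"Referer": "http://www.example.com/gallery.html"})
    # Request without a Referer header is caught by the explicit Deny statement.
    blocked = requests.get(url)

    print(allowed.status_code, blocked.status_code)  # expect 200 and 403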
Granting Permission to an Amazon CloudFront Origin Identity
The following example bucket policy grants a CloudFront origin access identity permission to get (list) all objects in your Amazon S3 bucket. The CloudFront origin access identity is used to enable the CloudFront private content feature. The policy uses the CanonicalUser prefix, instead of AWS, to specify a canonical user ID. To learn more about CloudFront support for serving private content, go to the Serving Private Content topic in the Amazon CloudFront Developer Guide. You must specify the canonical user ID for your CloudFront distribution's origin access identity. For instructions about finding the canonical user ID, see Specifying a Principal in a Policy (p. 310).
{
  "Version":"2012-10-17",
  "Id":"PolicyForCloudFrontPrivateContent",
  "Statement":[
    {
      "Sid":"Grant a CloudFront Origin Identity access to support private content",
      "Effect":"Allow",
      "Principal":{"CanonicalUser":"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"},
      "Action":"s3:GetObject",
      "Resource":"arn:aws:s3:::examplebucket/*"
    }
  ]
}
Adding a Bucket Policy to Require MFA Authentication
Amazon S3 supports MFA-protected API access, a feature that can enforce multi-factor authentication for access to your Amazon S3 resources. Multi-factor authentication provides an extra level of security you can apply to your AWS environment. It is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code. For more information, go to AWS Multi-Factor Authentication. You can require MFA authentication for any requests to access your Amazon S3 resources.
You can enforce the MFA authentication requirement using the aws:MultiFactorAuthAge key in a bucket policy. IAM users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service (STS). You provide the MFA code at the time of the STS request.
When Amazon S3 receives a request with MFA authentication, the aws:MultiFactorAuthAge key provides a numeric value indicating how long ago (in seconds) the temporary credential was created. If the temporary credential provided in the request was not created using an MFA device, this key value is null (absent). In a bucket policy, you can add a condition to check this value, as shown in the following example bucket policy. The policy denies any Amazon S3 operation on the /taxdocuments folder in the examplebucket bucket if the request is not MFA authenticated. To learn more about MFA authentication, see Using Multi-Factor Authentication (MFA) in AWS in the IAM User Guide.
{
  "Version": "2012-10-17",
  "Id": "123",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": { "Null": { "aws:MultiFactorAuthAge": true }}
    }
  ]
}
The Null condition in the Condition block evaluates to true if the aws:MultiFactorAuthAge key value is null, indicating that the temporary security credentials in the request were created without the MFA key.
The following bucket policy is an extension of the preceding bucket policy. It includes two policy statements. One statement allows the s3:GetObject permission on a bucket (examplebucket) to everyone, and another statement further restricts access to the examplebucket/taxdocuments folder in the bucket by requiring MFA authentication.
{
  "Version": "2012-10-17",
  "Id": "123",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": { "Null": { "aws:MultiFactorAuthAge": true } }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
You can optionally use a numeric condition to limit the duration for which the aws:MultiFactorAuthAge key is valid, independent of the lifetime of the temporary security credential used in authenticating the request. For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created. The policy denies any operation if the aws:MultiFactorAuthAge key value indicates that the temporary session was created more than an hour ago (3,600 seconds).
{
  "Version": "2012-10-17",
  "Id": "123",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": {"Null": {"aws:MultiFactorAuthAge": true }}
    },
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }}
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
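The following sketch shows one way a request can carry the MFA context that these conditions check: temporary credentials are obtained from AWS STS with an MFA device serial number and code, and those credentials are then used for the S3 call. The device ARN, token code, and object key are placeholders.

    import boto3

    sts = boto3.client("sts")
    creds = sts.get_session_token(
        DurationSeconds=3600,
        SerialNumber="arn:aws:iam::123456789012:mfa/some-user",  # hypothetical MFA device ARN
        TokenCode="123456",                                      # code read from the device
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Because the session was created with MFA, aws:MultiFactorAuthAge is set on this request.
    s3.get_object(Bucket="examplebucket", Key="taxdocuments/2011/return.pdf")  # hypothetical key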
Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control
You can allow another AWS account to upload objects to your bucket. However, you may decide that, as a bucket owner, you must have full control of the objects uploaded to your bucket. The following policy enforces that a specific AWS account (111111111111) be denied the ability to upload objects unless that account grants full-control access to the bucket owner identified by the email address (xyz@amazon.com). The StringNotEquals condition in the policy specifies the s3:x-amz-grant-full-control condition key to express the requirement (see Specifying Conditions in a Policy (p. 315)).
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"111",
      "Effect":"Allow",
      "Principal":{"AWS":"111111111111"},
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::examplebucket/*"
    },
    {
      "Sid":"112",
      "Effect":"Deny",
      "Principal":{"AWS":"111111111111"},
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotEquals": {"s3:x-amz-grant-full-control":["emailAddress=xyz@amazon.com"]}
      }
    }
  ]
}
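The following sketch shows an upload from the other account that satisfies this policy by granting full control to the bucket owner. The object key is a placeholder, and the grant string is assumed to match the value checked by the policy's condition key.

    import boto3

    s3 = boto3.client("s3")  # client configured with the other account's credentials

    s3.put_object(
        Bucket="examplebucket",
        Key="uploads/report.txt",                        # hypothetical key
        Body=b"example data",
        GrantFullControl="emailAddress=xyz@amazon.com",  # value the StringNotEquals condition checks
    )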
    Example Bucket Policies for VPC Endpoints for Amazon S3
    You can use Amazon S3 bucket policies to control access to buckets from specific Amazon Virtual
    Private Cloud (Amazon VPC) endpoints or specific VPCs This section contains example bucket
    policies that can be used to control S3 bucket access from VPC endpoints To learn how to set up VPC
    endpoints go to the VPC Endpoints topic in the Amazon VPC User Guide
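As a sketch of the setup step (details are in the Amazon VPC User Guide), the following Boto3 call creates a VPC endpoint for Amazon S3; the VPC ID, Region, and route table ID are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.create_vpc_endpoint(
        VpcId="vpc-111bbb22",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )
    # The returned endpoint ID (for example, vpce-1a2b3c4d) is what the
    # aws:sourceVpce condition key in the example policies below matches against.
    print(resp["VpcEndpoint"]["VpcEndpointId"])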
    Amazon VPC enables you to launch Amazon Web Services (AWS) resources into a virtual network
    that you define A VPC endpoint enables you to create a private connection between your VPC and
    another AWS service without requiring access over the Internet through a VPN connection through a
    NAT instance or through AWS Direct Connect
    A VPC endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to Amazon
    S3 The VPC endpoint routes requests to Amazon S3 and routes responses back to the VPC VPC
    endpoints only change how requests are routed Amazon S3 public endpoints and DNS names will
    continue to work with VPC endpoints For important information about using Amazon VPC endpoints
    with Amazon S3 go to the Endpoints for Amazon S3 topic in the Amazon VPC User Guide
VPC endpoints for Amazon S3 provide two ways to control access to your Amazon S3 data:
    • You can control what requests users or groups are allowed through a specific VPC endpoint For
    information on this type of access control go to the VPC Endpoints Controlling Access to Services
    topic in the Amazon VPC User Guide
    • You can control which VPCs or VPC endpoints have access to your S3 buckets by using S3 bucket
    policies For examples of this type of bucket policy access control see the following topics on
    restricting access
    Topics
    • Restricting Access to a Specific VPC Endpoint (p 342)
    • Restricting Access to a Specific VPC (p 342)
    • Related Resources (p 343)
Restricting Access to a Specific VPC Endpoint
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint. The aws:sourceVpce condition key does not require an ARN for the VPC endpoint resource, only the VPC endpoint ID. For more information about using conditions in a policy, see Specifying Conditions in a Policy (p. 315).
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::examplebucket",
                   "arn:aws:s3:::examplebucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      },
      "Principal": "*"
    }
  ]
}
Restricting Access to a Specific VPC
You can create a bucket policy that restricts access to a specific VPC by using the aws:sourceVpc condition key. This is useful if you have multiple VPC endpoints configured in the same VPC and you want to manage access to your S3 buckets for all of your endpoints. The following is an example of a policy that allows VPC vpc-111bbb22 to access examplebucket. The aws:sourceVpc condition key does not require an ARN for the VPC resource, only the VPC ID.
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909153",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::examplebucket",
                   "arn:aws:s3:::examplebucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-111bbb22"
        }
      },
      "Principal": "*"
    }
  ]
}
    Related Resources
    • VPC Endpoints in the Amazon VPC User Guide
    • Bucket Policy Examples (p 334)
    User Policy Examples
    This section shows several IAM user policies for controlling user access to Amazon S3 For information
    about access policy language see Access Policy Language Overview (p 308)
    The following example policies will work if you test them programmatically however in order to use
    them with the Amazon S3 console you will need to grant additional permissions that are required by
    the console For information about using policies such as these with the Amazon S3 console see An
    Example Walkthrough Using user policies to control access to your bucket (p 348)
    Topics
• Example: Allow an IAM user access to one of your buckets (p. 343)
• Example: Allow each IAM user access to a folder in a bucket (p. 344)
• Example: Allow a group to have a shared folder in Amazon S3 (p. 347)
• Example: Allow all your users to read objects in a portion of the corporate bucket (p. 347)
• Example: Allow a partner to drop files into a specific portion of the corporate bucket (p. 347)
• An Example Walkthrough: Using user policies to control access to your bucket (p. 348)
Example: Allow an IAM user access to one of your buckets
In this example, you want to grant an IAM user in your AWS account access to one of your buckets, examplebucket, and allow the user to add, update, and delete objects.
In addition to granting the s3:PutObject, s3:GetObject, and s3:DeleteObject permissions to the user, the policy also grants the s3:ListAllMyBuckets, s3:GetBucketLocation, and s3:ListBucket permissions. These are the additional permissions required by the console. For an example walkthrough that grants permissions to users and tests them using the console, see An Example Walkthrough: Using user policies to control access to your bucket (p. 348).
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListAllMyBuckets"
      ],
      "Resource":"arn:aws:s3:::*"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource":"arn:aws:s3:::examplebucket"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource":"arn:aws:s3:::examplebucket/*"
    }
  ]
}
Example: Allow each IAM user access to a folder in a bucket
In this example, you want two IAM users, Alice and Bob, to have access to your bucket, examplebucket, so they can add, update, and delete objects. However, you want to restrict each user's access to a single folder in the bucket. You might create folders with names that match the user names:
examplebucket/Alice/
examplebucket/Bob/
To grant each user access only to his or her folder, you can write a policy for each user and attach it individually. For example, you can attach the following policy to user Alice to allow her specific Amazon S3 permissions on the examplebucket/Alice/ folder.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource":"arn:aws:s3:::examplebucket/Alice/*"
    }
  ]
}
You then attach a similar policy to user Bob, identifying folder Bob in the Resource value.
Instead of attaching policies to individual users, though, you can write a single policy that uses a policy variable and attach the policy to a group. You will first need to create a group and add both Alice and Bob to the group. The following example policy allows a set of Amazon S3 permissions in the examplebucket/${aws:username} folder. When the policy is evaluated, the policy variable ${aws:username} is replaced by the requester's user name. For example, if Alice sends a request to put an object, the operation is allowed only if Alice is uploading the object to the examplebucket/Alice folder.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource":"arn:aws:s3:::examplebucket/${aws:username}/*"
    }
  ]
}
Note
When using policy variables, you must explicitly specify version 2012-10-17 in the policy. The default version of the access policy language, 2008-10-17, does not support policy variables.
If you want to test the preceding policy on the Amazon S3 console, the console requires additional Amazon S3 permissions, as shown in the following policy. For information about how the console uses these permissions, see An Example Walkthrough: Using user policies to control access to your bucket (p. 348).
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListInTheConsole",
      "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::*" ]
    },
    {
      "Sid": "AllowRootLevelListingOfTheBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::examplebucket"],
      "Condition":{
        "StringEquals":{
          "s3:prefix":[""], "s3:delimiter":["/"]
        }
      }
    },
    {
      "Sid": "AllowListBucketOfASpecificUserPrefix",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::examplebucket"],
      "Condition":{ "StringLike":{"s3:prefix":["${aws:username}/*"] }
      }
    },
    {
      "Sid": "AllowUserSpecificActionsOnlyInTheSpecificUserPrefix",
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource":"arn:aws:s3:::examplebucket/${aws:username}/*"
    }
  ]
}
Note
In the 2012-10-17 version of the policy, policy variables start with $. This change in syntax can potentially create a conflict if your object key includes a $. For example, to include an object key my$file in a policy, you specify the $ character with ${$}: my${$}file.
Although IAM user names are friendly, human-readable identifiers, they are not required to be globally unique. For example, if user Bob leaves the organization and another Bob joins, then new Bob could access old Bob's information. Instead of using user names, you could create folders based on user IDs. Each user ID is unique. In this case, you will need to modify the preceding policy to use the ${aws:userid} policy variable. For more information about user identifiers, see IAM Identifiers in the IAM User Guide.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource":"arn:aws:s3:::my_corporate_bucket/home/${aws:userid}/*"
    }
  ]
}
Allow non-IAM users (mobile app users) access to folders in a bucket
Suppose you want to develop a mobile app, a game that stores users' data in an S3 bucket. For each app user, you want to create a folder in your bucket. You also want to limit each user's access to his or her own folder. But you cannot create folders before someone downloads your app and starts playing the game, because you don't have a user ID.
In this case, you can require users to sign in to your app by using public identity providers such as Login with Amazon, Facebook, or Google. After users have signed in to your app through one of these providers, they have a user ID that you can use to create user-specific folders at run time.
    You can then use web identity federation in AWS Security Token Service to integrate information from
    the identity provider with your app and to get temporary security credentials for each user You can
    then create IAM policies that allow the app to access your bucket and perform such operations as
creating user-specific folders and uploading data. For more information about web identity federation, see About Web Identity Federation in the IAM User Guide.
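The following sketch shows the credential exchange described above with Boto3: the app trades the identity provider's token for temporary AWS credentials and then uses them with Amazon S3. The role ARN and token value are placeholders supplied by your app after the user signs in.

    import boto3

    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GameAppRole",   # hypothetical role
        RoleSessionName="app-user-session",
        WebIdentityToken="<token from Login with Amazon, Facebook, or Google>",
    )
    creds = resp["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # The app can now create the user's folder and upload that user's data.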
Example: Allow a group to have a shared folder in Amazon S3
Attaching the following policy to the group grants everybody in the group access to the following folder in Amazon S3: my_corporate_bucket/share/marketing. Group members are allowed to access only the specific Amazon S3 permissions shown in the policy, and only for objects in the specified folder.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource":"arn:aws:s3:::my_corporate_bucket/share/marketing/*"
    }
  ]
}
Example: Allow all your users to read objects in a portion of the corporate bucket
In this example, we create a group called AllUsers, which contains all the IAM users that are owned by the AWS account. We then attach a policy that gives the group access to GetObject and GetObjectVersion, but only for objects in the my_corporate_bucket/readonly folder.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource":"arn:aws:s3:::my_corporate_bucket/readonly/*"
    }
  ]
}
Example: Allow a partner to drop files into a specific portion of the corporate bucket
In this example, we create a group called WidgetCo that represents a partner company. We create an IAM user for the specific person or application at the partner company that needs access, and then we put the user in the group.
We then attach a policy that gives the group PutObject access to the following folder in the corporate bucket: my_corporate_bucket/uploads/widgetco.
We want to prevent the WidgetCo group from doing anything else with the bucket, so we add a statement that explicitly denies permission to any Amazon S3 permissions except PutObject on any Amazon S3 resource in the AWS account. This step is necessary only if there's a broad policy in use elsewhere in your AWS account that gives users wide access to Amazon S3 resources.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::my_corporate_bucket/uploads/widgetco/*"
    },
    {
      "Effect":"Deny",
      "NotAction":"s3:PutObject",
      "Resource":"arn:aws:s3:::my_corporate_bucket/uploads/widgetco/*"
    },
    {
      "Effect":"Deny",
      "Action":"s3:*",
      "NotResource":"arn:aws:s3:::my_corporate_bucket/uploads/widgetco/*"
    }
  ]
}
An Example Walkthrough: Using user policies to control access to your bucket
This walkthrough explains how user permissions work with Amazon S3. We will create a bucket with folders, and then we'll create AWS Identity and Access Management users in your AWS account and grant those users incremental permissions on your Amazon S3 bucket and the folders in it.
    Topics
• Background: Basics of Buckets and Folders (p. 349)
• Walkthrough Example (p. 350)
• Step 0: Preparing for the Walkthrough (p. 350)
• Step 1: Create a Bucket (p. 351)
• Step 2: Create IAM Users and a Group (p. 352)
• Step 3: Verify That IAM Users Have No Permissions (p. 352)
• Step 4: Grant Group-Level Permissions (p. 352)
• Step 5: Grant IAM User Alice Specific Permissions (p. 357)
• Step 6: Grant IAM User Bob Specific Permissions (p. 361)
• Step 7: Secure the Private Folder (p. 361)
• Cleanup (p. 363)
• Related Resources (p. 363)
Background: Basics of Buckets and Folders
The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can emulate a folder hierarchy. Tools such as the Amazon S3 console can present a view of these logical folders and subfolders in your bucket, as shown here.
The console shows that a bucket named companybucket has three folders, Private, Development, and Finance, and an object, s3-dg.pdf. The console uses the object names (keys) to create a logical hierarchy with folders and subfolders. Consider the following examples:
• When you create the Development folder, the console creates an object with the key Development/. Note the trailing '/' delimiter.
• When you upload an object named Projects1.xls in the Development folder, the console uploads the object and gives it the key Development/Projects1.xls.
In the key, Development is the prefix and '/' is the delimiter. The Amazon S3 API supports prefixes and delimiters in its operations. For example, you can get a list of all objects from a bucket with a specific prefix and delimiter. In the console, when you double-click the Development folder, the console lists the objects in that folder. In the following example, the Development folder contains one object.
When the console lists the Development folder in the companybucket bucket, it sends a request to Amazon S3 in which it specifies a prefix of Development/ and a delimiter of '/' in the request. The console's response looks just like a folder list in your computer's file system. The preceding example shows that the bucket companybucket has an object with the key Development/Projects1.xls.
The console is using object keys to infer a logical hierarchy; Amazon S3 has no physical hierarchy, only buckets that contain objects in a flat file structure. When you create objects by using the Amazon S3 API, you can use object keys that imply a logical hierarchy.
When you create a logical hierarchy of objects, you can manage access to individual folders, as we will do in this walkthrough.
Before going into the walkthrough, you need to familiarize yourself with one more concept: the root-level bucket content. Suppose your companybucket bucket has the following objects:
Private/privDoc1.txt
Private/privDoc2.zip
Development/project1.xls
Development/project2.xls
Finance/Tax2011/document1.pdf
Finance/Tax2011/document2.pdf
s3-dg.pdf
These object keys create a logical hierarchy with Private, Development, and Finance as root-level folders and s3-dg.pdf as a root-level object. When you click the bucket name in the Amazon S3 console, the root-level items appear as shown. The console shows the top-level prefixes (Private/, Development/, and Finance/) as root-level folders. The object key s3-dg.pdf has no prefix, and so it appears as a root-level item.
    Walkthrough Example
    The example for this walkthrough is as follows
    • You create a bucket and then add three folders (Private Development and Finance) to it
    • You have two users Alice and Bob You want Alice to access only the Development folder and Bob
    to access only the Finance folder and you want to keep the Private folder content private In the
    walkthrough you manage access by creating AWS Identity and Access Management (IAM) users
    (we will use the same user names Alice and Bob) and grant them the necessary permissions
    IAM also supports creating user groups and granting grouplevel permissions that apply to all users
    in the group This helps you better manage permissions For this exercise both Alice and Bob will
    need some common permissions So you will also create a group named Consultants and then add
    both Alice and Bob to the group You will first grant permissions by attaching a group policy to the
    group Then you will add userspecific permissions by attaching policies to specific users
    Note
    The walkthrough uses companybucket as the bucket name Alice and Bob as the IAM users
    and Consultants as the group name Because Amazon S3 requires that bucket names be
    globally unique you will need to replace the bucket name with a name that you create
Step 0: Preparing for the Walkthrough
In this example, you will use your AWS account credentials to create IAM users. Initially, these users have no permissions. You will incrementally grant these users permissions to perform specific Amazon S3 actions. To test these permissions, you will sign in to the console with each user's credentials.
As you incrementally grant permissions as an AWS account owner and test permissions as an IAM user, you need to sign in and out each time using different credentials. You can do this testing with one browser, but the process will go faster if you can use two different browsers: use one browser to connect to the AWS Management Console with your AWS account credentials and another to connect with the IAM user credentials.
To sign in to the AWS Management Console with your AWS account credentials, go to https://console.aws.amazon.com/. An IAM user cannot sign in by using the same link. An IAM user must use an IAM-enabled sign-in page. As the account owner, you can provide this link to your users.
To provide a sign-in link for IAM users
1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
2. In the Navigation pane, click IAM Dashboard.
3. Note the URL under IAM users sign-in link. You will give this link to IAM users to sign in to the console with their IAM user name and password.
For more information about IAM, go to The AWS Management Console Sign-in Page in the IAM User Guide.
Step 1: Create a Bucket
In this step, you will sign in to the Amazon S3 console with your AWS account credentials, create a bucket, add folders (Development, Finance, Private) to the bucket, and upload one or two sample documents in each folder.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Create a bucket.
For step-by-step instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
3. Upload one document to the bucket.
This exercise assumes you have the s3-dg.pdf document at the root level of this bucket. If you upload a different document, substitute its file name for s3-dg.pdf.
4. Add three folders named Private, Finance, and Development to the bucket.
For step-by-step instructions to create a folder, go to Creating a Folder in the Amazon Simple Storage Service Console User Guide.
5. Upload one or two documents to each folder.
For this exercise, assume you have uploaded a couple of documents in each folder, resulting in the bucket having objects with the following keys:
Private/privDoc1.txt
Private/privDoc2.zip
Development/project1.xls
Development/project2.xls
Finance/Tax2011/document1.pdf
Finance/Tax2011/document2.pdf
s3-dg.pdf
For step-by-step instructions, go to Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.
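If you prefer to script this setup, the following sketch creates the same layout with Boto3; console folders are simply zero-byte objects whose keys end with the '/' delimiter. The bucket name and file contents are placeholders, and buckets created outside US East (N. Virginia) also need a location constraint.

    import boto3

    s3 = boto3.client("s3")
    bucket = "companybucket"  # replace with your own globally unique bucket name

    s3.create_bucket(Bucket=bucket)  # add CreateBucketConfiguration for other Regions
    for folder in ("Private/", "Development/", "Finance/"):
        s3.put_object(Bucket=bucket, Key=folder)  # zero-byte "folder" marker object
    s3.put_object(Bucket=bucket, Key="s3-dg.pdf", Body=b"placeholder")
    s3.put_object(Bucket=bucket, Key="Development/project1.xls", Body=b"placeholder")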
Step 2: Create IAM Users and a Group
Now use the IAM console to add two IAM users, Alice and Bob, to your AWS account. Also create an administrative group named Consultants, and then add both users to the group.
Caution
When you add users and a group, do not attach any policies that grant permissions to these users. At first, these users will not have any permissions. In the following sections, you will incrementally grant permissions. You must first ensure that you have assigned passwords to these IAM users. You will use these user credentials to test Amazon S3 actions and verify that the permissions work as expected.
For step-by-step instructions on creating a new IAM user, see Creating an IAM User in Your AWS Account in the IAM User Guide.
For step-by-step instructions on creating an administrative group, see the Creating Your First IAM User and Administrators Group section in the IAM User Guide.
Step 3: Verify That IAM Users Have No Permissions
If you are using two browsers, you can now use the second browser to sign in to the console using one of the IAM user credentials.
1. Using the IAM user sign-in link (see To provide a sign-in link for IAM users (p. 351)), sign in to the AWS console using either of the IAM user credentials.
2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Verify the following console message telling you that you have no permissions.
Now let's begin granting incremental permissions to the users. First, you will attach a group policy that grants permissions that both users must have.
Step 4: Grant Group-Level Permissions
We want all our users to be able to do the following:
• List all buckets owned by the parent account.
To do so, Bob and Alice must have permission for the s3:ListAllMyBuckets action.
• List root-level items, folders, and objects in the companybucket bucket.
To do so, Bob and Alice must have permission for the s3:ListBucket action on the companybucket bucket.
Now we'll create a policy that grants these permissions, and then we'll attach it to the Consultants group.
Step 4.1: Grant Permission to List All Buckets
In this step, you'll create a managed policy that grants the users minimum permissions to enable them to list all buckets owned by the parent account, and then you'll attach the policy to the Consultants group. When you attach the managed policy to a user or a group, you allow the user or group permission to obtain a list of buckets owned by the parent AWS account.
1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
Note
Since you'll be granting user permissions, sign in with your AWS account credentials, not as an IAM user.
2. Create the managed policy.
a. In the navigation pane on the left, click Policies, and then click Create Policy.
b. Next to Create Your Own Policy, click Select.
c. Enter AllowGroupToSeeBucketListInTheConsole in the Policy Name field.
d. Copy the following access policy and paste it into the Policy Document field.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListInTheConsole",
      "Action": ["s3:ListAllMyBuckets"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
A policy is a JSON document. In the document, a Statement is an array of objects, each describing a permission using a collection of name-value pairs. The preceding policy describes one specific permission. The Action specifies the type of access. In the policy, s3:ListAllMyBuckets is a predefined Amazon S3 action. This action covers the Amazon S3 GET Service operation, which returns a list of all buckets owned by the authenticated sender. The Effect element value determines whether the specific permission is allowed or denied.
3. Attach the AllowGroupToSeeBucketListInTheConsole managed policy that you created to the Consultants group.
For step-by-step instructions for attaching a managed policy, see Working with Managed Policies Using the AWS Management Console in the IAM User Guide.
You attach policy documents to IAM users and groups in the IAM console. Because we want both our users to be able to list the buckets, we attach the policy to the group.
4. Test the permission.
a. Using the IAM user sign-in link (see To provide a sign-in link for IAM users (p. 351)), sign in to the AWS console using any one of the IAM user credentials.
b. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
The console should now list all the buckets but not the objects in any of the buckets.
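The same test can be run outside the console. With only s3:ListAllMyBuckets granted, listing buckets succeeds, while listing the objects in a bucket is denied. A minimal sketch with Boto3, assuming the client is configured with Alice's or Bob's credentials:

    import boto3

    s3 = boto3.client("s3")  # configured with the IAM user's credentials

    print([b["Name"] for b in s3.list_buckets()["Buckets"]])  # allowed by the group policy
    # s3.list_objects_v2(Bucket="companybucket")              # would raise an AccessDenied error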
Step 4.2: Enable Users to List Root-Level Content of a Bucket
Now let's allow all users to list the root-level companybucket bucket items. When a user clicks the company bucket in the Amazon S3 console, he or she will be able to see the root-level items in the bucket.
Remember, we are using companybucket for illustration. You must use the name of the bucket that you created for this exercise.
To understand what request the console sends to Amazon S3 when you click a bucket name, the response Amazon S3 returns, and how the console interprets the response, it is necessary to take a closer look.
When you click a bucket name, the console sends the GET Bucket (List Objects) request to Amazon S3. This request includes the following parameters:
• The prefix parameter with an empty string as its value.
• The delimiter parameter with '/' as its value.
The following is an example request.
GET ?prefix=&delimiter=/ HTTP/1.1
Host: companybucket.s3.amazonaws.com
Date: Wed, 01 Aug 2012 12:00:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE
Amazon S3 returns a response that includes the following <ListBucketResult/> element:
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>s3-dg.pdf</Key>
    ...
  </Contents>
  <CommonPrefixes>
    <Prefix>Development/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Finance/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Private/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
The key s3-dg.pdf does not contain the '/' delimiter, and Amazon S3 returns the key in the <Contents/> element. However, all other keys in our example bucket contain the '/' delimiter. Amazon S3 groups these keys and returns a <CommonPrefixes/> element for each of the distinct prefix values Development/, Finance/, and Private/, which is a substring from the beginning of these keys to the first occurrence of the specified '/' delimiter.
The console interprets this result and displays the root-level items as three folders and one object key.
Now, if Bob or Alice double-clicks the Development folder, the console sends the GET Bucket (List Objects) request to Amazon S3 with the prefix and the delimiter parameters set to the following values:
• The prefix parameter with the value Development/.
• The delimiter parameter with the '/' value.
In response, Amazon S3 returns the object keys that start with the specified prefix:
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix>Development/</Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>Development/Project1.xls</Key>
    ...
  </Contents>
  <Contents>
    <Key>Development/Project2.xls</Key>
    ...
  </Contents>
</ListBucketResult>
    The console shows the object keys
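The following sketch reproduces the console's two listings with Boto3 (list_objects_v2 is the SDK equivalent of the GET Bucket (List Objects) request); the bucket name is the walkthrough placeholder.

    import boto3

    s3 = boto3.client("s3")

    # Root-level listing: empty prefix, '/' delimiter.
    resp = s3.list_objects_v2(Bucket="companybucket", Prefix="", Delimiter="/")
    print([c["Key"] for c in resp.get("Contents", [])])           # ['s3-dg.pdf']
    print([p["Prefix"] for p in resp.get("CommonPrefixes", [])])  # ['Development/', 'Finance/', 'Private/']

    # "Opening" the Development folder: Development/ prefix, '/' delimiter.
    resp = s3.list_objects_v2(Bucket="companybucket", Prefix="Development/", Delimiter="/")
    print([c["Key"] for c in resp.get("Contents", [])])           # keys under Development/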
Now let's return to granting users permission to list the root-level bucket items. To list bucket content, users need permission to call the s3:ListBucket action, as shown in the following policy statement. To ensure that they see only the root-level content, we add a condition that users must specify an empty prefix in the request—that is, they are not allowed to double-click any of our root-level folders. Finally, we will add a condition to require folder-style access by requiring user requests to include the delimiter parameter with the value '/'.
{
  "Sid": "AllowRootLevelListingOfCompanyBucket",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::companybucket"],
  "Condition":{
    "StringEquals":{
      "s3:prefix":[""], "s3:delimiter":["/"]
    }
  }
}
When you use the Amazon S3 console, note that when you click a bucket, the console first sends the GET Bucket location request to find the AWS region where the bucket is deployed. Then the console uses the region-specific endpoint for the bucket to send the GET Bucket (List Objects) request. As a result, if users are going to use the console, you must grant permission for the s3:GetBucketLocation action, as shown in the following policy statement.
{
  "Sid": "RequiredByS3Console",
  "Action": ["s3:GetBucketLocation"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}
To enable users to list root-level bucket content
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.
2. Replace the existing AllowGroupToSeeBucketListInTheConsole managed policy that is attached to the Consultants group with the following policy, which also allows the s3:ListBucket action. Remember to replace companybucket in the policy Resource with the name of your bucket.
For step-by-step instructions, see Editing Customer Managed Policies in the IAM User Guide. When following the step-by-step instructions, make sure to follow the directions for applying your changes to all principal entities that the policy is attached to.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
      "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::*" ]
    },
    {
      "Sid": "AllowRootLevelListingOfCompanyBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::companybucket"],
      "Condition":{
        "StringEquals":{
          "s3:prefix":[""], "s3:delimiter":["/"]
        }
      }
    }
  ]
}
3. Test the updated permissions.
1. Using the IAM user sign-in link (see To provide a sign-in link for IAM users (p. 351)), sign in to the AWS Management Console.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Click the bucket that you created for this exercise, and the console will now show the root-level bucket items. If you click any folders in the bucket, you will not be able to see the folder content, because you have not yet granted those permissions.
This test succeeds when users use the Amazon S3 console, because when you click a bucket in the console, the console implementation sends a request that includes the prefix parameter with an empty string as its value and the delimiter parameter with '/' as its value.
Step 4.3: Summary of the Group Policy
The net effect of the group policy that you added is to grant the IAM users Alice and Bob the following minimum permissions:
• List all buckets owned by the parent account.
• See root-level items in the companybucket bucket.
However, the users still cannot do much. Let's grant user-specific permissions as follows:
• Permit Alice to get and put objects in the Development folder.
• Permit Bob to get and put objects in the Finance folder.
For user-specific permissions, you attach a policy to the specific user, not to the group. In the following section, you grant Alice permission to work in the Development folder. You can repeat the steps to grant similar permission to Bob to work in the Finance folder.
Step 5: Grant IAM User Alice Specific Permissions
Now we grant additional permissions to Alice so she can see the content of the Development folder and get and put objects in that folder.
Step 5.1: Grant IAM User Alice Permission to List the Development Folder Content
For Alice to list the Development folder content, you must apply a policy to the Alice user that grants permission for the s3:ListBucket action on the companybucket bucket, provided the request includes the prefix Development/. Because we want this policy to be applied only to the user Alice, we'll use an inline policy. For more information about inline policies, see Managed Policies and Inline Policies in the IAM User Guide.
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.
2. Create an inline policy to grant the user Alice permission to list the Development folder content.
a. In the navigation pane on the left, click Users.
b. Click the user name Alice.
c. On the user details page, select the Permissions tab, and then expand the Inline Policies section.
d. Choose click here (or Create User Policy).
e. Click Custom Policy, and then click Select.
f. Enter a name for the policy in the Policy Name field.
g. Copy the following policy into the Policy Document field.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListBucketIfSpecificPrefixIsIncludedInRequest",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::companybucket"],
      "Condition":{ "StringLike":{"s3:prefix":["Development/*"] }
      }
    }
  ]
}
3. Test the change to Alice's permissions.
a. Using the IAM user sign-in link (see To provide a sign-in link for IAM users (p. 351)), sign in to the AWS Management Console.
b. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
c. In the Amazon S3 console, verify that Alice can see the list of objects in the Development/ folder in the bucket.
When the user clicks the Development folder to see the list of objects in it, the Amazon S3 console sends the ListObjects request to Amazon S3 with the prefix Development/. Because the user is granted permission to see the object list with the prefix Development/ and delimiter '/', Amazon S3 returns the list of objects with the key prefix Development/, and the console displays the list.
Step 5.2: Grant IAM User Alice Permissions to Get and Put Objects in the Development Folder
For Alice to get and put objects in the Development folder, she needs permission to call the s3:GetObject and s3:PutObject actions. The following policy statements grant these permissions, provided the request includes the prefix parameter with a value of Development/.
{
  "Sid":"AllowUserToReadWriteObjectData",
  "Action":["s3:GetObject", "s3:PutObject"],
  "Effect":"Allow",
  "Resource":["arn:aws:s3:::companybucket/Development/*"]
}
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.
2. Edit the inline policy you created in the previous step.
a. In the navigation pane on the left, click Users.
b. Click the user name Alice.
c. On the user details page, select the Permissions tab, and then expand the Inline Policies section.
d. Click Edit Policy next to the name of the policy you created in the previous step.
e. Copy the following policy into the Policy Document field, replacing the existing policy.
{
  "Version": "2012-10-17",
  "Statement":[
    {
      "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::companybucket"],
      "Condition":{
        "StringLike":{"s3:prefix":["Development/*"]
        }
      }
    },
    {
      "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder",
      "Action":["s3:GetObject", "s3:PutObject"],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::companybucket/Development/*"]
    }
  ]
}
3. Test the updated policy.
1. Using the IAM user sign-in link (see To provide a sign-in link for IAM users (p. 351)), sign in to the AWS Management Console.
2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
3. In the Amazon S3 console, verify that Alice can now add an object and download an object in the Development folder.
Step 5.3: Explicitly Deny IAM User Alice Permissions to Any Other Folders in the Bucket
User Alice can now list the root-level content in the companybucket bucket. She can also get and put objects in the Development folder. If you really want to tighten the access permissions, you could explicitly deny Alice access to any other folders in the bucket. If there is any other policy (bucket policy or ACL) that grants Alice access to any other folders in the bucket, this explicit deny overrides those permissions.
You can add the following statement to the user Alice policy that requires all requests that Alice sends to Amazon S3 to include the prefix parameter, whose value can be either Development/* or an empty string.
{
  "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
  "Action": ["s3:ListBucket"],
  "Effect": "Deny",
  "Resource": ["arn:aws:s3:::companybucket"],
  "Condition":{ "StringNotLike": {"s3:prefix":["Development/*"] },
                "Null": {"s3:prefix":false }
  }
}
Note that there are two conditional expressions in the Condition block. The result of these conditional expressions is combined by using the logical AND. If both conditions are true, the result of the combined condition is true.
• The Null conditional expression ensures that requests from Alice include the prefix parameter.
The prefix parameter requires folder-like access. If you send a request without the prefix parameter, Amazon S3 returns all the object keys.
If the request includes the prefix parameter with a null value, the expression will evaluate to true, and so the entire Condition will evaluate to true. You must allow an empty string as the value of the prefix parameter. Recall from the preceding discussion that allowing the null string allows Alice to retrieve root-level bucket items as the console does. For more information, see Step 4.2: Enable Users to List Root-Level Content of a Bucket (p. 354).
• The StringNotLike conditional expression ensures that if the value of the prefix parameter is specified and is not Development/*, the request will fail.
Follow the steps in the preceding section, and again update the inline policy you created for user Alice. Copy the following policy into the Policy Document field, replacing the existing policy.
{
  "Statement":[
    {
      "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::companybucket"],
      "Condition":{
        "StringLike":{"s3:prefix":["Development/*"]
        }
      }
    },
    {
      "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder",
      "Action":["s3:GetObject", "s3:PutObject"],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::companybucket/Development/*"]
    },
    {
      "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
      "Action": ["s3:ListBucket"],
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::companybucket"],
      "Condition":{ "StringNotLike": {"s3:prefix":["Development/*"] },
                    "Null": {"s3:prefix":false }
      }
    }
  ]
}
Step 6: Grant IAM User Bob Specific Permissions
Now you want to grant Bob permission to the Finance folder. Follow the steps you used earlier to grant permissions to Alice, but replace the Development folder with the Finance folder. For step-by-step instructions, see Step 5: Grant IAM User Alice Specific Permissions (p. 357).
Step 7: Secure the Private Folder
In this example, you have only two users. You granted all the minimum required permissions at the group level and granted user-level permissions only where they are really needed at the individual user level. This approach helps minimize the effort of managing permissions. As the number of users increases, managing permissions can become cumbersome. For example, we don't want any of the users in this example to access the content of the Private folder. How do you ensure you don't accidentally grant a user permission to it? You add a policy that explicitly denies access to the folder. An explicit deny overrides any other permissions. To ensure that the Private folder remains private, you can add the following two deny statements to the group policy:
• Add the following statement to explicitly deny any action on resources in the Private folder (companybucket/Private/*).
{
  "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
  "Action": ["s3:*"],
  "Effect": "Deny",
  "Resource":["arn:aws:s3:::companybucket/Private/*"]
}
• You also deny permission for the list objects action when the request specifies the Private/ prefix. In the console, if Bob or Alice double-clicks the Private folder, this policy causes Amazon S3 to return an error response.
{
  "Sid": "DenyListBucketOnPrivateFolder",
  "Action": ["s3:ListBucket"],
  "Effect": "Deny",
  "Resource": ["arn:aws:s3:::*"],
  "Condition":{
    "StringLike":{"s3:prefix":["Private/"]}
  }
}
Replace the Consultants group policy with an updated policy that includes the preceding deny statements. After the updated policy is applied, none of the users in the group will be able to access the Private folder in your bucket.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.
2. Replace the existing AllowGroupToSeeBucketListInTheConsole managed policy that is attached to the Consultants group with the following policy. Remember to replace companybucket in the policy with the name of your bucket.
For instructions, see Editing Customer Managed Policies in the IAM User Guide. When following the instructions, make sure to follow the directions for applying your changes to all principal entities that the policy is attached to.
{
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowRootLevelListingOfCompanyBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::companybucket"],
      "Condition":{
        "StringEquals":{"s3:prefix":[""]}
      }
    },
    {
      "Sid": "RequireFolderStyleList",
      "Action": ["s3:ListBucket"],
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::*"],
      "Condition":{
        "StringNotEquals":{"s3:delimiter":"/"}
      }
    },
    {
      "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
      "Action": ["s3:*"],
      "Effect": "Deny",
      "Resource":["arn:aws:s3:::companybucket/Private/*"]
    },
    {
      "Sid": "DenyListBucketOnPrivateFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::*"],
      "Condition":{
        "StringLike":{"s3:prefix":["Private/"]}
      }
    }
  ]
}
Cleanup
In order to clean up, go to the IAM console and remove the users Alice and Bob. For step-by-step instructions, go to Deleting an IAM User in the IAM User Guide.
To ensure that you aren't charged further for storage, you should also delete the objects and the bucket that you created for this exercise.
    Related Resources
    • Working with Policies in the IAM User Guide
    Managing Access with ACLs
    Topics
    • Access Control List (ACL) Overview (p 364)
    • Managing ACLs (p 369)
Access control lists (ACLs) are one of the resource-based access policy options (see Overview of
Managing Access (p 267)) that you can use to manage access to your buckets and objects. You can use
ACLs to grant basic read/write permissions to other AWS accounts. There are limits to managing
permissions using ACLs. For example, you can grant permissions only to other AWS accounts;
you cannot grant permissions to users in your account. You cannot grant conditional permissions,
nor can you explicitly deny permissions. ACLs are suitable for specific scenarios. For example, if a
bucket owner allows other AWS accounts to upload objects, permissions to these objects can only be
managed using the object ACL by the AWS account that owns the object. You should read the following
introductory topics, which explain the basic concepts and options available for you to manage access to
your Amazon S3 resources, and guidelines for when to use which access policy options:
• Introduction to Managing Access Permissions to Your Amazon S3 Resources (p 266)
• Guidelines for Using the Available Access Policy Options (p 277)
    Access Control List (ACL) Overview
    Topics
    • Who Is a Grantee (p 365)
    • What Permissions Can I Grant (p 366)
    • Sample ACL (p 367)
    • Canned ACL (p 368)
    • How to Specify an ACL (p 369)
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each
bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or
groups are granted access and the type of access. When a request is received against a resource,
Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access
permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource
owner full control over the resource, as shown in the following sample bucket ACL (the default object
ACL has the same structure):

<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>*** Owner-Canonical-User-ID ***</ID>
    <DisplayName>owner-display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="Canonical User">
        <ID>*** Owner-Canonical-User-ID ***</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
The sample ACL includes an Owner element that identifies the owner by the AWS account's canonical
user ID. The Grant element identifies the grantee (either an AWS account or a predefined group) and
the permission granted. This default ACL has one Grant element for the owner. You grant permissions
by adding Grant elements, each grant identifying the grantee and the permission.
Note
An ACL can have up to 100 grants.
Who Is a Grantee
A grantee can be an AWS account or one of the predefined Amazon S3 groups. You grant permission
to an AWS account by the email address or the canonical user ID. However, if you provide an email
address in your grant request, Amazon S3 finds the canonical user ID for that account and adds it to
the ACL. The resulting ACLs will always contain the canonical user ID for the AWS account, not the
AWS account's email address.
Important
You cannot use an email address to specify a grantee for any AWS Region that was created
after 12/8/2014. The following Regions were created after 12/8/2014: Asia Pacific (Mumbai),
Asia Pacific (Seoul), EU (Frankfurt), China (Beijing), and AWS GovCloud (US).
Finding an AWS Account Canonical User ID
The canonical user ID is associated with your AWS account. You can get a canonical user ID only
when you sign in to the AWS Management Console by using the root credentials of your AWS account.
You cannot use any other credentials; for example, you cannot use IAM user or federated user
credentials to get this ID. For information about security credentials, see How Do I Get Security
Credentials?
To find the canonical user ID for your AWS account
1. Sign in to the AWS Management Console at http://aws.amazon.com/console using your AWS root
credentials (do not use IAM or federated user credentials).
2. Go to Security Credentials.
3. In the Account Identifiers section, find the canonical user ID associated with your AWS account.
You can also look up the canonical user ID of an AWS account by reading the ACL of a bucket or an
object to which the AWS account has access permissions. When an individual AWS account is granted
permissions by a grant request, a grant entry is added to the ACL with the AWS account's canonical
user ID. For more information about the canonical user ID, go to AWS Account Identifiers.
Amazon S3 Predefined Groups
Amazon S3 has a set of predefined groups. When granting account access to a group, you specify one
of our URIs instead of a canonical user ID. We provide the following predefined groups:
• Authenticated Users group – Represented by http://acs.amazonaws.com/groups/global/AuthenticatedUsers.
This group represents all AWS accounts. Access permission to this group allows any AWS account
to access the resource. However, all requests must be signed (authenticated).
• All Users group – Represented by http://acs.amazonaws.com/groups/global/AllUsers.
Access permission to this group allows anyone to access the resource. The requests can be signed
(authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the
request.
• Log Delivery group – Represented by http://acs.amazonaws.com/groups/s3/LogDelivery.
WRITE permission on a bucket enables this group to write server access logs (see Server Access
Logging (p 546)) to the bucket.
Note
When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3
groups. However, the grantee cannot be an Identity and Access Management (IAM) user. For
more information about AWS users and permissions within IAM, go to Using AWS Identity and
Access Management.
Note
When you grant other AWS accounts access to your resources, be aware that the AWS
accounts can delegate their permissions to users under their accounts. This is known as
cross-account access. For information about using cross-account access, see Creating a Role
to Delegate Permissions to an IAM User in the IAM User Guide.
What Permissions Can I Grant
The following list shows the set of permissions that Amazon S3 supports in an ACL. Note that the set
of ACL permissions is the same for an object ACL and a bucket ACL. However, depending on the
context (bucket ACL or object ACL), these ACL permissions grant permissions for specific bucket or
object operations. Each entry describes what the permission means when it is granted on a bucket and
when it is granted on an object.
• READ – When granted on a bucket, allows the grantee to list the objects in the bucket. When granted
on an object, allows the grantee to read the object data and its metadata.
• WRITE – When granted on a bucket, allows the grantee to create, overwrite, and delete any object in
the bucket. Not applicable to objects.
• READ_ACP – When granted on a bucket, allows the grantee to read the bucket ACL. When granted
on an object, allows the grantee to read the object ACL.
• WRITE_ACP – When granted on a bucket, allows the grantee to write the ACL for the applicable
bucket. When granted on an object, allows the grantee to write the ACL for the applicable object.
• FULL_CONTROL – When granted on a bucket, allows the grantee the READ, WRITE, READ_ACP, and
WRITE_ACP permissions on the bucket. When granted on an object, allows the grantee the READ,
READ_ACP, and WRITE_ACP permissions on the object.
Mapping of ACL Permissions and Access Policy Permissions
As shown in the preceding list, an ACL allows only a finite set of permissions, compared to the number
of permissions you can set in an access policy (see Specifying Permissions in a Policy (p 312)).
Each of these permissions allows one or more Amazon S3 operations. The following list shows how
each ACL permission maps to the corresponding access policy permissions. As you can see, an
access policy allows more permissions than an ACL does; you use ACLs primarily to grant basic
read/write permissions, similar to file system permissions. For more information about when to use an
ACL, see Guidelines for Using the Available Access Policy Options (p 277).
• READ – When granted on a bucket, corresponds to s3:ListBucket, s3:ListBucketVersions, and
s3:ListBucketMultipartUploads. When granted on an object, corresponds to s3:GetObject,
s3:GetObjectVersion, and s3:GetObjectTorrent.
• WRITE – When granted on a bucket, corresponds to s3:PutObject and s3:DeleteObject. In addition,
when the grantee is the bucket owner, granting WRITE permission in a bucket ACL allows the
s3:DeleteObjectVersion action to be performed on any version in that bucket. Not applicable to
objects.
• READ_ACP – When granted on a bucket, corresponds to s3:GetBucketAcl. When granted on an
object, corresponds to s3:GetObjectAcl and s3:GetObjectVersionAcl.
• WRITE_ACP – When granted on a bucket, corresponds to s3:PutBucketAcl. When granted on an
object, corresponds to s3:PutObjectAcl and s3:PutObjectVersionAcl.
• FULL_CONTROL – Equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP ACL
permissions on a bucket, or the READ, READ_ACP, and WRITE_ACP ACL permissions on an object.
Accordingly, this ACL permission maps to the combination of the corresponding access policy
permissions.
Sample ACL
The following sample ACL on a bucket identifies the resource owner and a set of grants. The
format is the XML representation of an ACL in the Amazon S3 REST API. The bucket owner has
FULL_CONTROL of the resource. In addition, the ACL shows how permissions are granted on a
resource to two AWS accounts, identified by canonical user ID, and two of the predefined Amazon S3
groups discussed in the preceding section.

<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>Owner-canonical-user-ID</ID>
    <DisplayName>display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>Owner-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user1-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user2-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Canned ACL
Amazon S3 supports a set of predefined grants, known as canned ACLs. Each canned ACL has a
predefined set of grantees and permissions. The following list shows the set of canned ACLs and the
associated predefined grants:
• private – Applies to bucket and object. Owner gets FULL_CONTROL. No one else has access
rights (default).
• public-read – Applies to bucket and object. Owner gets FULL_CONTROL. The AllUsers group (see
Who Is a Grantee (p 365)) gets READ access.
• public-read-write – Applies to bucket and object. Owner gets FULL_CONTROL. The AllUsers group
gets READ and WRITE access. Granting this on a bucket is generally not recommended.
• aws-exec-read – Applies to bucket and object. Owner gets FULL_CONTROL. Amazon EC2 gets
READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3.
• authenticated-read – Applies to bucket and object. Owner gets FULL_CONTROL. The
AuthenticatedUsers group gets READ access.
• bucket-owner-read – Applies to object. Object owner gets FULL_CONTROL. Bucket owner gets
READ access. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
• bucket-owner-full-control – Applies to object. Both the object owner and the bucket owner get
FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3
ignores it.
• log-delivery-write – Applies to bucket. The LogDelivery group gets WRITE and READ_ACP
permissions on the bucket. For more information about logs, see Server Access Logging (p 546).
Note
You can specify only one of these canned ACLs in your request.
You specify a canned ACL in your request using the x-amz-acl request header. When Amazon S3
receives a request with a canned ACL in the request, it adds the predefined grants to the ACL of the
resource.
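For illustration, the following is a minimal sketch (the bucket name, key name, and file name are
placeholders) of applying a canned ACL with the AWS SDK for Java; the SDK carries the canned ACL
on the request, which the REST API expresses with the x-amz-acl header.

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

// Upload an object and apply the public-read canned ACL in the same request.
s3client.putObject(new PutObjectRequest(bucketName, keyName, new File(uploadFileName))
        .withCannedAcl(CannedAccessControlList.PublicRead));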
How to Specify an ACL
Amazon S3 APIs enable you to set an ACL when you create a bucket or an object. Amazon S3
also provides APIs to set an ACL on an existing bucket or an object. These APIs provide you with the
following methods to set an ACL:
• Set ACL using request headers – When you send a request to create a resource (bucket or
object), you set an ACL using the request headers. Using these headers, you can either specify a
canned ACL or specify grants explicitly (identifying the grantee and permissions explicitly).
• Set ACL using request body – When you send a request to set an ACL on an existing resource, you
can set the ACL either in the request headers or in the body.
For more information, see Managing ACLs (p 369).
    Managing ACLs
    Topics
    • Managing ACLs in the AWS Management Console (p 369)
    • Managing ACLs Using the AWS SDK for Java (p 370)
    • Managing ACLs Using the AWS SDK for NET (p 374)
    • Managing ACLs Using the REST API (p 379)
There are several ways you can add grants to your resource ACL. You can use the AWS Management
Console, which provides a UI to manage permissions without writing any code. You can use the REST
API or use one of the AWS SDKs. These libraries further simplify your programming tasks.
Managing ACLs in the AWS Management Console
The AWS Management Console provides a UI for you to grant ACL-based access permissions to your
buckets and objects. The Properties pane includes the Permissions tab, where you can grant ACL-
based access permissions. The following screenshot shows the Permissions for a bucket.
It shows the list of grants found in the bucket ACL. For each grant, it shows the grantee and a set of
check boxes showing the permissions granted. The permission names in the console are different from
the ACL permission names. The preceding illustration shows the mapping between the two.
The preceding illustration shows a grantee with FULL_CONTROL permissions; note that all the check
boxes are selected. All the UI components shown, except the Add bucket policy link, relate to the
ACL-based permissions. The UI allows you to add or remove permissions. To add permissions, click
Add more permissions, and to delete a permission, highlight the line and click X to the right of it.
When you are done updating permissions, click Save to update the ACL. The console sends the
necessary request to Amazon S3 to update the ACL on the specific resource.
For step-by-step instructions, go to Editing Object Permissions and Editing Bucket Permissions in the
Amazon Simple Storage Service Console User Guide.
Managing ACLs Using the AWS SDK for Java
Setting an ACL when Creating a Resource
When creating a resource (buckets and objects), you can grant permissions (see Access Control List
(ACL) Overview (p 364)) by adding an AccessControlList in your request. For each permission,
you explicitly specify the grantee and the permission.
For example, the following Java code snippet sends a PutObject request to upload an object. In the
request, the code snippet specifies permissions for two AWS accounts and the Amazon S3 AllUsers
group. The PutObject call includes the object data in the request body and the ACL grants in the
request headers (see PUT Object).

String bucketName     = "bucket-name";
String keyName        = "object-key";
String uploadFileName = "file-name";

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee("d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"),
        Permission.ReadAcp);
acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
acl.grantPermission(new EmailAddressGrantee("user@email.com"), Permission.WriteAcp);

File file = new File(uploadFileName);
s3client.putObject(new PutObjectRequest(bucketName, keyName, file).withAccessControlList(acl));
For more information about uploading objects, see Working with Amazon S3 Objects (p 98).
In the preceding code snippet, in granting each permission, you explicitly identified a grantee and a
permission. Alternatively, you can specify a canned (predefined) ACL (see Canned ACL (p 368))
in your request when creating a resource. The following Java code snippet creates a bucket and
specifies a LogDeliveryWrite canned ACL in the request to grant write permission to the Amazon
S3 LogDelivery group.

String bucketName = "bucket-name";
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.createBucket(new CreateBucketRequest(bucketName)
        .withCannedAcl(CannedAccessControlList.LogDeliveryWrite));

For information about the underlying REST API, go to PUT Bucket.
Updating an ACL on an Existing Resource
You can set an ACL on an existing object or a bucket. You create an instance of the
AccessControlList class, grant permissions, and call the appropriate set ACL method. The
following Java code snippet calls the setObjectAcl method to set an ACL on an existing object.

String bucketName = "bucket-name";
String keyName    = "object-key";

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee("d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"),
        Permission.ReadAcp);
acl.grantPermission(GroupGrantee.AuthenticatedUsers, Permission.Read);
acl.grantPermission(new EmailAddressGrantee("user@email.com"), Permission.WriteAcp);

Owner owner = new Owner();
owner.setId("852b113e7a2f25102679df27bb0ae12b3f85be6f290b936c4393484beExample");
owner.setDisplayName("display-name");
acl.setOwner(owner);

s3client.setObjectAcl(bucketName, keyName, acl);
Note
In the preceding code snippet, you can optionally read an existing ACL first by calling the
getObjectAcl method, add new grants to it, and then set the revised ACL on the resource.
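The following is a minimal sketch of that read-modify-write sequence; the bucket name and key are
placeholders, and the grant added here is only an illustration.

// Read the existing object ACL, add one more grant, and save the revised ACL.
AccessControlList existingAcl = s3client.getObjectAcl(bucketName, keyName);
existingAcl.grantPermission(GroupGrantee.AuthenticatedUsers, Permission.Read);
s3client.setObjectAcl(bucketName, keyName, existingAcl);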
Instead of granting permissions by explicitly specifying grantees and permissions, you can
also specify a canned ACL in your request. The following Java code snippet sets the ACL on an
existing object. In the request, the snippet specifies the canned ACL AuthenticatedRead to grant
read access to the Amazon S3 Authenticated Users group.

String bucketName = "bucket-name";
String keyName    = "object-key";

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.setObjectAcl(bucketName, keyName, CannedAccessControlList.AuthenticatedRead);
An Example
The following Java code example first creates a bucket. In the create request, it specifies a
log-delivery-write canned ACL. It then retrieves the ACL in an AccessControlList instance, clears
grants, and adds new grants to the AccessControlList. Finally, it saves the updated
AccessControlList; that is, it replaces the bucket ACL subresource.
The example performs the following tasks:
• Create a bucket. In the request, it specifies a log-delivery-write canned ACL, granting write
permission to the LogDelivery Amazon S3 group.
• Read the ACL on the bucket.
• Clear existing permissions and add the new permissions to the ACL.
• Call setBucketAcl to add the new ACL to the bucket.
Note
To test the following code example, you must update the code and provide your credentials,
and also provide the canonical user IDs of the accounts that you want to grant permissions to.
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.Grant;
import com.amazonaws.services.s3.model.GroupGrantee;
import com.amazonaws.services.s3.model.Permission;
import com.amazonaws.services.s3.model.Region;

public class ACLExample {
    private static String bucketName = "*** Provide bucket name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        Collection<Grant> grantCollection = new ArrayList<Grant>();
        try {
            // 1. Create the bucket with a canned ACL.
            CreateBucketRequest createBucketRequest =
                new CreateBucketRequest(bucketName, Region.US_Standard)
                    .withCannedAcl(CannedAccessControlList.LogDeliveryWrite);

            Bucket resp = s3Client.createBucket(createBucketRequest);

            // 2. Update the ACL on the existing bucket.
            AccessControlList bucketAcl = s3Client.getBucketAcl(bucketName);

            // (Optional) delete all grants.
            bucketAcl.getGrants().clear();

            // Add grant - owner.
            Grant grant0 = new Grant(
                new CanonicalGrantee("852b113e7a2f25102679df27bb0ae12b3f85be6f290b936c4393484beExample"),
                Permission.FullControl);
            grantCollection.add(grant0);

            // Add grant using a canonical user ID.
            Grant grant1 = new Grant(
                new CanonicalGrantee("d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"),
                Permission.Write);
            grantCollection.add(grant1);

            // Grant the LogDelivery group permission to write to the bucket.
            Grant grant3 = new Grant(GroupGrantee.LogDelivery, Permission.Write);
            grantCollection.add(grant3);

            bucketAcl.getGrants().addAll(grantCollection);

            // Save (replace) the ACL.
            s3Client.setBucketAcl(bucketName, bucketAcl);

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means " +
                    "the client encountered " +
                    "a serious internal problem while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Managing ACLs Using the AWS SDK for .NET
Setting an ACL When Creating a Resource
When creating a resource (buckets and objects), you can grant permissions by specifying a collection
of Grants (see Access Control List (ACL) Overview (p 364)) in your request. For each Grant, you
create an S3Grant object, explicitly specifying the grantee and the permission.
For example, the following C# code sample sends a PUT Bucket request to create a bucket and
then a PutObject request to put a new object in the new bucket. In the request, the code specifies
permissions for full control for the owner and WRITE permission for the Amazon S3 Log Delivery
group. The PutObject call includes the object data in the request body and the ACL grants in the
request headers (see PUT Object).

static string bucketName    = "*** Provide existing bucket name ***";
static string newBucketName = "*** Provide a name for a new bucket ***";
static string newKeyName    = "*** Provide a name for a new key ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Retrieve ACL from one of the owner's buckets.
S3AccessControlList acl = client.GetACL(new GetACLRequest
{
    BucketName = bucketName,
}).AccessControlList;

// Describe grant for full control for owner.
S3Grant grant1 = new S3Grant
{
    Grantee = new S3Grantee { CanonicalUser = acl.Owner.Id },
    Permission = S3Permission.FULL_CONTROL
};

// Describe grant for write permission for the LogDelivery group.
S3Grant grant2 = new S3Grant
{
    Grantee = new S3Grantee { URI = "http://acs.amazonaws.com/groups/s3/LogDelivery" },
    Permission = S3Permission.WRITE
};

PutBucketRequest request = new PutBucketRequest()
{
    BucketName = newBucketName,
    BucketRegion = S3Region.US,
    Grants = new List<S3Grant> { grant1, grant2 }
};
PutBucketResponse response = client.PutBucket(request);

PutObjectRequest objectRequest = new PutObjectRequest()
{
    ContentBody = "Object data for simple put.",
    BucketName = newBucketName,
    Key = newKeyName,
    Grants = new List<S3Grant> { grant1 }
};
PutObjectResponse objectResponse = client.PutObject(objectRequest);
For more information about uploading objects, see Working with Amazon S3 Objects (p 98).
In the preceding code sample, for each S3Grant, you explicitly identify a grantee and permission.
Alternatively, you can specify a canned (predefined) ACL (see Canned ACL (p 368)) in your
request when creating a resource. The following C# code sample creates a bucket and specifies
a LogDeliveryWrite canned ACL in the request to grant the Log Delivery group WRITE and
READ_ACP permissions on the bucket.

static string newBucketName = "*** Provide a name for a new bucket ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

PutBucketRequest request = new PutBucketRequest()
{
    BucketName = newBucketName,
    BucketRegion = S3Region.US,
    // Add canned ACL.
    CannedACL = S3CannedACL.LogDeliveryWrite
};
PutBucketResponse response = client.PutBucket(request);

For information about the underlying REST API, go to PUT Bucket.
Updating an ACL on an Existing Resource
You can set an ACL on an existing object or a bucket by calling the AmazonS3Client.PutACL
method. You create an instance of the S3AccessControlList class with a list of ACL grants and
include the list in the PutACL request.
The following C# code sample reads an existing ACL first, using the AmazonS3Client.GetACL
method, adds new grants to it, and then sets the revised ACL on the object.

static string bucketName   = "*** Provide existing bucket name ***";
static string keyName      = "*** Provide key name ***";
static string emailAddress = "*** Provide email address ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Retrieve the ACL for the object.
S3AccessControlList acl = client.GetACL(new GetACLRequest
{
    BucketName = bucketName,
    Key = keyName
}).AccessControlList;

// Retrieve the owner.
Owner owner = acl.Owner;

// Clear existing grants.
acl.Grants.Clear();

// First, add a grant to reset the owner's full permission
// (the previous clear statement removed all permissions).
S3Grant grant0 = new S3Grant
{
    Grantee = new S3Grantee { CanonicalUser = acl.Owner.Id }
};
acl.AddGrant(grant0.Grantee, S3Permission.FULL_CONTROL);

// Describe grant for permission using an email address.
S3Grant grant1 = new S3Grant
{
    Grantee = new S3Grantee { EmailAddress = emailAddress },
    Permission = S3Permission.WRITE_ACP
};

// Describe grant for permission to the LogDelivery group.
S3Grant grant2 = new S3Grant
{
    Grantee = new S3Grantee { URI = "http://acs.amazonaws.com/groups/s3/LogDelivery" },
    Permission = S3Permission.WRITE
};

// Create a new ACL.
S3AccessControlList newAcl = new S3AccessControlList
{
    Grants = new List<S3Grant> { grant1, grant2 },
    Owner = owner
};

// Set the new ACL.
PutACLResponse response = client.PutACL(new PutACLRequest
{
    BucketName = bucketName,
    Key = keyName,
    AccessControlList = newAcl
});
Instead of creating S3Grant objects and specifying the grantee and permission explicitly, you can also
specify a canned ACL in your request. The following C# code sample sets a canned ACL on a new
bucket. The sample request specifies an AuthenticatedRead canned ACL to grant read access to
the Amazon S3 Authenticated Users group.

static string newBucketName = "*** Provide new bucket name ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

PutBucketRequest request = new PutBucketRequest()
{
    BucketName = newBucketName,
    BucketRegion = S3Region.US,
    // Add canned ACL.
    CannedACL = S3CannedACL.AuthenticatedRead
};
PutBucketResponse response = client.PutBucket(request);
An Example
The following C# code example performs the following tasks:
• Create a bucket. In the request, it specifies a log-delivery-write canned ACL, granting write
permission to the LogDelivery Amazon S3 group.
• Read the ACL on the bucket.
• Clear existing permissions and add the new permissions to the ACL.
• Call PutACL to add the new ACL to the bucket.
For instructions on how to create and test a working example, see Running the Amazon S3 .NET Code
Examples (p 566).
using System;
using System.Collections.Specialized;
using System.Configuration;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System.Collections.Generic;

namespace s3.amazon.com.docsamples
{
    class ManageACLs
    {
        static string bucketName    = "*** Provide existing bucket name ***";
        static string newBucketName = "*** Provide a name for a new bucket ***";
        static string keyName       = "*** Provide key name ***";
        static string newKeyName    = "*** Provide a name for a new key ***";
        static string emailAddress  = "*** Provide email address ***";

        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            try
            {
                using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
                {
                    // Add bucket (specify canned ACL).
                    AddBucketWithCannedACL(newBucketName);
                    // Get ACL on a bucket.
                    GetBucketACL(bucketName);
                    // Add (replace) ACL on an object in a bucket.
                    AddACLToExistingObject(bucketName, keyName);

                    Console.WriteLine("Example complete.");
                }
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when writing an object",
                        amazonS3Exception.Message);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void AddBucketWithCannedACL(string bucketName)
        {
            PutBucketRequest request = new PutBucketRequest()
            {
                BucketName = bucketName,
                BucketRegion = S3Region.US,
                // Add canned ACL.
                CannedACL = S3CannedACL.LogDeliveryWrite
            };
            PutBucketResponse response = client.PutBucket(request);
        }

        static void GetBucketACL(string bucketName)
        {
            GetACLResponse response = client.GetACL(new GetACLRequest
            {
                BucketName = bucketName
            });
            S3AccessControlList accessControlList = response.AccessControlList;
        }

        static void AddACLToExistingObject(string bucketName, string keyName)
        {
            // Retrieve the ACL for the object.
            S3AccessControlList acl = client.GetACL(new GetACLRequest
            {
                BucketName = bucketName,
                Key = keyName
            }).AccessControlList;

            // Retrieve the owner.
            Owner owner = acl.Owner;

            // Clear existing grants.
            acl.Grants.Clear();

            // First, add a grant to reset the owner's full permission
            // (the previous clear statement removed all permissions).
            S3Grant grant0 = new S3Grant
            {
                Grantee = new S3Grantee { CanonicalUser = acl.Owner.Id }
            };
            acl.AddGrant(grant0.Grantee, S3Permission.FULL_CONTROL);

            // Describe grant for permission using an email address.
            S3Grant grant1 = new S3Grant
            {
                Grantee = new S3Grantee { EmailAddress = emailAddress },
                Permission = S3Permission.WRITE_ACP
            };

            // Describe grant for permission to the LogDelivery group.
            S3Grant grant2 = new S3Grant
            {
                Grantee = new S3Grantee { URI = "http://acs.amazonaws.com/groups/s3/LogDelivery" },
                Permission = S3Permission.WRITE
            };

            // Create a new ACL.
            S3AccessControlList newAcl = new S3AccessControlList
            {
                Grants = new List<S3Grant> { grant1, grant2 },
                Owner = owner
            };

            // Set the new ACL.
            PutACLResponse response = client.PutACL(new PutACLRequest
            {
                BucketName = bucketName,
                Key = keyName,
                AccessControlList = newAcl
            });

            // Get and print the response.
            Console.WriteLine(client.GetACL(new GetACLRequest()
            {
                BucketName = bucketName,
                Key = keyName
            }));
        }
    }
}
    Managing ACLs Using the REST API
For information on the REST API support for managing ACLs, see the following sections in the
Amazon Simple Storage Service API Reference:
    • GET Bucket acl
    • PUT Bucket acl
    • GET Object acl
    • PUT Object acl
    • PUT Object
    • PUT Bucket
    • PUT Object Copy
    • Initiate Multipart Upload
Protecting Data in Amazon S3
Topics
• Protecting Data Using Encryption (p 380)
• Using Reduced Redundancy Storage (p 420)
• Using Versioning (p 423)
Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary
data storage. Objects are redundantly stored on multiple devices across multiple facilities in an
Amazon S3 Region. To help better ensure data durability, Amazon S3 PUT and PUT Object copy
operations synchronously store your data across multiple facilities before returning SUCCESS. Once
the objects are stored, Amazon S3 maintains their durability by quickly detecting and repairing any lost
redundancy.
Amazon S3 also regularly verifies the integrity of data stored using checksums. If Amazon S3 detects
data corruption, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on
all network traffic to detect corruption of data packets when storing or retrieving data.
Amazon S3's standard storage is:
• Backed with the Amazon S3 Service Level Agreement.
• Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
• Designed to sustain the concurrent loss of data in two facilities.
Amazon S3 further protects your data using versioning. You can use versioning to preserve, retrieve,
and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can
easily recover from both unintended user actions and application failures. By default, requests retrieve
the most recently written version. You can retrieve older versions of an object by specifying a version of
the object in a request.
Protecting Data Using Encryption
Topics
• Protecting Data Using Server-Side Encryption (p 381)
• Protecting Data Using Client-Side Encryption (p 409)
Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at
rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using
SSL or by using client-side encryption. You have the following options for protecting data at rest in
Amazon S3:
• Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on
disks in its data centers and to decrypt it when you download the objects.
• Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to
Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
Protecting Data Using Server-Side Encryption
Server-side encryption is about data encryption at rest; that is, Amazon S3 encrypts your data at the
object level as it writes it to disks in its data centers and decrypts it for you when you access it. As
long as you authenticate your request and you have access permissions, there is no difference in the
way you access encrypted or unencrypted objects. For example, if you share your objects using a
presigned URL, that URL works the same way for both encrypted and unencrypted objects.
You have three options, depending on how you choose to manage the encryption keys:
• Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – Each object is
encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, it
encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption
uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256),
to encrypt your data. For more information, see Protecting Data Using Server-Side Encryption with
Amazon S3-Managed Encryption Keys (SSE-S3) (p 387).
• Use Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS) – Similar to SSE-S3,
but with some additional benefits along with some additional charges for using this service.
There are separate permissions for the use of an envelope key (that is, a key that protects your
data's encryption key) that provides added protection against unauthorized access of your objects
in S3. SSE-KMS also provides you with an audit trail of when your key was used and by whom.
Additionally, you have the option to create and manage encryption keys yourself, or use a default
key that is unique to you, the service you're using, and the Region you're working in. For more
information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys
(SSE-KMS) (p 381).
• Use Server-Side Encryption with Customer-Provided Keys (SSE-C) – You manage the
encryption keys, and Amazon S3 manages the encryption as it writes to disks and the decryption when
you access your objects. For more information, see Protecting Data Using Server-Side Encryption
with Customer-Provided Encryption Keys (SSE-C) (p 395).
Note
When you list objects in your bucket, the list API will return a list of all objects, regardless of
whether they are encrypted.
Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS)
Server-side encryption is about protecting data at rest. AWS Key Management Service (AWS KMS) is
a service that combines secure, highly available hardware and software to provide a key management
system scaled for the cloud. AWS KMS uses customer master keys (CMKs) to encrypt your Amazon
S3 objects. You use AWS KMS via the Encryption Keys section in the IAM console or via AWS KMS
APIs to centrally create encryption keys, define the policies that control how keys can be used, and
audit key usage to prove they are being used correctly. You can use these keys to protect your data in
Amazon S3 buckets.
The first time you add an SSE-KMS–encrypted object to a bucket in a Region, a default CMK is created
for you automatically. This key is used for SSE-KMS encryption unless you select a CMK that you
created separately using AWS Key Management Service. Creating your own CMK gives you more
flexibility, including the ability to create, rotate, and disable keys, define access controls, and audit the
encryption keys used to protect your data.
For more information, see What is AWS Key Management Service? in the AWS Key Management
Service Developer Guide. If you use AWS KMS, there are additional charges for using AWS KMS
keys. For more information, see AWS Key Management Service Pricing.
Note
If you are uploading or accessing objects encrypted by SSE-KMS, you need to use AWS
Signature Version 4 for added security. For more information on how to do this using an AWS
SDK, see Specifying Signature Version in Request Authentication.
The highlights of SSE-KMS are:
• You can choose to create and manage encryption keys yourself, or you can choose to use your
default service key, uniquely generated on a customer-by-service-by-region level.
• The ETag in the response is not the MD5 of the object data.
• The data keys used to encrypt your data are also encrypted and stored alongside the data they
protect.
• Auditable master keys can be created, rotated, and disabled from the IAM console.
• The security controls in AWS KMS can help you meet encryption-related compliance requirements.
Amazon S3 supports bucket policies that you can use if you require server-side encryption for all
objects that are stored in your bucket. For example, the following bucket policy denies upload object
(s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-
encryption header requesting server-side encryption with SSE-KMS:
{
   "Version": "2012-10-17",
   "Id": "PutObjPolicy",
   "Statement": [{
         "Sid": "DenyUnEncryptedObjectUploads",
         "Effect": "Deny",
         "Principal": "*",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::YourBucket/*",
         "Condition": {
            "StringNotEquals": {
               "s3:x-amz-server-side-encryption": "aws:kms"
            }
         }
      }
   ]
}
Amazon S3 also supports the s3:x-amz-server-side-encryption-aws-kms-key-id condition
key, which you can use to require a specific KMS key for object encryption. The KMS key you specify
in the policy must use the arn:aws:kms:region:acct-id:key/key-id format.
Note
When you upload an object, you can specify the KMS key using the x-amz-server-side-
encryption-aws-kms-key-id header. If the header is not present in the request, Amazon
S3 assumes the default KMS key. Regardless, the KMS key ID that Amazon S3 uses for
object encryption must match the KMS key ID in the policy; otherwise, Amazon S3 denies the
request.
Important
All GET and PUT requests for an object protected by AWS KMS will fail if they are not made
via SSL or by using SigV4.
SSE-KMS encrypts only the object data. Any object metadata is not encrypted.
Using AWS Key Management Service in the Amazon S3 Management Console
For more information about using KMS-managed encryption keys in the Amazon S3 Management
Console, go to Uploading Objects into Amazon S3 in the Amazon Simple Storage Service User Guide.
API Support for AWS Key Management Service in Amazon S3
The object creation REST APIs (see Specifying the AWS Key Management Service in Amazon S3
Using the REST API (p 386)) provide a request header, x-amz-server-side-encryption, that
you can use to request SSE-KMS with the value of aws:kms. There's also x-amz-server-side-
encryption-aws-kms-key-id, which specifies the ID of the AWS KMS master encryption key
that was used for the object. The Amazon S3 API also supports encryption context, with the x-amz-
server-side-encryption-context header.
The encryption context can be any value that you want, provided that the header adheres to the
Base64-encoded JSON format. However, because the encryption context is not encrypted and
because it is logged if AWS CloudTrail logging is turned on, the encryption context should not include
sensitive information. We further recommend that your context describe the data being encrypted or
decrypted, so that you can better understand the CloudTrail events produced by AWS KMS. For more
information, see Encryption Context in the AWS Key Management Service Developer Guide.
Also, Amazon S3 may append a predefined key of aws:s3:arn with the value equal to the object's ARN
for the encryption context that you provide. This only happens if the key aws:s3:arn is not already in the
encryption context that you provided, in which case this predefined key is appended when Amazon S3
processes your Put requests. If this aws:s3:arn key is already present in your encryption context, the
key is not appended a second time to your encryption context.
Having this predefined key as a part of your encryption context means that you can track relevant
requests in CloudTrail, so you'll always be able to see which S3 object's ARN was used with which
encryption key. In addition, this predefined key as a part of your encryption context guarantees that the
encryption context is not identical between different S3 objects, which provides additional security for
your objects. Your full encryption context will be validated to have the value equal to the object's ARN.
The following Amazon S3 APIs support these request headers:
• PUT operation – When uploading data using the PUT API (see PUT Object), you can specify these
request headers.
• Initiate Multipart Upload – When uploading large objects using the multipart upload API, you can
specify these headers. You specify these headers in the initiate request (see Initiate Multipart
Upload).
• POST operation – When using a POST operation to upload an object (see POST Object), instead of
the request headers, you provide the same information in the form fields.
• COPY operation – When you copy an object (see PUT Object Copy), you have both a source
object and a target object. When you pass SSE-KMS headers with the COPY operation, they will be
applied only to the target object.
The AWS SDKs also provide wrapper APIs for you to request SSE-KMS with Amazon S3.
Specifying the AWS Key Management Service in Amazon S3 Using the AWS SDKs
Topics
• AWS SDK for Java (p 384)
• AWS SDK for .NET (p 385)
When using AWS SDKs, you can request Amazon S3 to use AWS Key Management Service (AWS
KMS)–managed encryption keys. This section provides examples of using the AWS SDKs for Java
and .NET. For information about other SDKs, go to Sample Code and Libraries.
AWS SDK for Java
This section explains various Amazon S3 operations using the AWS SDK for Java and how you use
the AWS KMS–managed encryption keys.
Put Operation
When uploading an object using the AWS SDK for Java, you can request Amazon S3 to use an AWS
KMS–managed encryption key by adding the SSEAwsKeyManagementParams property as shown in
the following request:
PutObjectRequest putRequest = new PutObjectRequest(bucketName,
   keyName, file).withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams());
In this case, Amazon S3 uses the default master key (see Protecting Data Using Server-Side
Encryption with AWS KMS–Managed Keys (SSE-KMS) (p 381)). You can optionally create your own
key and specify that in the request:
PutObjectRequest putRequest = new PutObjectRequest(bucketName,
   keyName, file).withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(keyID));
For more information about creating keys, go to Programming the AWS KMS API in the AWS Key
Management Service Developer Guide.
For working code examples of uploading an object, see the following topics. You will need to update
those code examples and provide encryption information as shown in the preceding code fragment.
• For uploading an object in a single operation, see Upload an Object Using the AWS SDK for
Java (p 157).
• For a multipart upload, see the following topics:
• Using the high-level multipart upload API, see Upload a File (p 172).
• If you are using the low-level multipart upload API, see Upload a File (p 177).
Copy Operation
When copying objects, you add the same request properties (ServerSideEncryptionMethod
and ServerSideEncryptionKeyManagementServiceKeyId) to request Amazon S3 to use an
AWS KMS–managed encryption key. For more information about copying objects, see Copying
Objects (p 212).
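As a minimal sketch of that approach (the bucket names, keys, and keyID are placeholders, and this
assumes the copy request accepts the same SSEAwsKeyManagementParams property shown for
uploads), the copy request might look like the following:

// Request SSE-KMS for the destination object of a copy.
CopyObjectRequest copyRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, targetBucketName, targetKey)
        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(keyID));
s3client.copyObject(copyRequest);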
Presigned URLs
When creating a presigned URL for an object encrypted using an AWS KMS–managed encryption
key, you must explicitly specify Signature Version 4:

ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setSignerOverride("AWSS3V4SignerType");
AmazonS3Client s3client = new AmazonS3Client(
        new ProfileCredentialsProvider(), clientConfiguration);

For a code example, see Generate a Presigned Object URL using AWS SDK for Java (p 152).
AWS SDK for .NET
This section explains various Amazon S3 operations using the AWS SDK for .NET and how you use
the AWS KMS–managed encryption keys.
Put Operation
When uploading an object using the AWS SDK for .NET, you can request Amazon S3 to use an AWS
KMS–managed encryption key by adding the ServerSideEncryptionMethod property as shown in
the following request:
PutObjectRequest putRequest = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    // other properties
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS
};
In this case, Amazon S3 uses the default master key (see Protecting Data Using Server-Side
Encryption with AWS KMS–Managed Keys (SSE-KMS) (p 381)). You can optionally create your own
key and specify that in the request:
PutObjectRequest putRequest1 = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    // other properties
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS,
    ServerSideEncryptionKeyManagementServiceKeyId = keyId
};
For more information about creating keys, see Programming the AWS KMS API in the AWS Key
Management Service Developer Guide.
For working code examples of uploading an object, see the following topics. You will need to update
these code examples and provide encryption information as shown in the preceding code fragment.
• For uploading an object in a single operation, see Upload an Object Using the AWS SDK
for .NET (p 159).
• For a multipart upload, see the following topics:
• Using the high-level multipart upload API, see Upload a File (p 181).
• Using the low-level multipart upload API, see Upload a File (p 190).
Copy Operation
When copying objects, you add the same request properties (ServerSideEncryptionMethod
and ServerSideEncryptionKeyManagementServiceKeyId) to request Amazon S3 to use an
AWS KMS–managed encryption key. For more information about copying objects, see Copying
Objects (p 212).
Presigned URLs
When creating a presigned URL for an object encrypted using an AWS KMS–managed encryption
key, you must explicitly specify Signature Version 4:

AWSConfigsS3.UseSignatureVersion4 = true;

For a code example, see Generate a Presigned Object URL using AWS SDK for .NET (p 155).
Specifying the AWS Key Management Service in Amazon S3 Using the REST API
At the time of object creation, that is, when you are uploading a new object or making a copy of
an existing object, you can specify the use of server-side encryption with AWS KMS–managed
encryption keys (SSE-KMS) to encrypt your data by adding the x-amz-server-side-encryption
header to the request. Set the value of the header to the encryption algorithm aws:kms. Amazon S3
confirms that your object is stored using SSE-KMS by returning the response header x-amz-server-
side-encryption.
The following REST upload APIs accept the x-amz-server-side-encryption request header:
• PUT Object
• PUT Object Copy
• POST Object
• Initiate Multipart Upload
When uploading large objects using the multipart upload API, you can specify SSE-KMS by adding the
x-amz-server-side-encryption header to the Initiate Multipart Upload request with the value of
aws:kms. When copying an existing object, regardless of whether the source object is encrypted or
not, the destination object is not encrypted unless you explicitly request server-side encryption.
The response headers of the following REST APIs return the x-amz-server-side-encryption
header when an object is stored using server-side encryption:
• PUT Object
• PUT Object Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part Copy
• Complete Multipart Upload
• Get Object
• Head Object
Note
Encryption request headers should not be sent for GET requests and HEAD requests if your
object uses SSE-KMS, or you'll get an HTTP 400 (Bad Request) error.
Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3)
Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-
managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each
object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it
regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available,
256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
Amazon S3 supports bucket policies that you can use if you require server-side encryption for all
objects that are stored in your bucket. For example, the following bucket policy denies upload object
(s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-
encryption header requesting server-side encryption:
{
   "Version": "2012-10-17",
   "Id": "PutObjPolicy",
   "Statement": [
      {
         "Sid": "DenyIncorrectEncryptionHeader",
         "Effect": "Deny",
         "Principal": "*",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::YourBucket/*",
         "Condition": {
            "StringNotEquals": {
               "s3:x-amz-server-side-encryption": "AES256"
            }
         }
      },
      {
         "Sid": "DenyUnEncryptedObjectUploads",
         "Effect": "Deny",
         "Principal": "*",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::YourBucket/*",
         "Condition": {
            "Null": {
               "s3:x-amz-server-side-encryption": "true"
            }
         }
      }
   ]
}
Server-side encryption encrypts only the object data. Any object metadata is not encrypted.
API Support for Server-Side Encryption
The object creation REST APIs (see Specifying Server-Side Encryption Using the REST
API (p 394)) provide a request header, x-amz-server-side-encryption, that you can use to
request server-side encryption.
The following Amazon S3 APIs support these headers:
• PUT operation – When uploading data using the PUT API (see PUT Object), you can specify these request headers.
• Initiate Multipart Upload – When uploading large objects using the multipart upload API, you can specify these headers. You specify these headers in the initiate request (see Initiate Multipart Upload).
• POST operation – When using a POST operation to upload an object (see POST Object), instead of the request headers, you provide the same information in the form fields.
• COPY operation – When you copy an object (see PUT Object - Copy), you have both a source object and a target object.
The AWS SDKs also provide wrapper APIs for you to request server-side encryption. You can also use the AWS Management Console to upload objects and request server-side encryption.
Note
You can't enforce whether or not objects are encrypted with SSE-S3 when they are uploaded using presigned URLs. This is because the only way you can specify server-side encryption is through the AWS Management Console or through an HTTP request header. For more information, see Specifying Conditions in a Policy (p. 315).
Specifying Server-Side Encryption Using the AWS SDK for Java
When using the AWS SDK for Java to upload an object, you can use the ObjectMetadata property of the PutObjectRequest to set the x-amz-server-side-encryption request header (see Specifying Server-Side Encryption Using the REST API (p. 394)). When you call the putObject method of the AmazonS3 client, as shown in the following Java code sample, Amazon S3 encrypts and saves the data.
File file = new File(uploadFileName);
PutObjectRequest putRequest = new PutObjectRequest(
        bucketName, keyName, file);

// Request server-side encryption.
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

putRequest.setMetadata(objectMetadata);
PutObjectResult response = s3client.putObject(putRequest);
System.out.println("Uploaded object encryption status is " +
        response.getSSEAlgorithm());
In response, Amazon S3 returns the encryption algorithm used for encrypting your object data, which you can check using the getSSEAlgorithm method.
For a working sample that shows how to upload an object, see Upload an Object Using the AWS SDK for Java (p. 157). For server-side encryption, add the ObjectMetadata property to your request.
When uploading large objects using the multipart upload API, you can request server-side encryption for the object that you are uploading:
• When using the low-level multipart upload API (see Upload a File (p. 177)) to upload a large object, you can specify server-side encryption when you initiate the multipart upload. That is, you add the ObjectMetadata property by calling the InitiateMultipartUploadRequest.setObjectMetadata method.
• When using the high-level multipart upload API (see Using the AWS Java SDK for Multipart Upload (High-Level API) (p. 172)), the TransferManager class provides methods to upload objects. You can call any of the upload methods that take ObjectMetadata as a parameter, as illustrated in the sketch following this list.
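The following is a minimal, hedged sketch (not one of this guide's numbered samples) of that high-level approach; the bucket name, object key, and file path are placeholders, and the request carries ObjectMetadata that asks for SSE-S3.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelSseS3UploadSketch {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

        // Request SSE-S3 for the uploaded object.
        ObjectMetadata objectMetadata = new ObjectMetadata();
        objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

        PutObjectRequest request = new PutObjectRequest(
                "***bucket name***", "***object key***", new File("***file path***"))
                .withMetadata(objectMetadata);

        // TransferManager uploads asynchronously; block until this transfer completes.
        Upload upload = tm.upload(request);
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}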
Determining the Encryption Algorithm Used
To determine the encryption state of an existing object, you can retrieve the object metadata as shown in the following Java code sample.
GetObjectMetadataRequest request2 =
        new GetObjectMetadataRequest(bucketName, keyName);

ObjectMetadata metadata = s3client.getObjectMetadata(request2);
System.out.println("Encryption algorithm used: " +
        metadata.getSSEAlgorithm());
If server-side encryption is not used for the object that is stored in Amazon S3, the method returns null.
Changing Server-Side Encryption of an Existing Object (Copy Operation)
To change the encryption state of an existing object, you make a copy of the object and delete the source object. Note that, by default, the copy API does not encrypt the target unless you explicitly request server-side encryption. You can request the encryption of the target object by using the ObjectMetadata property to specify server-side encryption in the CopyObjectRequest, as shown in the following Java code sample.
CopyObjectRequest copyObjRequest = new CopyObjectRequest(
        sourceBucket, sourceKey, targetBucket, targetKey);

// Request server-side encryption.
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

copyObjRequest.setNewObjectMetadata(objectMetadata);

CopyObjectResult response = s3client.copyObject(copyObjRequest);
System.out.println("Copied object encryption status is " +
        response.getSSEAlgorithm());
For a working sample of how to copy an object, see Copy an Object Using the AWS SDK for Java (p. 214). You can specify server-side encryption in the CopyObjectRequest object as shown in the preceding code sample.
Specifying Server-Side Encryption Using the AWS SDK for .NET
When using the AWS SDK for .NET to upload an object, you can use the WithServerSideEncryptionMethod property of PutObjectRequest to set the x-amz-server-side-encryption request header (see Specifying Server-Side Encryption Using the REST API (p. 394)). When you call the PutObject method of the AmazonS3 client, as shown in the following C# code sample, Amazon S3 encrypts and saves the data.
static AmazonS3 client;
client = new AmazonS3Client(accessKeyID, secretAccessKeyID);

PutObjectRequest request = new PutObjectRequest();
request.WithContentBody("Object data for simple put.")
    .WithBucketName(bucketName)
    .WithKey(keyName)
    .WithServerSideEncryptionMethod(ServerSideEncryptionMethod.AES256);

S3Response response = client.PutObject(request);

// Check the response header to determine if the object is encrypted.
ServerSideEncryptionMethod destinationObjectEncryptionStatus =
    response.ServerSideEncryptionMethod;
In response, Amazon S3 returns the encryption algorithm that is used to encrypt your object data, which you can check using the ServerSideEncryptionMethod property.
For a working sample of how to upload an object, see Upload an Object Using the AWS SDK for .NET (p. 159). For server-side encryption, set the ServerSideEncryptionMethod property by calling the WithServerSideEncryptionMethod method.
To upload large objects using the multipart upload API, you can specify server-side encryption for the objects that you are uploading:
• When using the low-level multipart upload API (see Using the AWS .NET SDK for Multipart Upload (Low-Level API) (p. 190)) to upload a large object, you can specify server-side encryption in your InitiateMultipartUpload request. That is, you set the ServerSideEncryptionMethod property on your InitiateMultipartUploadRequest by calling the WithServerSideEncryptionMethod method.
• When using the high-level multipart upload API (see Using the AWS .NET SDK for Multipart Upload (High-Level API) (p. 181)), the TransferUtility class provides methods (Upload and UploadDirectory) to upload objects. In this case, you can request server-side encryption using the TransferUtilityUploadRequest and TransferUtilityUploadDirectoryRequest objects.
Determining the Encryption Algorithm Used
To determine the encryption state of an existing object, you can retrieve the object metadata as shown in the following C# code sample.
AmazonS3 client;
client = new AmazonS3Client(accessKeyID, secretAccessKeyID);

ServerSideEncryptionMethod objectEncryption;
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest()
    .WithBucketName(bucketName)
    .WithKey(keyName);

objectEncryption = client.GetObjectMetadata(metadataRequest)
    .ServerSideEncryptionMethod;
The encryption algorithm is specified with an enum. If the stored object is not encrypted (the default behavior), then the ServerSideEncryptionMethod property of the object defaults to None.
Changing Server-Side Encryption of an Existing Object (Copy Operation)
To change the encryption state of an existing object, you can make a copy of the object and delete the source object. Note that, by default, the copy API does not encrypt the target unless you explicitly request server-side encryption of the destination object. The following C# code sample makes a copy of an object. The request explicitly specifies server-side encryption for the destination object.
AmazonS3 client;
client = new AmazonS3Client(accessKeyID, secretAccessKeyID);

CopyObjectResponse response = client.CopyObject(new CopyObjectRequest()
    .WithSourceBucket(sourceBucketName)
    .WithSourceKey(sourceObjectKey)
    .WithDestinationBucket(targetBucketName)
    .WithDestinationKey(targetObjectKey)
    .WithServerSideEncryptionMethod(ServerSideEncryptionMethod.AES256)
);

// Check the response header to determine if the object is encrypted.
ServerSideEncryptionMethod destinationObjectEncryptionStatus =
    response.ServerSideEncryptionMethod;

For a working sample of how to copy an object, see Copy an Object Using the AWS SDK for .NET (p. 215). You can specify server-side encryption in the CopyObjectRequest object as shown in the preceding code sample.
Specifying Server-Side Encryption Using the AWS SDK for PHP
This topic guides you through using classes from the AWS SDK for PHP to add server-side encryption to objects that you are uploading to Amazon S3.
Note
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 566) and have the AWS SDK for PHP properly installed.
You can use the Aws\S3\S3Client::putObject() method to upload an object to Amazon S3. For a working sample of how to upload an object, see Upload an Object Using the AWS SDK for PHP (p. 161).
To add the x-amz-server-side-encryption request header (see Specifying Server-Side Encryption Using the REST API (p. 394)) to your upload request, specify the array parameter's ServerSideEncryption key with the value AES256, as shown in the following PHP code sample.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
// $filepath should be an absolute path to a file on disk.
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Upload a file with server-side encryption.
$result = $s3->putObject(array(
    'Bucket'               => $bucket,
    'Key'                  => $keyname,
    'SourceFile'           => $filepath,
    'ServerSideEncryption' => 'AES256',
));
In response, Amazon S3 returns the x-amz-server-side-encryption header with the value of the encryption algorithm used to encrypt your object data.
To upload large objects using the multipart upload API, you can specify server-side encryption for the objects that you are uploading:
• When using the low-level multipart upload API (see Using the AWS PHP SDK for Multipart Upload (Low-Level API) (p. 200)), you can specify server-side encryption when you call the Aws\S3\S3Client::createMultipartUpload() method. To add the x-amz-server-side-encryption request header to your request, specify the array parameter's ServerSideEncryption key with the value AES256.
• When using the high-level multipart upload, you can specify server-side encryption using the Aws\S3\Model\MultipartUpload\UploadBuilder::setOption() method, for example setOption('ServerSideEncryption', 'AES256'). For an example of using the setOption() method with the high-level UploadBuilder, see Using the AWS PHP SDK for Multipart Upload (High-Level API) (p. 196).
Determining Encryption Algorithm Used
To determine the encryption state of an existing object, retrieve the object metadata by calling the Aws\S3\S3Client::headObject() method, as shown in the following PHP code sample.
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Check which server-side encryption algorithm is used.
$result = $s3->headObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
));
echo $result['ServerSideEncryption'];
Changing Server-Side Encryption of an Existing Object (Copy Operation)
To change the encryption state of an existing object, make a copy of the object using the Aws\S3\S3Client::copyObject() method and delete the source object. Note that, by default, copyObject() does not encrypt the target unless you explicitly request server-side encryption of the destination object using the array parameter's ServerSideEncryption key with the value AES256. The following PHP code sample makes a copy of an object and adds server-side encryption to the copied object.
use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Copy an object and add server-side encryption.
$result = $s3->copyObject(array(
    'Bucket'               => $targetBucket,
    'Key'                  => $targetKeyname,
    'CopySource'           => "{$sourceBucket}/{$sourceKeyname}",
    'ServerSideEncryption' => 'AES256',
));
For a working sample of how to copy an object, see Copy an Object Using the AWS SDK for PHP (p. 218).
Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::factory() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::copyObject() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::createMultipartUpload() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::headObject() Method
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client::putObject() Method
• Aws\S3\Model\MultipartUpload\UploadBuilder::setOption() Method
• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation
Specifying Server-Side Encryption Using the AWS SDK for Ruby
When using the AWS SDK for Ruby to upload an object, you can specify that the object be stored encrypted at rest by passing the :server_side_encryption option in the #write instance method. When you read the object back, it is automatically decrypted.
The following Ruby script sample demonstrates how to specify that a file uploaded to Amazon S3 be encrypted at rest.
# Upload a file and set server-side encryption.
key_name = File.basename(file_name)
s3.buckets[bucket_name].objects[key_name].write(:file => file_name,
  :server_side_encryption => :aes256)
For a working sample that shows how to upload an object, see Upload an Object Using the AWS SDK for Ruby (p. 163).
Determining the Encryption Algorithm Used
To check the encryption algorithm that is used for encrypting an object's data at rest, use the #server_side_encryption method of the S3Object instance. The following code sample demonstrates how to determine the encryption state of an existing object.
# Determine server-side encryption of an object.
enc = s3.buckets[bucket_name].objects[key_name].server_side_encryption
enc_state = (enc != nil) ? enc : "not set"
puts "Encryption of #{key_name} is #{enc_state}"
If server-side encryption is not used for the object that is stored in Amazon S3, the method returns nil.
Changing Server-Side Encryption of an Existing Object (Copy Operation)
To change the encryption state of an existing object, make a copy of the object and delete the source object. The Ruby API S3Object class has #copy_from and #copy_to methods that you can use to copy objects. Note that, by default, the copy methods do not encrypt the target unless you explicitly request server-side encryption. You can request the encryption of the target object by specifying the :server_side_encryption value in the options hash argument, as shown in the following Ruby code sample. The code sample demonstrates how to use the #copy_to method.
s3 = AWS::S3.new

# Upload a file and set server-side encryption.
bucket1 = s3.buckets[source_bucket]
bucket2 = s3.buckets[target_bucket]
obj1 = bucket1.objects[source_key]
obj2 = bucket2.objects[target_key]

obj1.copy_to(obj2, :server_side_encryption => :aes256)
For a working sample of how to copy an object, see Copy an Object Using the AWS SDK for Ruby (p. 221).
Specifying Server-Side Encryption Using the REST API
At the time of object creation (that is, when you are uploading a new object or making a copy of an existing object), you can specify if you want Amazon S3 to encrypt your data by adding the x-amz-server-side-encryption header to the request. Set the value of the header to the encryption algorithm AES256 that Amazon S3 supports. Amazon S3 confirms that your object is stored using server-side encryption by returning the response header x-amz-server-side-encryption.
The following REST upload APIs accept the x-amz-server-side-encryption request header:
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
When uploading large objects using the multipart upload API, you can specify server-side encryption by adding the x-amz-server-side-encryption header to the Initiate Multipart Upload request. When copying an existing object, regardless of whether the source object is encrypted or not, the destination object is not encrypted unless you explicitly request server-side encryption.
The response headers of the following REST APIs return the x-amz-server-side-encryption header when an object is stored using server-side encryption:
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy
• Complete Multipart Upload
• GET Object
• HEAD Object
Note
Encryption request headers should not be sent for GET requests and HEAD requests if your object uses SSE-S3, or you'll get an HTTP 400 Bad Request error.
Specifying Server-Side Encryption Using the AWS Management Console
When uploading an object using the AWS Management Console, you can specify server-side encryption. For an example of how to upload an object, go to Uploading Objects into Amazon S3.
When you copy an object using the AWS Management Console, the console copies the object as is. That is, if the copy source is encrypted, the target object is encrypted. For an example of how to copy an object using the console, go to Copying an Object. The console also allows you to update properties of one or more objects. For example, you can select one or more objects and select server-side encryption.
Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C)
Server-side encryption is about protecting data at rest. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys. With the encryption key you provide as part of your request, Amazon S3 manages both the encryption, as it writes to disks, and decryption, when you access your objects. Therefore, you don't need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.
When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Important
Amazon S3 does not store the encryption key you provide. Instead, we store a randomly salted HMAC value of the encryption key in order to validate future requests. The salted HMAC value cannot be used to derive the value of the encryption key or to decrypt the contents of the encrypted object. That means if you lose the encryption key, you lose the object.
When you retrieve an object, you must provide the same encryption key as part of your request. Amazon S3 first verifies that the encryption key you provided matches, and then decrypts the object before returning the object data to you.
The highlights of SSE-C are:
• You must use HTTPS.
Important
Amazon S3 will reject any requests made over HTTP when using SSE-C. For security considerations, we recommend that you consider any key you send erroneously over HTTP to be compromised. You should discard the key and rotate as appropriate.
• The ETag in the response is not the MD5 of the object data.
• You manage a mapping of which encryption key was used to encrypt which object. Amazon S3 does not store encryption keys. You are responsible for tracking which encryption key you provided for which object.
• If your bucket is versioning-enabled, each object version you upload using this feature can have its own encryption key. You are responsible for tracking which encryption key was used for which object version.
• Because you manage encryption keys on the client side, you manage any additional safeguards, such as key rotation, on the client side.
Caution
If you lose the encryption key, any GET request for an object without its encryption key will fail, and you lose the object.
Using SSE-C
When using server-side encryption with customer-provided encryption keys (SSE-C), you must provide encryption key information using the following request headers:
• x-amz-server-side-encryption-customer-algorithm – Use this header to specify the encryption algorithm. The header value must be AES256.
• x-amz-server-side-encryption-customer-key – Use this header to provide the 256-bit, base64-encoded encryption key for Amazon S3 to use to encrypt or decrypt your data.
• x-amz-server-side-encryption-customer-key-MD5 – Use this header to provide the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
You can use AWS SDK wrapper libraries to add these headers to your request. If you need to, you can make the Amazon S3 REST API calls directly in your application.
Note
You cannot use the Amazon S3 console to upload an object and request SSE-C. You also cannot use the console to update (for example, change the storage class or add metadata) an existing object stored using SSE-C.
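As a hedged illustration of what those header values look like (this fragment is not part of the guide's samples), the following Java sketch generates a 256-bit AES key and computes the base64-encoded key and its base64-encoded MD5 digest, which correspond to the customer-key and customer-key-MD5 headers; the algorithm header is simply the string AES256. When you use the AWS SDK wrapper classes, they compute these values for you.

import java.security.MessageDigest;
import java.security.SecureRandom;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.xml.bind.DatatypeConverter;

public class SseCHeaderValuesSketch {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; in practice you manage and safeguard this key yourself.
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256, new SecureRandom());
        SecretKey key = generator.generateKey();

        // Value for x-amz-server-side-encryption-customer-key.
        String base64Key = DatatypeConverter.printBase64Binary(key.getEncoded());

        // Value for x-amz-server-side-encryption-customer-key-MD5
        // (base64 of the MD5 digest of the raw key bytes).
        byte[] md5 = MessageDigest.getInstance("MD5").digest(key.getEncoded());
        String base64KeyMd5 = DatatypeConverter.printBase64Binary(md5);

        System.out.println("x-amz-server-side-encryption-customer-algorithm: AES256");
        System.out.println("x-amz-server-side-encryption-customer-key: " + base64Key);
        System.out.println("x-amz-server-side-encryption-customer-key-MD5: " + base64KeyMd5);
    }
}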
The following Amazon S3 APIs support these headers:
• GET operation – When retrieving objects using the GET API (see GET Object), you can specify the request headers. Torrents are not supported for objects encrypted using SSE-C.
• HEAD operation – To retrieve object metadata using the HEAD API (see HEAD Object), you can specify these request headers.
• PUT operation – When uploading data using the PUT API (see PUT Object), you can specify these request headers.
• Multipart Upload – When uploading large objects using the multipart upload API, you can specify these headers. You specify these headers in the initiate request (see Initiate Multipart Upload) and each subsequent part upload request (Upload Part). For each part upload request, the encryption information must be the same as what you provided in the initiate multipart upload request.
• POST operation – When using a POST operation to upload an object (see POST Object), instead of the request headers, you provide the same information in the form fields.
• Copy operation – When you copy an object (see PUT Object - Copy), you have both a source object and a target object. Accordingly, you have the following to consider:
  • If you want the target object encrypted using server-side encryption with AWS-managed keys, you must provide the x-amz-server-side-encryption request header.
  • If you want the target object encrypted using SSE-C, you must provide encryption information using the three headers described in the preceding list.
  • If the source object is encrypted using SSE-C, you must provide encryption key information using the following headers so that Amazon S3 can decrypt the object for copying.
  • x-amz-copy-source-server-side-encryption-customer-algorithm – Include this header to specify the algorithm Amazon S3 should use to decrypt the source object. This value must be AES256.
  • x-amz-copy-source-server-side-encryption-customer-key – Include this header to provide the base64-encoded encryption key for Amazon S3 to use to decrypt the source object. This encryption key must be the one that you provided Amazon S3 when you created the source object; otherwise, Amazon S3 will not be able to decrypt the object.
  • x-amz-copy-source-server-side-encryption-customer-key-MD5 – Include this header to provide the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321.
Presigned URL and SSE-C
You can generate a presigned URL that can be used for operations such as uploading a new object, retrieving an existing object, or retrieving object metadata. Presigned URLs support SSE-C as follows:
• When creating a presigned URL, you must specify the algorithm using the x-amz-server-side-encryption-customer-algorithm header in the signature calculation.
• When using the presigned URL to upload a new object, retrieve an existing object, or retrieve only object metadata, you must provide all the encryption headers in your client application, as illustrated in the sketch following this list.
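The following is a minimal, hypothetical sketch of that second point (it is not part of this guide's samples). It assumes you already have a presigned GET URL for an SSE-C object, generated with the algorithm header included in the signature, plus the base64-encoded key and key MD5 values, and it adds the three SSE-C headers to the request using standard Java classes.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PresignedUrlSseCSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: a presigned URL for the SSE-C object and the key values.
        String presignedUrl = "***presigned URL***";
        String base64Key    = "***base64-encoded 256-bit key***";
        String base64KeyMd5 = "***base64-encoded MD5 of the key***";

        HttpURLConnection connection =
                (HttpURLConnection) new URL(presignedUrl).openConnection();
        connection.setRequestMethod("GET");

        // The same encryption headers used when the object was created
        // must accompany the request made with the presigned URL.
        connection.setRequestProperty(
                "x-amz-server-side-encryption-customer-algorithm", "AES256");
        connection.setRequestProperty(
                "x-amz-server-side-encryption-customer-key", base64Key);
        connection.setRequestProperty(
                "x-amz-server-side-encryption-customer-key-MD5", base64KeyMd5);

        System.out.println("HTTP status: " + connection.getResponseCode());
        try (InputStream in = connection.getInputStream()) {
            // Read the decrypted object data from the response stream here.
        }
    }
}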
For more information, see the following topics:
• Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the AWS Java SDK (p. 397)
• Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the .NET SDK (p. 403)
• Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the REST API (p. 409)
Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the AWS Java SDK
The following Java code example illustrates server-side encryption with customer-provided keys (SSE-C) (see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) (p. 395)). The example performs the following operations; each operation shows how you specify SSE-C related headers in the request:
• Put object – Upload an object, requesting server-side encryption using a customer-provided encryption key.
• Get object – Download the object that you uploaded in the previous step. The example shows that in the GET request you must provide the same encryption information that you provided at the time you uploaded the object, so that Amazon S3 can decrypt the object before returning it.
• Get object metadata – The request shows that the same encryption information that you specified when creating the object is required to retrieve the object's metadata.
• Copy object – This example makes a copy of the previously uploaded object. Because the source object is stored using SSE-C, you must provide the encryption information in your copy request. By default, the object copy will not be encrypted. But in this example, you request that Amazon S3 store the object copy encrypted by using SSE-C, and therefore you must provide SSE-C encryption information for the target as well.
Note
This example shows how to upload an object in a single operation. When using the multipart upload API to upload large objects, you provide the same encryption information in your request, as shown in the following example. For multipart upload AWS SDK for Java examples, see Using the AWS Java SDK for Multipart Upload (High-Level API) (p. 172) and Using the AWS Java SDK for Multipart Upload (Low-Level API) (p. 177).
The AWS SDK for Java provides the SSECustomerKey class for you to add the required encryption information (see Using SSE-C (p. 396)) in your request. You are required to provide only the encryption key. The Java SDK sets the values for the MD5 digest of the encryption key and the algorithm.
For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.SSECustomerKey;

public class ServerSideEncryptionUsingClientSideEncryptionKey {
    private static String bucketName     = "*** Provide bucket name ***";
    private static String keyName        = "*** Provide key ***";
    private static String uploadFileName = "*** Provide file name ***";
    private static String targetKeyName  = "*** provide target key ***";
    private static AmazonS3 s3client;

    public static void main(String[] args) throws IOException,
            NoSuchAlgorithmException {
        s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);
            // Create encryption key.
            SecretKey secretKey = generateSecretKey();
            SSECustomerKey sseKey = new SSECustomerKey(secretKey);

            // 1. Upload object.
            uploadObject(file, sseKey);
            // 2. Download object.
            downloadObject(sseKey);
            // 3. Get object metadata (and verify AES256 encryption).
            retrieveObjectMetadata(sseKey);
            // 4. Copy object (both source and target use SSE-C).
            copyObject(sseKey);

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response " +
                    "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    private static void copyObject(SSECustomerKey sseKey) {
        // Create a new encryption key for the target so it is saved using SSE-C.
        SecretKey secretKey2 = generateSecretKey();
        SSECustomerKey newSseKey = new SSECustomerKey(secretKey2);

        CopyObjectRequest copyRequest = new CopyObjectRequest(bucketName,
                keyName, bucketName, targetKeyName)
                .withSourceSSECustomerKey(sseKey)
                .withDestinationSSECustomerKey(newSseKey);

        s3client.copyObject(copyRequest);
        System.out.println("Object copied");
    }

    private static void retrieveObjectMetadata(SSECustomerKey sseKey) {
        GetObjectMetadataRequest getMetadataRequest = new
                GetObjectMetadataRequest(bucketName, keyName)
                .withSSECustomerKey(sseKey);
        ObjectMetadata objectMetadata =
                s3client.getObjectMetadata(getMetadataRequest);
        System.out.println("object size " +
                objectMetadata.getContentLength());
        System.out.println("Metadata retrieved");
    }

    private static PutObjectRequest uploadObject(File file, SSECustomerKey
            sseKey) {
        // 1. Upload object.
        PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,
                keyName, file)
                .withSSECustomerKey(sseKey);
        s3client.putObject(putObjectRequest);
        System.out.println("Object uploaded");
        return putObjectRequest;
    }

    private static void downloadObject(SSECustomerKey sseKey) throws
            IOException {
        // Get the object (the same encryption key must be supplied).
        GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName,
                keyName)
                .withSSECustomerKey(sseKey);
        S3Object s3Object = s3client.getObject(getObjectRequest);

        System.out.println("Printing bytes retrieved.");
        displayTextInputStream(s3Object.getObjectContent());
    }

    private static void displayTextInputStream(S3ObjectInputStream input)
            throws IOException {
        // Read one text line at a time and display.
        BufferedReader reader = new BufferedReader(new
                InputStreamReader(input));
        while (true) {
            String line = reader.readLine();
            if (line == null) break;
            System.out.println("    " + line);
        }
        System.out.println();
    }

    private static SecretKey generateSecretKey() {
        try {
            KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(256, new SecureRandom());
            return generator.generateKey();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
            return null;
        }
    }
}
Other Amazon S3 Operations and SSE-C
The example in the preceding section shows how to request server-side encryption with customer-provided keys (SSE-C) in the PUT, GET, HEAD, and Copy operations. This section describes other APIs that support SSE-C.
To upload large objects, you can use the multipart upload API (see Uploading Objects Using Multipart Upload API (p. 165)). You can use either high-level or low-level APIs to upload large objects. These APIs support encryption-related headers in the request:
• When using the high-level multipart upload API, you provide the encryption-specific information to the TransferManager (see Using the AWS Java SDK for Multipart Upload (High-Level API) (p. 172)).
• When using the low-level API, you provide encryption-related information in the initiate multipart upload request, followed by identical encryption information in the subsequent upload part requests. You do not need to provide any encryption-specific headers in your complete multipart upload request. For examples, see Using the AWS Java SDK for Multipart Upload (Low-Level API) (p. 177).
The following example uses TransferManager to create objects and shows how to provide SSE-C related information. The example does the following:
• Creates an object using the TransferManager.upload method. In the PutObjectRequest instance, you provide encryption key information to request that Amazon S3 store the object encrypted using the customer-provided encryption key.
• Makes a copy of the object by calling the TransferManager.copy method. In the CopyObjectRequest, this example requests that Amazon S3 store the object copy also encrypted using a customer-provided encryption key. Because the source object is encrypted using SSE-C, the CopyObjectRequest also provides the encryption key of the source object so that Amazon S3 can decrypt the object before it can copy.
import java.io.File;
import java.security.SecureRandom;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.transfer.Copy;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class ServerSideEncryptionCopyObjectUsingHLwithSSEC {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide existing bucket name ***";
        String fileToUpload       = "*** file path ***";
        String keyName            = "*** New object key ***";
        String targetKeyName      = "*** Key name for object copy ***";

        TransferManager tm = new TransferManager(new
                ProfileCredentialsProvider());

        // 1. First create an object from a file.
        PutObjectRequest putObjectRequest = new
                PutObjectRequest(existingBucketName, keyName, new File(fileToUpload));

        // We want the object stored using SSE-C, so we create an encryption key.
        SecretKey secretKey1 = generateSecretKey();
        SSECustomerKey sseCustomerEncryptionKey1 = new
                SSECustomerKey(secretKey1);

        putObjectRequest.setSSECustomerKey(sseCustomerEncryptionKey1);

        // Now create the object.
        Upload upload = tm.upload(putObjectRequest);
        try {
            // Block and wait for the upload to finish.
            upload.waitForCompletion();
            System.out.println("Object created.");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }

        // 2. Now make an object copy (in the same bucket). Store the target using SSE-C.
        CopyObjectRequest copyObjectRequest = new
                CopyObjectRequest(existingBucketName, keyName, existingBucketName,
                targetKeyName);

        SecretKey secretKey2 = generateSecretKey();
        SSECustomerKey sseTargetObjectEncryptionKey = new
                SSECustomerKey(secretKey2);

        copyObjectRequest.setSourceSSECustomerKey(sseCustomerEncryptionKey1);
        copyObjectRequest.setDestinationSSECustomerKey(sseTargetObjectEncryptionKey);

        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Copy copy = tm.copy(copyObjectRequest);
        try {
            // Block and wait for the copy to finish.
            copy.waitForCompletion();
            System.out.println("Copy complete.");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to copy object, copy was aborted.");
            amazonClientException.printStackTrace();
        }
    }

    private static SecretKey generateSecretKey() {
        KeyGenerator generator;
        try {
            generator = KeyGenerator.getInstance("AES");
            generator.init(256, new SecureRandom());
            return generator.generateKey();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
            return null;
        }
    }
}
Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the .NET SDK
The following C# code example illustrates server-side encryption with customer-provided keys (SSE-C) (see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) (p. 395)). The example performs the following operations; each operation shows how you specify SSE-C related headers in the request:
• Put object – Upload an object, requesting server-side encryption using customer-provided encryption keys.
• Get object – Download the object uploaded in the previous step. It shows that the request must provide the same encryption information for Amazon S3 to decrypt the object so that it can return it to you.
• Get object metadata – The request shows that the same encryption information that you specified when creating the object is required to retrieve the object metadata.
• Copy object – This example makes a copy of the previously uploaded object. Because the source object is stored using SSE-C, you must provide encryption information in your copy request. By default, the object copy will not be encrypted. But in this example, you request that Amazon S3 store the object copy encrypted using SSE-C, and therefore you provide encryption-related information for the target as well.
Note
When using the multipart upload API to upload large objects, you provide the same encryption information in your request, as shown in the following example. For multipart upload .NET SDK examples, see Using the AWS .NET SDK for Multipart Upload (High-Level API) (p. 181) and Using the AWS .NET SDK for Multipart Upload (Low-Level API) (p. 190).
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using System.IO;
using System.Security.Cryptography;
using Amazon.S3;
using Amazon.S3.Model;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace s3.amazon.com.docsamples
{
    class SSEClientEncryptionKeyObjectOperations
    {
        static string bucketName        = "*** bucket name ***";
        static string keyName           = "*** object key name for new object ***";
        static string copyTargetKeyName = "*** copy operation target object key name ***";

        static IAmazonS3 client;

        public static void Main(string[] args)
        {
            using (client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2))
            {
                try
                {
                    // Create encryption key.
                    Aes aesEncryption = Aes.Create();
                    aesEncryption.KeySize = 256;
                    aesEncryption.GenerateKey();
                    string base64Key = Convert.ToBase64String(aesEncryption.Key);

                    // 1. Upload object.
                    PutObjectRequest putObjectRequest = UploadObject(base64Key);
                    // 2. Download object (and also verify content is same as what you uploaded).
                    DownloadObject(base64Key, putObjectRequest);
                    // 3. Get object metadata (and also verify AES256 encryption).
                    GetObjectMetadata(base64Key);
                    // 4. Copy object (both source and target objects use server-side
                    //    encryption with customer-provided encryption keys).
                    CopyObject(aesEncryption, base64Key);
                }
                catch (AmazonS3Exception amazonS3Exception)
                {
                    if (amazonS3Exception.ErrorCode != null &&
                        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                    {
                        Console.WriteLine("Check the provided AWS Credentials.");
                        Console.WriteLine(
                            "For service sign up go to http://aws.amazon.com/s3");
                    }
                    else
                    {
                        Console.WriteLine(
                            "Error occurred. Message:'{0}' when writing an object",
                            amazonS3Exception.Message);
                    }
                }
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        private static void CopyObject(Aes aesEncryption, string base64Key)
        {
            aesEncryption.GenerateKey();
            string copyBase64Key = Convert.ToBase64String(aesEncryption.Key);

            CopyObjectRequest copyRequest = new CopyObjectRequest
            {
                SourceBucket = bucketName,
                SourceKey = keyName,
                DestinationBucket = bucketName,
                DestinationKey = copyTargetKeyName,
                // Source object encryption information.
                CopySourceServerSideEncryptionCustomerMethod =
                    ServerSideEncryptionCustomerMethod.AES256,
                CopySourceServerSideEncryptionCustomerProvidedKey = base64Key,
                // Target object encryption information.
                ServerSideEncryptionCustomerMethod =
                    ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = copyBase64Key
            };
            client.CopyObject(copyRequest);
        }

        private static void DownloadObject(string base64Key, PutObjectRequest putObjectRequest)
        {
            GetObjectRequest getObjectRequest = new GetObjectRequest
            {
                BucketName = bucketName,
                Key = keyName,
                // Provide encryption information of the object stored in S3.
                ServerSideEncryptionCustomerMethod =
                    ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = base64Key
            };

            using (GetObjectResponse getResponse = client.GetObject(getObjectRequest))
            using (StreamReader reader = new StreamReader(getResponse.ResponseStream))
            {
                string content = reader.ReadToEnd();
                Assert.AreEqual(putObjectRequest.ContentBody, content);
                Assert.AreEqual(ServerSideEncryptionCustomerMethod.AES256,
                    getResponse.ServerSideEncryptionCustomerMethod);
            }
        }

        private static void GetObjectMetadata(string base64Key)
        {
            GetObjectMetadataRequest getObjectMetadataRequest = new GetObjectMetadataRequest
            {
                BucketName = bucketName,
                Key = keyName,
                // The object stored in S3 is encrypted, so provide the necessary
                // encryption information.
                ServerSideEncryptionCustomerMethod =
                    ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = base64Key
            };

            GetObjectMetadataResponse getObjectMetadataResponse =
                client.GetObjectMetadata(getObjectMetadataRequest);
            Assert.AreEqual(ServerSideEncryptionCustomerMethod.AES256,
                getObjectMetadataResponse.ServerSideEncryptionCustomerMethod);
        }

        private static PutObjectRequest UploadObject(string base64Key)
        {
            PutObjectRequest putObjectRequest = new PutObjectRequest
            {
                BucketName = bucketName,
                Key = keyName,
                ContentBody = "sample text",
                ServerSideEncryptionCustomerMethod =
                    ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = base64Key
            };
            PutObjectResponse putObjectResponse = client.PutObject(putObjectRequest);
            return putObjectRequest;
        }
    }
}
Other Amazon S3 Operations and SSE-C
The example in the preceding section shows how to request server-side encryption with a customer-provided key (SSE-C) in the PUT, GET, HEAD, and Copy operations. This section describes other APIs that support SSE-C.
To upload large objects, you can use the multipart upload API (see Uploading Objects Using Multipart Upload API (p. 165)). You can use either high-level or low-level APIs to upload large objects. These APIs support encryption-related headers in the request:
• When using the high-level TransferUtility API, you provide the encryption-specific headers in the TransferUtilityUploadRequest, as shown. For code examples, see Using the AWS .NET SDK for Multipart Upload (High-Level API) (p. 181).
TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()
{
    FilePath = filePath,
    BucketName = existingBucketName,
    Key = keyName,
    // Provide encryption information.
    ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
    ServerSideEncryptionCustomerProvidedKey = base64Key,
};
• When using the low-level API, you provide encryption-related information in the initiate multipart upload request, followed by identical encryption information in the subsequent upload part requests. You do not need to provide any encryption-specific headers in your complete multipart upload request. For examples, see Using the AWS .NET SDK for Multipart Upload (Low-Level API) (p. 190).
The following is a low-level multipart upload example that makes a copy of an existing large object. In the example, the object to be copied is stored in Amazon S3 using SSE-C, and you want to save the target object also using SSE-C. In the example, you do the following:
• Initiate a multipart upload request by providing an encryption key and related information.
• Provide source and target object encryption keys and related information in the CopyPartRequest.
• Obtain the size of the source object to be copied by retrieving the object metadata.
• Upload the objects in 5 MB parts.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class SSECLowLevelMPUcopyObject
    {
        static string existingBucketName = "*** bucket name ***";
        static string sourceKeyName      = "*** key name ***";
        static string targetKeyName      = "*** key name ***";

        static void Main(string[] args)
        {
            IAmazonS3 s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

            List<CopyPartResponse> uploadResponses = new List<CopyPartResponse>();

            Aes aesEncryption = Aes.Create();
            aesEncryption.KeySize = 256;
            aesEncryption.GenerateKey();
            string base64Key = Convert.ToBase64String(aesEncryption.Key);

            // 1. Initialize.
            InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
            {
                BucketName = existingBucketName,
                Key = targetKeyName,
                ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = base64Key,
            };

            InitiateMultipartUploadResponse initResponse =
                s3Client.InitiateMultipartUpload(initiateRequest);

            // 2. Upload Parts.
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
            long firstByte = 0;
            long lastByte = partSize;

            try
            {
                // First find the source object size. Because the object is stored
                // encrypted with a customer-provided key, you need to provide
                // encryption information in your request.
                GetObjectMetadataRequest getObjectMetadataRequest = new GetObjectMetadataRequest()
                {
                    BucketName = existingBucketName,
                    Key = sourceKeyName,
                    ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
                    ServerSideEncryptionCustomerProvidedKey = "***source object encryption key ***"
                };

                GetObjectMetadataResponse getObjectMetadataResponse =
                    s3Client.GetObjectMetadata(getObjectMetadataRequest);

                long filePosition = 0;
                for (int i = 1; filePosition < getObjectMetadataResponse.ContentLength; i++)
                {
                    CopyPartRequest copyPartRequest = new CopyPartRequest
                    {
                        UploadId = initResponse.UploadId,
                        // Source.
                        SourceBucket = existingBucketName,
                        SourceKey = sourceKeyName,
                        // Source object is stored using SSE-C. Provide encryption information.
                        CopySourceServerSideEncryptionCustomerMethod =
                            ServerSideEncryptionCustomerMethod.AES256,
                        CopySourceServerSideEncryptionCustomerProvidedKey =
                            "***source object encryption key ***",
                        FirstByte = firstByte,
                        // If the last part is smaller than our normal part size,
                        // use the remaining size.
                        LastByte = lastByte >= getObjectMetadataResponse.ContentLength
                            ? getObjectMetadataResponse.ContentLength - 1
                            : lastByte,
                        // Target.
                        DestinationBucket = existingBucketName,
                        DestinationKey = targetKeyName,
                        PartNumber = i,
                        // Encryption information for the target object.
                        ServerSideEncryptionCustomerMethod =
                            ServerSideEncryptionCustomerMethod.AES256,
                        ServerSideEncryptionCustomerProvidedKey = base64Key
                    };
                    uploadResponses.Add(s3Client.CopyPart(copyPartRequest));
                    filePosition += partSize;
                    firstByte += partSize;
                    lastByte += partSize;
                }

                // Step 3: complete.
                CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
                {
                    BucketName = existingBucketName,
                    Key = targetKeyName,
                    UploadId = initResponse.UploadId,
                };
                completeRequest.AddPartETags(uploadResponses);
                CompleteMultipartUploadResponse completeUploadResponse =
                    s3Client.CompleteMultipartUpload(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("Exception occurred: {0}", exception.Message);
                AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
                {
                    BucketName = existingBucketName,
                    Key = targetKeyName,
                    UploadId = initResponse.UploadId
                };
                s3Client.AbortMultipartUpload(abortMPURequest);
            }
        }
    }
}
Specifying Server-Side Encryption with Customer-Provided Encryption Keys Using the REST API
The following Amazon S3 REST APIs support headers related to server-side encryption with customer-provided encryption keys. For more information about these headers, see Using SSE-C (p. 396).
• GET Object
• HEAD Object
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy
Protecting Data Using Client-Side Encryption
Client-side encryption refers to encrypting data before sending it to Amazon S3. You have the following two options for using data encryption keys:
• Use an AWS KMS-managed customer master key
• Use a client-side master key
Option 1: Using an AWS KMS–Managed Customer Master Key (CMK)
When using an AWS KMS-managed customer master key for client-side data encryption, you don't have to worry about providing any encryption keys to the Amazon S3 encryption client (for example, the AmazonS3EncryptionClient in the AWS SDK for Java). Instead, you provide only an AWS KMS customer master key ID (CMK ID), and the client does the rest. This is how it works:
• When uploading an object – Using the CMK ID, the client first sends a request to AWS KMS for a key that it can use to encrypt your object data. In response, AWS KMS returns a randomly generated data encryption key. In fact, AWS KMS returns two versions of the data encryption key:
  • A plaintext version that the client uses to encrypt the object data.
  • A cipher blob of the same data encryption key that the client uploads to Amazon S3 as object metadata.
Note
The client obtains a unique data encryption key for each object it uploads.
For a working example, see Example: Client-Side Encryption (Option 1: Using an AWS KMS–Managed Customer Master Key (AWS SDK for Java)) (p. 411).
• When downloading an object – The client first downloads the encrypted object from Amazon S3, along with the cipher blob version of the data encryption key stored as object metadata. The client then sends the cipher blob to AWS KMS to get the plaintext version of the key so that it can decrypt the object data.
For more information about AWS KMS, go to What is the AWS Key Management Service? in the AWS Key Management Service Developer Guide.
Option 2: Using a Client-Side Master Key
This section shows how to provide your client-side master key in the client-side data encryption process.
Important
Your client-side master keys and your unencrypted data are never sent to AWS; therefore, it is important that you safely manage your encryption keys. If you lose them, you won't be able to decrypt your data.
This is how it works:
• When uploading an object – You provide a client-side master key to the Amazon S3 encryption client (for example, AmazonS3EncryptionClient when using the AWS SDK for Java). The client uses this master key only to encrypt the data encryption key that it generates randomly. The process works like this:
  1. The Amazon S3 encryption client locally generates a one-time-use symmetric key (also known as a data encryption key or data key). It uses this data key to encrypt the data of a single S3 object (for each object, the client generates a separate data key).
  2. The client encrypts the data encryption key using the master key you provide. The client uploads the encrypted data key and its material description as part of the object metadata. The material description helps the client later determine which client-side master key to use for decryption (when you download the object, the client decrypts it).
  3. The client then uploads the encrypted data to Amazon S3 and also saves the encrypted data key as object metadata (x-amz-meta-x-amz-key) in Amazon S3 by default.
• When downloading an object – The client first downloads the encrypted object from Amazon S3 along with the metadata. Using the material description in the metadata, the client first determines which master key to use to decrypt the encrypted data key. Using that master key, the client decrypts the data key and uses it to decrypt the object.
The client-side master key you provide can be either a symmetric key or a public/private key pair. For examples, see Examples: Client-Side Encryption (Option 2: Using a Client-Side Master Key (AWS SDK for Java)) (p. 412).
For more information, see the Client-Side Data Encryption with the AWS SDK for Java and Amazon S3 article.
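Before the full examples, here is a hedged, minimal sketch (not one of this guide's numbered samples) of the upload side of this flow. It assumes a freshly generated 256-bit AES key as the client-side symmetric master key and uses placeholder bucket and key names; in practice you would load and safeguard your own master key.

import java.io.ByteArrayInputStream;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;

public class ClientSideMasterKeySketch {
    public static void main(String[] args) throws Exception {
        // Generate a symmetric client-side master key for illustration only.
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        SecretKey masterKey = generator.generateKey();

        // The encryption client envelope-encrypts each object: it generates a one-time
        // data key, encrypts the object data with it, and protects the data key with
        // the master key, storing the encrypted data key as object metadata.
        AmazonS3EncryptionClient encryptionClient = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                new StaticEncryptionMaterialsProvider(new EncryptionMaterials(masterKey)));

        byte[] plaintext = "Object data encrypted on the client side".getBytes();
        encryptionClient.putObject(new PutObjectRequest("***bucket name***",
                "***object key***", new ByteArrayInputStream(plaintext),
                new ObjectMetadata()));
    }
}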
The following AWS SDKs support client-side encryption:
• AWS SDK for Java
• AWS SDK for .NET
• AWS SDK for Ruby
Example: Client-Side Encryption (Option 1: Using an AWS KMS–Managed Customer Master Key (AWS SDK for Java))
The following Java code example uploads an object to Amazon S3. The example uses a KMS-managed customer master key (CMK) to encrypt data on the client side before uploading it to Amazon S3. You will need the CMK ID in the code.
For more information about how client-side encryption using a KMS-managed CMK works, see Option 1: Using an AWS KMS–Managed Customer Master Key (CMK) (p. 409).
For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564). You will need to update the code by providing your bucket name and a CMK ID.
import java.io.ByteArrayInputStream;
import java.util.Arrays;

import junit.framework.Assert;

import org.apache.commons.io.IOUtils;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.CryptoConfiguration;
import com.amazonaws.services.s3.model.KMSEncryptionMaterialsProvider;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class testKMSkeyUploadObject {

    private static AmazonS3EncryptionClient encryptionClient;

    public static void main(String[] args) throws Exception {
        String bucketName = "***bucket name***";
        String objectKey  = "ExampleKMSEncryptedObject";
        String kms_cmk_id = "***AWS KMS customer master key ID***";

        KMSEncryptionMaterialsProvider materialProvider =
                new KMSEncryptionMaterialsProvider(kms_cmk_id);

        encryptionClient = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(), materialProvider,
                new CryptoConfiguration().withKmsRegion(Regions.US_EAST_1))
            .withRegion(Region.getRegion(Regions.US_EAST_1));

        // Upload object using the encryption client.
        byte[] plaintext = "Hello World, S3 Client-side Encryption Using Asymmetric Master Key!"
                .getBytes();
        System.out.println("plaintext's length: " + plaintext.length);
        encryptionClient.putObject(new PutObjectRequest(bucketName, objectKey,
                new ByteArrayInputStream(plaintext), new ObjectMetadata()));

        // Download the object.
        S3Object downloadedObject = encryptionClient.getObject(bucketName, objectKey);
        byte[] decrypted = IOUtils.toByteArray(downloadedObject.getObjectContent());

        // Verify same data.
        Assert.assertTrue(Arrays.equals(plaintext, decrypted));
    }
}
Examples: Client-Side Encryption (Option 2: Using a Client-Side Master Key (AWS SDK for Java))
This section provides code examples of client-side encryption. As described in the overview (see Protecting Data Using Client-Side Encryption (p. 409)), the client-side master key you provide can be either a symmetric key or a public/private key pair. This section provides examples of both types of master keys: a symmetric master key (256-bit Advanced Encryption Standard (AES) secret key) and an asymmetric master key (1024-bit RSA key pair).
Topics
• Example 1: Encrypt and Upload a File Using a Client-Side Symmetric Master Key (p. 412)
• Example 2: Encrypt and Upload a File to Amazon S3 Using a Client-Side Asymmetric Master Key (p. 416)
Note
If you get a cipher encryption error message when you use the encryption API for the first time, your version of the JDK may have a Java Cryptography Extension (JCE) jurisdiction policy file that limits the maximum key length for encryption and decryption transformations to 128 bits. The AWS SDK requires a maximum key length of 256 bits. To check your maximum key length, use the getMaxAllowedKeyLength method of the javax.crypto.Cipher class. To remove the key length restriction, install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from the Java SE download page.
Example 1: Encrypt and Upload a File Using a Client-Side Symmetric Master Key
This section provides example code using the AWS SDK for Java to do the following:
• First, create a 256-bit AES symmetric master key and save it to a file.
• Upload an object to Amazon S3 using an S3 encryption client that first encrypts sample data on the client side. The example also downloads the object and verifies that the data is the same.
Example 1a: Creating a Symmetric Master Key
Run this code to first generate a 256-bit AES symmetric master key for encrypted uploads to Amazon S3. The example saves the master key to a file (secret.key) in a temp directory (on Windows, it is the c:\Users\\AppData\Local\Tmp folder).
For instructions on how to create and test a working sample, see Using the AWS SDK for Java (p. 563).
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

import org.junit.Assert;

public class GenerateSymmetricMasterKey {

    private static final String keyDir  = System.getProperty("java.io.tmpdir");
    private static final String keyName = "secret.key";

    public static void main(String[] args) throws Exception {

        // Generate symmetric 256-bit AES key.
        KeyGenerator symKeyGenerator = KeyGenerator.getInstance("AES");
        symKeyGenerator.init(256);
        SecretKey symKey = symKeyGenerator.generateKey();

        // Save key.
        saveSymmetricKey(keyDir, symKey);

        // Load key.
        SecretKey symKeyLoaded = loadSymmetricAESKey(keyDir, "AES");

        Assert.assertTrue(Arrays.equals(symKey.getEncoded(), symKeyLoaded.getEncoded()));
    }

    public static void saveSymmetricKey(String path, SecretKey secretKey)
            throws IOException {
        X509EncodedKeySpec x509EncodedKeySpec = new X509EncodedKeySpec(
                secretKey.getEncoded());
        FileOutputStream keyfos = new FileOutputStream(path + "/" + keyName);
        keyfos.write(x509EncodedKeySpec.getEncoded());
        keyfos.close();
    }

    public static SecretKey loadSymmetricAESKey(String path, String algorithm)
            throws IOException, NoSuchAlgorithmException,
            InvalidKeySpecException, InvalidKeyException {
        // Read key from file.
        File keyFile = new File(path + "/" + keyName);
        FileInputStream keyfis = new FileInputStream(keyFile);
        byte[] encodedPrivateKey = new byte[(int) keyFile.length()];
        keyfis.read(encodedPrivateKey);
        keyfis.close();

        // Generate secret key.
        return new SecretKeySpec(encodedPrivateKey, "AES");
    }
}
This code example is for demonstration purposes only. For production use, you should consult your security engineer on how to obtain or generate the client-side master key.
Example 1b: Uploading a File to Amazon S3 Using a Symmetric Key
Run this code to encrypt sample data using the symmetric master key created by the preceding code example. The example uses an S3 encryption client to encrypt the data on the client side and then upload it to Amazon S3.
For instructions on how to create and test a working sample, see Using the AWS SDK for Java (p. 563).
import java.io.ByteArrayInputStream;
import java.util.Arrays;
import java.util.Iterator;
import java.util.UUID;

import javax.crypto.SecretKey;

import org.apache.commons.io.IOUtils;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.junit.Assert;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;
import com.amazonaws.services.s3.model.VersionListing;

public class S3ClientSideEncryptionWithSymmetricMasterKey {
    private static final String masterKeyDir = System.getProperty("java.io.tmpdir");
    private static final String bucketName = UUID.randomUUID() + "-"
            + DateTimeFormat.forPattern("yyMMdd-hhmmss").print(new DateTime());
    private static final String objectKey = UUID.randomUUID().toString();

    public static void main(String[] args) throws Exception {
        SecretKey mySymmetricKey = GenerateSymmetricMasterKey
                .loadSymmetricAESKey(masterKeyDir, "AES");

        EncryptionMaterials encryptionMaterials = new EncryptionMaterials(
                mySymmetricKey);

        AmazonS3EncryptionClient encryptionClient = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                new StaticEncryptionMaterialsProvider(encryptionMaterials));

        // Create the bucket.
        encryptionClient.createBucket(bucketName);

        // Upload object using the encryption client.
        byte[] plaintext = "Hello World, S3 Client-side Encryption Using Asymmetric Master Key!"
                .getBytes();
        System.out.println("plaintext's length: " + plaintext.length);
        encryptionClient.putObject(new PutObjectRequest(bucketName, objectKey,
                new ByteArrayInputStream(plaintext), new ObjectMetadata()));

        // Download the object.
        S3Object downloadedObject = encryptionClient.getObject(bucketName, objectKey);
        byte[] decrypted = IOUtils.toByteArray(downloadedObject.getObjectContent());

        // Verify same data.
        Assert.assertTrue(Arrays.equals(plaintext, decrypted));
        deleteBucketAndAllContents(encryptionClient);
    }

    private static void deleteBucketAndAllContents(AmazonS3 client) {
        System.out.println("Deleting S3 bucket: " + bucketName);
        ObjectListing objectListing = client.listObjects(bucketName);

        while (true) {
            for (Iterator<?> iterator = objectListing.getObjectSummaries().iterator(); iterator.hasNext(); ) {
                S3ObjectSummary objectSummary = (S3ObjectSummary) iterator.next();
                client.deleteObject(bucketName, objectSummary.getKey());
            }

            if (objectListing.isTruncated()) {
                objectListing = client.listNextBatchOfObjects(objectListing);
            } else {
                break;
            }
        }

        VersionListing list = client.listVersions(
                new ListVersionsRequest().withBucketName(bucketName));
        for (Iterator<?> iterator = list.getVersionSummaries().iterator(); iterator.hasNext(); ) {
            S3VersionSummary s = (S3VersionSummary) iterator.next();
            client.deleteVersion(bucketName, s.getKey(), s.getVersionId());
        }
        client.deleteBucket(bucketName);
    }
}
Example 2: Encrypt and Upload a File to Amazon S3 Using a Client-Side Asymmetric Master Key
This section provides example code using the AWS SDK for Java to first create a 1024-bit RSA key pair. The example then uses that key pair as the client-side master key for the purpose of encrypting and uploading a file.
This is how it works:
• First, create a 1024-bit RSA key pair (asymmetric master key) and save it to a file.
• Upload an object to Amazon S3 using an S3 encryption client that encrypts sample data on the client side. The example also downloads the object and verifies that the data is the same.
Example 2a: Creating a 1024-bit RSA Key Pair
Run this code to first generate a 1024-bit key pair (asymmetric master key). The example saves the key pair to files (public.key and private.key) in a temp directory (on Windows, it is the c:\Users\\AppData\Local\Tmp folder).
For instructions on how to create and test a working sample, see Using the AWS SDK for Java (p. 563).
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;

public class GenerateAsymmetricMasterKey {
    private static final String keyDir = System.getProperty("java.io.tmpdir");
    private static final SecureRandom srand = new SecureRandom();

    public static void main(String[] args) throws Exception {
        // Generate RSA key pair of 1024 bits.
        KeyPair keypair = genKeyPair("RSA", 1024);
        // Save to file system.
        saveKeyPair(keyDir, keypair);
        // Load from file system.
        KeyPair loaded = loadKeyPair(keyDir, "RSA");
        // Sanity check.
        assertTrue(Arrays.equals(keypair.getPublic().getEncoded(),
                loaded.getPublic().getEncoded()));
        assertTrue(Arrays.equals(keypair.getPrivate().getEncoded(),
                loaded.getPrivate().getEncoded()));
    }

    public static KeyPair genKeyPair(String algorithm, int bitLength)
            throws NoSuchAlgorithmException {
        KeyPairGenerator keyGenerator = KeyPairGenerator.getInstance(algorithm);
        keyGenerator.initialize(bitLength, srand);
        return keyGenerator.generateKeyPair();
    }

    public static void saveKeyPair(String dir, KeyPair keyPair)
            throws IOException {
        PrivateKey privateKey = keyPair.getPrivate();
        PublicKey publicKey = keyPair.getPublic();

        X509EncodedKeySpec x509EncodedKeySpec = new X509EncodedKeySpec(
                publicKey.getEncoded());
        FileOutputStream fos = new FileOutputStream(dir + "/public.key");
        fos.write(x509EncodedKeySpec.getEncoded());
        fos.close();

        PKCS8EncodedKeySpec pkcs8EncodedKeySpec = new PKCS8EncodedKeySpec(
                privateKey.getEncoded());
        fos = new FileOutputStream(dir + "/private.key");
        fos.write(pkcs8EncodedKeySpec.getEncoded());
        fos.close();
    }

    public static KeyPair loadKeyPair(String path, String algorithm)
            throws IOException, NoSuchAlgorithmException,
            InvalidKeySpecException {
        // Read public key from file.
        File filePublicKey = new File(path + "/public.key");
        FileInputStream fis = new FileInputStream(filePublicKey);
        byte[] encodedPublicKey = new byte[(int) filePublicKey.length()];
        fis.read(encodedPublicKey);
        fis.close();

        // Read private key from file.
        File filePrivateKey = new File(path + "/private.key");
        fis = new FileInputStream(filePrivateKey);
        byte[] encodedPrivateKey = new byte[(int) filePrivateKey.length()];
        fis.read(encodedPrivateKey);
        fis.close();

        // Convert them into a KeyPair.
        KeyFactory keyFactory = KeyFactory.getInstance(algorithm);
        X509EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(encodedPublicKey);
        PublicKey publicKey = keyFactory.generatePublic(publicKeySpec);
        PKCS8EncodedKeySpec privateKeySpec = new PKCS8EncodedKeySpec(encodedPrivateKey);
        PrivateKey privateKey = keyFactory.generatePrivate(privateKeySpec);
        return new KeyPair(publicKey, privateKey);
    }
}
This code example is for demonstration purposes only. For production use, you should consult your security engineer on how to obtain or generate the client-side master key.
Example 2b: Uploading a File to Amazon S3 Using a Key Pair
Run this code to encrypt sample data using the RSA key pair created by the preceding code example. The example uses an S3 encryption client to encrypt the data on the client side and then upload it to Amazon S3.
For instructions on how to create and test a working sample, see Using the AWS SDK for Java (p. 563).
import java.io.ByteArrayInputStream;
import java.io.File;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;
import java.util.Iterator;
import java.util.UUID;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOUtils;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.junit.Assert;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;
import com.amazonaws.services.s3.model.VersionListing;

public class S3ClientSideEncryptionAsymmetricMasterKey {
    private static final String keyDir = System.getProperty("java.io.tmpdir");
    private static final String bucketName = UUID.randomUUID() + "-"
            + DateTimeFormat.forPattern("yyMMdd-hhmmss").print(new DateTime());
    private static final String objectKey = UUID.randomUUID().toString();

    public static void main(String[] args) throws Exception {

        // 1. Load keys from files.
        byte[] bytes = FileUtils.readFileToByteArray(new File(
                keyDir + "/private.key"));
        KeyFactory kf = KeyFactory.getInstance("RSA");
        PKCS8EncodedKeySpec ks = new PKCS8EncodedKeySpec(bytes);
        PrivateKey pk = kf.generatePrivate(ks);

        bytes = FileUtils.readFileToByteArray(new File(keyDir + "/public.key"));
        PublicKey publicKey = KeyFactory.getInstance("RSA").generatePublic(
                new X509EncodedKeySpec(bytes));
        KeyPair loadedKeyPair = new KeyPair(publicKey, pk);

        // 2. Construct an instance of AmazonS3EncryptionClient.
        EncryptionMaterials encryptionMaterials = new EncryptionMaterials(
                loadedKeyPair);
        AmazonS3EncryptionClient encryptionClient = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                new StaticEncryptionMaterialsProvider(encryptionMaterials));
        // Create the bucket.
        encryptionClient.createBucket(bucketName);

        // 3. Upload the object.
        byte[] plaintext = "Hello World, S3 Client-side Encryption Using Asymmetric Master Key!"
                .getBytes();
        System.out.println("plaintext's length: " + plaintext.length);
        encryptionClient.putObject(new PutObjectRequest(bucketName, objectKey,
                new ByteArrayInputStream(plaintext), new ObjectMetadata()));

        // 4. Download the object.
        S3Object downloadedObject = encryptionClient.getObject(bucketName, objectKey);
        byte[] decrypted = IOUtils.toByteArray(downloadedObject.getObjectContent());
        Assert.assertTrue(Arrays.equals(plaintext, decrypted));

        deleteBucketAndAllContents(encryptionClient);
    }

    private static void deleteBucketAndAllContents(AmazonS3 client) {
        System.out.println("Deleting S3 bucket: " + bucketName);
        ObjectListing objectListing = client.listObjects(bucketName);

        while (true) {
            for (Iterator<?> iterator = objectListing.getObjectSummaries().iterator(); iterator.hasNext(); ) {
                S3ObjectSummary objectSummary = (S3ObjectSummary) iterator.next();
                client.deleteObject(bucketName, objectSummary.getKey());
            }

            if (objectListing.isTruncated()) {
                objectListing = client.listNextBatchOfObjects(objectListing);
            } else {
                break;
            }
        }

        VersionListing list = client.listVersions(
                new ListVersionsRequest().withBucketName(bucketName));
        for (Iterator<?> iterator = list.getVersionSummaries().iterator(); iterator.hasNext(); ) {
            S3VersionSummary s = (S3VersionSummary) iterator.next();
            client.deleteVersion(bucketName, s.getKey(), s.getVersionId());
        }
        client.deleteBucket(bucketName);
    }
}
Using Reduced Redundancy Storage
Topics
• Setting the Storage Class of an Object You Upload (p. 421)
• Changing the Storage Class of an Object in Amazon S3 (p. 421)
Amazon S3 stores objects according to their storage class. It assigns the storage class to an object when it is written to Amazon S3. You can assign objects a specific storage class (standard or reduced redundancy) only when you write the objects to an Amazon S3 bucket or when you copy objects that are already stored in Amazon S3. Standard is the default storage class. For information about storage classes, see Object Key and Metadata (p. 99).
In order to reduce storage costs, you can use reduced redundancy storage for noncritical, reproducible data at lower levels of redundancy than Amazon S3 provides with standard storage. The lower level of redundancy results in less durability and availability, but in many cases, the lower costs can make reduced redundancy storage an acceptable storage solution. For example, it can be a cost-effective solution for sharing media content that is durably stored elsewhere. It can also make sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.
Reduced redundancy storage is designed to provide 99.99% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects. For example, if you store 10,000 objects using the RRS option, you can, on average, expect to incur an annual loss of a single object per year (0.01% of 10,000 objects).
Note
This annual loss represents an expected average and does not guarantee the loss of less than 0.01% of objects in a given year.
Reduced redundancy storage stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but it does not replicate objects as many times as Amazon S3 standard storage. In addition, reduced redundancy storage is designed to sustain the loss of data in a single facility.
If an object in reduced redundancy storage has been lost, Amazon S3 will return a 405 error on requests made to that object. Amazon S3 also offers notifications for reduced redundancy storage object loss: you can configure your bucket so that when Amazon S3 detects the loss of an RRS object, a notification will be sent through Amazon Simple Notification Service (Amazon SNS). You can then replace the lost object. To enable notifications, you can use the Amazon S3 console to set the Notifications property of your bucket.
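You can also configure the notification programmatically. The following is a minimal sketch using the AWS SDK for Java, assuming you have already created an SNS topic and granted Amazon S3 permission to publish to it; the bucket name, configuration name, and topic ARN shown here are placeholders, not values from this guide.

import java.util.EnumSet;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;
import com.amazonaws.services.s3.model.S3Event;
import com.amazonaws.services.s3.model.TopicConfiguration;

public class EnableRRSLostObjectNotification {
    public static void main(String[] args) {
        String bucketName  = "***bucket name***";                                   // placeholder
        String snsTopicArn = "arn:aws:sns:us-east-1:123456789012:rrs-loss-topic";   // hypothetical topic ARN

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Publish to the SNS topic whenever Amazon S3 reports the loss of an RRS object.
        BucketNotificationConfiguration notificationConfiguration =
                new BucketNotificationConfiguration();
        notificationConfiguration.addConfiguration("rrsObjectLossNotification",
                new TopicConfiguration(snsTopicArn,
                        EnumSet.of(S3Event.ReducedRedundancyLostObject)));

        s3Client.setBucketNotificationConfiguration(bucketName, notificationConfiguration);
    }
}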
Latency and throughput for reduced redundancy storage are the same as for standard storage. For more information about cost considerations, see Amazon S3 Pricing.
Setting the Storage Class of an Object You Upload
To set the storage class of an object you upload to RRS, you set x-amz-storage-class to REDUCED_REDUNDANCY in a PUT request.
How to Set the Storage Class of an Object You're Uploading to RRS
• Create a PUT Object request setting the x-amz-storage-class request header to REDUCED_REDUNDANCY.
You must have the correct permissions on the bucket to perform the PUT operation. The default value for the storage class is STANDARD (for regular Amazon S3 storage).
The following example sets the storage class of my-image.jpg to RRS.
PUT /my-image.jpg HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE=
Content-Type: image/jpeg
Content-Length: 11434
Expect: 100-continue
x-amz-storage-class: REDUCED_REDUNDANCY
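If you upload through the AWS SDK for Java instead of constructing the PUT request yourself, you can set the storage class on the request object and the SDK adds the x-amz-storage-class header for you. The following is a minimal sketch; the bucket name, key name, and local file name are placeholders.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

public class UploadObjectRRS {
    public static void main(String[] args) {
        String bucketName = "***bucket name***";   // placeholder
        String keyName    = "my-image.jpg";        // placeholder

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Specify the RRS storage class on the upload request.
        PutObjectRequest request = new PutObjectRequest(
                bucketName, keyName, new File("my-image.jpg"))
            .withStorageClass(StorageClass.ReducedRedundancy);

        s3Client.putObject(request);
    }
}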
Changing the Storage Class of an Object in Amazon S3
Topics
• Return Code for Lost Data (p. 423)
You can also change the storage class of an object that is already stored in Amazon S3 by copying it to the same key name in the same bucket. To do that, you use the following request headers in a PUT Object copy request:
• x-amz-metadata-directive set to COPY
• x-amz-storage-class set to STANDARD, STANDARD_IA, or REDUCED_REDUNDANCY
Important
To optimize the execution of the copy request, do not change any of the other metadata in the PUT Object copy request. If you need to change metadata other than the storage class, set x-amz-metadata-directive to REPLACE for better performance.
How to Rewrite the Storage Class of an Object in Amazon S3
• Create a PUT Object copy request and set the x-amz-storage-class request header to REDUCED_REDUNDANCY (for RRS), STANDARD (for regular Amazon S3 storage), or STANDARD_IA (for Standard-Infrequent Access), and make the target name the same as the source name.
You must have the correct permissions on the bucket to perform the copy operation.
The following example sets the storage class of my-image.jpg to RRS.
PUT /my-image.jpg HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
x-amz-copy-source: /bucket/my-image.jpg
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
x-amz-storage-class: REDUCED_REDUNDANCY
x-amz-metadata-directive: COPY

The following example sets the storage class of my-image.jpg to Standard.

PUT /my-image.jpg HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
x-amz-copy-source: /bucket/my-image.jpg
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
x-amz-storage-class: STANDARD
x-amz-metadata-directive: COPY

The following example sets the storage class of my-image.jpg to Standard-Infrequent Access.

PUT /my-image.jpg HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Sat, 30 Apr 2016 23:29:37 GMT
x-amz-copy-source: /bucket/my-image.jpg
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
x-amz-storage-class: STANDARD_IA
x-amz-metadata-directive: COPY
Note
If you copy an object and fail to include the x-amz-storage-class request header, the storage class of the target object defaults to STANDARD.
It is not possible to change the storage class of a specific version of an object. When you copy it, Amazon S3 gives it a new version ID.
Note
When an object is written in a copy request, the entire object is rewritten in order to apply the new storage class.
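If you use the AWS SDK for Java rather than issuing the copy request directly, the same in-place copy can be expressed with a CopyObjectRequest whose source and destination are the same object. The following is a minimal sketch; the bucket and key names are placeholders, and the example switches the object to Standard-Infrequent Access.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

public class ChangeObjectStorageClass {
    public static void main(String[] args) {
        String bucketName = "***bucket name***";  // placeholder
        String keyName    = "my-image.jpg";       // placeholder

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Copy the object onto itself, changing only the storage class.
        CopyObjectRequest request = new CopyObjectRequest(
                bucketName, keyName,    // source
                bucketName, keyName)    // destination (same key)
            .withStorageClass(StorageClass.StandardInfrequentAccess);

        s3Client.copyObject(request);
    }
}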
For more information about versioning, see Using Versioning (p. 423).
Return Code for Lost Data
If Amazon S3 detects that an object has been lost, any subsequent GET or HEAD operations, or PUT Object copy operation that uses the lost object as the source object, will result in a 405 Method Not Allowed error. Once an object is marked lost, Amazon S3 will never be able to recover the object. In this situation, you can either delete the key or upload a copy of the object.
Using Versioning
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
In one bucket, for example, you can have two objects with the same key but different version IDs, such as photo.gif (version 111111) and photo.gif (version 121212).
Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. For example:
• If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can always restore the previous version. For more information, see Deleting Object Versions (p. 437).
• If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version.
Important
If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.
Buckets can be in one of three states: unversioned (the default), versioning-enabled, or versioning-suspended.
Important
Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket.
The versioning state applies to all (never some) of the objects in that bucket. The first time you enable a bucket for versioning, objects in it are thereafter always versioned and given a unique version ID. Note the following:
• Objects stored in your bucket before you set the versioning state have a version ID of null. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests. For more information, see Managing Objects in a Versioning-Enabled Bucket (p. 428).
• The bucket owner (or any user with appropriate permissions) can suspend versioning to stop accruing object versions. When you suspend versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles objects in future requests. For more information, see Managing Objects in a Versioning-Suspended Bucket (p. 444).
How to Configure Versioning on a Bucket
You can configure bucket versioning using any of the following methods:
• Configure versioning using the Amazon S3 console.
• Configure versioning programmatically using the AWS SDKs.
Both the console and the SDKs call the REST API Amazon S3 provides to manage versioning.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests.
Each bucket you create has a versioning subresource (see Bucket Configuration Options (p. 61)) associated with it. By default, your bucket is unversioned, and accordingly the versioning subresource stores an empty versioning configuration.

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>

To enable versioning, you send a request to Amazon S3 with a versioning configuration that includes a status.

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</VersioningConfiguration>

To suspend versioning, you set the status value to Suspended.
The bucket owner, an AWS account that created the bucket (root account), and authorized users can configure the versioning state of a bucket. For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
For an example of configuring versioning, see Examples of Enabling Bucket Versioning (p. 426).
MFA Delete
You can optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations:
• Change the versioning state of your bucket
• Permanently delete an object version
MFA Delete requires two forms of authentication together:
• Your security credentials
• The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device
MFA Delete thus provides added security in the event, for example, your security credentials are compromised.
To enable or disable MFA delete, you use the same API that you use to configure versioning on a bucket. Amazon S3 stores the MFA Delete configuration in the same versioning subresource that stores the bucket's versioning status.

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>VersioningState</Status>
  <MfaDelete>MfaDeleteState</MfaDelete>
</VersioningConfiguration>

To use MFA Delete, you can use either a hardware or virtual MFA device to generate an authentication code (for example, a six-digit code displayed on a hardware device).
Note
MFA Delete and MFA-protected API access are features intended to provide protection for different scenarios. You configure MFA Delete on a bucket to ensure that data in your bucket cannot be accidentally deleted. MFA-protected API access is used to enforce another authentication factor (MFA code) when accessing sensitive Amazon S3 resources. You can require any operations against these Amazon S3 resources be done with temporary credentials created using MFA. For an example, see Adding a Bucket Policy to Require MFA Authentication (p. 339).
For more information on how to purchase and activate an authentication device, see http://aws.amazon.com/iam/details/mfa/.
Note
The bucket owner, the AWS account that created the bucket (root account), and all authorized IAM users can enable versioning, but only the bucket owner (root account) can enable MFA delete.
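Because MFA Delete is part of the same versioning configuration, the SDKs can set it alongside the versioning status. The following is a hedged sketch using the AWS SDK for Java; the bucket name, device serial number, and authentication code are placeholders, and the call must be made with the bucket owner's (root) credentials as noted above.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.MultiFactorAuthentication;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class EnableMfaDelete {
    public static void main(String[] args) {
        String bucketName = "***bucket name***";   // placeholder

        // Serial number of your MFA device and the code it currently displays (placeholders).
        MultiFactorAuthentication mfa = new MultiFactorAuthentication(
                "***serial number***", "***authentication code***");

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Enable versioning and MFA Delete in a single versioning configuration request.
        BucketVersioningConfiguration configuration = new BucketVersioningConfiguration()
                .withStatus(BucketVersioningConfiguration.ENABLED)
                .withMfaDeleteEnabled(true);

        s3Client.setBucketVersioningConfiguration(
                new SetBucketVersioningConfigurationRequest(bucketName, configuration, mfa));
    }
}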
Related Topics
For more information, see the following topics:
Examples of Enabling Bucket Versioning (p. 426)
Managing Objects in a Versioning-Enabled Bucket (p. 428)
Managing Objects in a Versioning-Suspended Bucket (p. 444)
Examples of Enabling Bucket Versioning
Topics
• Using the Amazon S3 Console (p. 426)
• Using the AWS SDK for Java (p. 426)
• Using the AWS SDK for .NET (p. 427)
• Using Other AWS SDKs (p. 428)
This section provides examples of enabling versioning on a bucket. The examples first enable versioning on a bucket and then retrieve its versioning status. For an introduction, see Using Versioning (p. 423).
Using the Amazon S3 Console
For more information about enabling versioning on a bucket using the Amazon S3 console, see Enable Versioning in the Amazon Simple Storage Service Console User Guide.
Using the AWS SDK for Java
For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class BucketVersioningConfigurationExample {
    public static String bucketName = "*** bucket name ***";
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {
        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_EAST_1));
        try {
            // 1. Enable versioning on the bucket.
            BucketVersioningConfiguration configuration =
                    new BucketVersioningConfiguration().withStatus("Enabled");

            SetBucketVersioningConfigurationRequest setBucketVersioningConfigurationRequest =
                    new SetBucketVersioningConfigurationRequest(bucketName, configuration);

            s3Client.setBucketVersioningConfiguration(setBucketVersioningConfigurationRequest);

            // 2. Get bucket versioning configuration information.
            BucketVersioningConfiguration conf =
                    s3Client.getBucketVersioningConfiguration(bucketName);
            System.out.println("bucket versioning configuration status: " + conf.getStatus());
        } catch (AmazonS3Exception amazonS3Exception) {
            System.out.format("An Amazon S3 error occurred. Exception: %s",
                    amazonS3Exception.toString());
        } catch (Exception ex) {
            System.out.format("Exception: %s", ex.toString());
        }
    }
}
Using the AWS SDK for .NET
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class BucketVersioningConfiguration
    {
        static string bucketName = "*** bucket name ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                try
                {
                    EnableVersioningOnBucket(client);
                    string bucketVersioningStatus =
                        RetrieveBucketVersioningConfiguration(client);
                }
                catch (AmazonS3Exception amazonS3Exception)
                {
                    if (amazonS3Exception.ErrorCode != null &&
                        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                        ||
                        amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                    {
                        Console.WriteLine("Check the provided AWS Credentials.");
                        Console.WriteLine(
                            "To sign up for service, go to http://aws.amazon.com/s3");
                    }
                    else
                    {
                        Console.WriteLine(
                            "Error occurred. Message:'{0}' when listing objects",
                            amazonS3Exception.Message);
                    }
                }
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableVersioningOnBucket(IAmazonS3 client)
        {
            PutBucketVersioningRequest request = new PutBucketVersioningRequest
            {
                BucketName = bucketName,
                VersioningConfig = new S3BucketVersioningConfig
                {
                    Status = VersionStatus.Enabled
                }
            };

            PutBucketVersioningResponse response = client.PutBucketVersioning(request);
        }

        static string RetrieveBucketVersioningConfiguration(IAmazonS3 client)
        {
            GetBucketVersioningRequest request = new GetBucketVersioningRequest
            {
                BucketName = bucketName
            };

            GetBucketVersioningResponse response = client.GetBucketVersioning(request);
            return response.VersioningConfig.Status;
        }
    }
}
Using Other AWS SDKs
For information about using other AWS SDKs, see Sample Code and Libraries.
Managing Objects in a Versioning-Enabled Bucket
Topics
• Adding Objects to Versioning-Enabled Buckets (p. 429)
• Listing Objects in a Versioning-Enabled Bucket (p. 430)
• Retrieving Object Versions (p. 435)
• Deleting Object Versions (p. 437)
• Transitioning Object Versions (p. 442)
• Restoring Previous Versions (p. 442)
• Versioned Object Permissions (p. 443)
Objects stored in your bucket before you set the versioning state have a version ID of null. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests. The topics in this section explain various object operations in a versioning-enabled bucket.
Adding Objects to Versioning-Enabled Buckets
Topics
• Using the Console (p. 429)
• Using the AWS SDKs (p. 429)
• Using the REST API (p. 429)
Once you enable versioning on a bucket, Amazon S3 automatically adds a unique version ID to every object stored (using PUT, POST, or COPY) in the bucket.
Using the Console
For instructions, see Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.
Using the AWS SDKs
For examples of uploading objects using the AWS SDKs for Java, .NET, and PHP, see Uploading Objects (p. 157). The examples for uploading objects in nonversioned and versioning-enabled buckets are the same, although in the case of versioning-enabled buckets, Amazon S3 assigns a version number. Otherwise, the version number is null.
For information about using other AWS SDKs, see Sample Code and Libraries.
Using the REST API
Adding Objects to Versioning-Enabled Buckets
1. Enable versioning on a bucket using a PUT Bucket versioning request. For more information, see PUT Bucket versioning.
2. Send a PUT, POST, or COPY request to store an object in the bucket.
When you add an object to a versioning-enabled bucket, Amazon S3 returns the version ID of the object in the x-amz-version-id response header, for example:

x-amz-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY

Note
Normal Amazon S3 rates apply for every version of an object stored and transferred. Each version of an object is the entire object; it is not just a diff from the previous version. Thus, if you have three versions of an object stored, you are charged for three objects.
Note
The version ID values that Amazon S3 assigns are URL safe (can be included as part of a URI).
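If you upload through the AWS SDK for Java rather than the REST API directly, the same version ID is available on the result object returned by putObject. The following is a minimal sketch; the bucket name, key name, and local file name are placeholders.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

public class PutObjectVersionIdExample {
    public static void main(String[] args) {
        String bucketName = "***versioning-enabled bucket name***";  // placeholder
        String keyName    = "photo.gif";                             // placeholder

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // The SDK surfaces the x-amz-version-id response header on the result.
        PutObjectResult result = s3Client.putObject(
                new PutObjectRequest(bucketName, keyName, new File("photo.gif")));
        System.out.println("Version ID: " + result.getVersionId());
    }
}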
Listing Objects in a Versioning-Enabled Bucket
Topics
• Using the Console (p. 430)
• Using the AWS SDKs (p. 430)
• Using the REST API (p. 433)
This section provides an example of listing object versions from a versioning-enabled bucket. Amazon S3 stores object version information in the versions subresource (see Bucket Configuration Options (p. 61)) associated with the bucket.
Using the Console
If your bucket is versioning-enabled, the console provides buttons for you to optionally show or hide object versions. If you hide object versions, the console shows only the list of the latest object versions.
Using the AWS SDKs
The code examples in this section retrieve an object listing from a version-enabled bucket. Each request returns up to 1,000 versions. If you have more, you will need to send a series of requests to retrieve a list of all versions. To illustrate how pagination works, the code examples limit the response to two object versions. If there are more than two object versions in the bucket, the response returns the IsTruncated element with the value true and also includes the NextKeyMarker and NextVersionIdMarker elements, whose values you can use to retrieve the next set of object keys. The code example includes these values in the subsequent request to retrieve the next set of objects.
For information about using other AWS SDKs, see Sample Code and Libraries.
Using the AWS SDK for Java
For information about how to create and test a working sample, see Testing the Java Code Examples (p. 564).
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class ListKeysVersionEnabledBucket {
    private static String bucketName = "*** bucket name ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Listing objects");

            ListVersionsRequest request = new ListVersionsRequest()
                .withBucketName(bucketName)
                .withMaxResults(2);
            // You can specify .withPrefix to obtain the version list for a
            // specific object or objects with the specified key prefix.

            VersionListing versionListing;
            do {
                versionListing = s3client.listVersions(request);
                for (S3VersionSummary objectSummary :
                        versionListing.getVersionSummaries()) {
                    System.out.println(" - " + objectSummary.getKey() + "  "
                            + "(size = " + objectSummary.getSize() + ") "
                            + "(versionID = " + objectSummary.getVersionId() + ")");
                }
                request.setKeyMarker(versionListing.getNextKeyMarker());
                request.setVersionIdMarker(versionListing.getNextVersionIdMarker());
            } while (versionListing.isTruncated());
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, "
                    + "which means your request made it "
                    + "to Amazon S3, but was rejected with an error response "
                    + "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, "
                    + "which means the client encountered "
                    + "an internal error while trying to communicate "
                    + "with S3, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Using the AWS SDK for .NET
For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class ListObjectsVersioningEnabledBucket
    {
        static string bucketName = "*** bucket name ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                Console.WriteLine("Listing objects stored in a bucket");
                GetObjectListWithAllVersions(client);
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void GetObjectListWithAllVersions(IAmazonS3 client)
        {
            try
            {
                ListVersionsRequest request = new ListVersionsRequest()
                {
                    BucketName = bucketName,
                    // You can optionally specify a key name prefix in the request
                    // if you want a list of object versions of a specific object.

                    // For this example we limit the response to return a list of 2 versions.
                    MaxKeys = 2
                };

                do
                {
                    ListVersionsResponse response = client.ListVersions(request);
                    // Process response.
                    foreach (S3ObjectVersion entry in response.Versions)
                    {
                        Console.WriteLine("key = {0} size = {1}",
                            entry.Key, entry.Size);
                    }

                    // If the response is truncated, set the markers to get the next
                    // set of keys.
                    if (response.IsTruncated)
                    {
                        request.KeyMarker = response.NextKeyMarker;
                        request.VersionIdMarker = response.NextVersionIdMarker;
                    }
                    else
                    {
                        request = null;
                    }
                } while (request != null);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when listing objects",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Using the REST API
To list all of the versions of all of the objects in a bucket, you use the versions subresource in a GET Bucket request. Amazon S3 can retrieve only a maximum of 1,000 objects, and each object version counts fully as an object. Therefore, if a bucket contains two keys (e.g., photo.gif and picture.jpg), and the first key has 990 versions and the second key has 400 versions, a single request would retrieve all 990 versions of photo.gif and only the most recent 10 versions of picture.jpg.
Amazon S3 returns object versions in the order in which they were stored, with the most recently stored returned first.
To list all object versions in a bucket
• In a GET Bucket request, include the versions subresource.

GET /?versions HTTP/1.1
Host: bucketName.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=

Retrieving a Subset of Objects in a Bucket
This section discusses the following two example scenarios:
• You want to retrieve a subset of all object versions in a bucket, for example, retrieve all versions of a specific object.
• The number of object versions in the response exceeds the value for max-keys (1000 by default), so that you have to submit a second request to retrieve the remaining object versions.
To retrieve a subset of object versions, you use the request parameters for GET Bucket. For more information, see GET Bucket.
Example 1: Retrieving All Versions of Only a Specific Object
You can retrieve all versions of an object using the versions subresource and the prefix request parameter using the following process. For more information about prefix, see GET Bucket.
Retrieving All Versions of a Key
1. Set the prefix parameter to the key of the object you want to retrieve.
2. Send a GET Bucket request using the versions subresource and prefix.

GET /?versions&prefix=objectName HTTP/1.1

Example: Retrieving Objects Using a Prefix
The following example retrieves objects whose key is or begins with myObject.

GET /?versions&prefix=myObject HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=

You can use the other request parameters to retrieve a subset of all versions of the object. For more information, see GET Bucket.
Example 2: Retrieving a Listing of Additional Objects if the Response Is Truncated
If the number of objects that could be returned in a GET request exceeds the value of max-keys, the response contains <isTruncated>true</isTruncated> and includes the first key (in NextKeyMarker) and the first version ID (in NextVersionIdMarker) that satisfy the request but were not returned. You use those returned values as the starting position in a subsequent request to retrieve the additional objects that satisfy the GET request.
Use the following process to retrieve additional objects that satisfy the original GET Bucket versions request from a bucket. For more information about key-marker, version-id-marker, NextKeyMarker, and NextVersionIdMarker, see GET Bucket.
Retrieving Additional Responses that Satisfy the Original GET Request
1. Set the value of key-marker to the key returned in NextKeyMarker in the previous response.
2. Set the value of version-id-marker to the version ID returned in NextVersionIdMarker in the previous response.
3. Send a GET Bucket versions request using key-marker and version-id-marker.
Example: Retrieving Objects Starting with a Specified Key and Version ID

GET /?versions&key-marker=myObject&version-id-marker=298459348571 HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=

Retrieving Object Versions
A simple GET request retrieves the current version of an object. The following figure shows how GET returns the current version of the object photo.gif.
To retrieve a specific version, you have to specify its version ID. The following figure shows that a GET versionId request retrieves the specified version of the object (not necessarily the current one).
Using the Console
For instructions, see Downloading an Object in the Amazon Simple Storage Service Console User Guide. You will need to click the Show button in the console to list all object versions.
Using the AWS SDKs
For examples of downloading objects using the AWS SDKs for Java, .NET, and PHP, see Getting Objects (p. 143). The examples for downloading objects in nonversioned and versioning-enabled buckets are the same, although in the case of versioning-enabled buckets, Amazon S3 assigns a version number. Otherwise, the version number is null.
For information about using other AWS SDKs, see Sample Code and Libraries.
Using REST
To retrieve a specific object version
1. Set versionId to the ID of the version of the object you want to retrieve.
2. Send a GET Object versionId request.

Example: Retrieving a Versioned Object
The following request retrieves version L4kqtJlcpXroDTDmpUMLUo of my-image.jpg.

GET /my-image.jpg?versionId=L4kqtJlcpXroDTDmpUMLUo HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
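With the AWS SDK for Java, the same request can be expressed by passing the version ID to a GetObjectRequest; omitting the version ID returns the current version. The following is a minimal sketch with placeholder bucket, key, and version ID values.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class GetObjectVersionExample {
    public static void main(String[] args) {
        String bucketName = "***bucket name***";   // placeholder
        String keyName    = "my-image.jpg";        // placeholder
        String versionId  = "***version ID***";    // placeholder

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Request a specific version of the object rather than the current one.
        S3Object object = s3Client.getObject(
                new GetObjectRequest(bucketName, keyName, versionId));
        System.out.println("Retrieved version: "
                + object.getObjectMetadata().getVersionId());
    }
}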
Related Topics
Retrieving the Metadata of an Object Version (p. 436)
Retrieving the Metadata of an Object Version
If you only want to retrieve the metadata of an object (and not its content), you use the HEAD operation. By default, you get the metadata of the most recent version. To retrieve the metadata of a specific object version, you specify its version ID.
To retrieve the metadata of an object version
1. Set versionId to the ID of the version of the object whose metadata you want to retrieve.
2. Send a HEAD Object versionId request.

Example: Retrieving the Metadata of a Versioned Object
The following request retrieves the metadata of version 3HL4kqCxf3vjVBH40Nrjfkd of my-image.jpg.

HEAD /my-image.jpg?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=

The following shows a sample response.

HTTP/1.1 200 OK
x-amz-id-2: ef8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC143432E5
x-amz-version-id: 3HL4kqtJlcpXroDTDmjVBH40Nrjfkd
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: AmazonS3
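In the AWS SDK for Java, the HEAD operation corresponds to getObjectMetadata; passing a version ID returns the metadata of that specific version. The following is a minimal sketch with placeholder bucket, key, and version ID values.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class HeadObjectVersionExample {
    public static void main(String[] args) {
        String bucketName = "***bucket name***";   // placeholder
        String keyName    = "my-image.jpg";        // placeholder
        String versionId  = "***version ID***";    // placeholder

        AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // getObjectMetadata sends a HEAD request; the version ID selects the
        // specific version instead of the current one.
        ObjectMetadata metadata = s3Client.getObjectMetadata(
                new GetObjectMetadataRequest(bucketName, keyName, versionId));
        System.out.println("Content-Length: " + metadata.getContentLength());
        System.out.println("ETag: " + metadata.getETag());
    }
}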
Deleting Object Versions
You can delete object versions whenever you want. In addition, you can also define lifecycle configuration rules for objects that have a well-defined lifecycle to request Amazon S3 to expire current object versions or permanently remove noncurrent object versions. When your bucket is version-enabled or versioning is suspended, the lifecycle configuration actions work as follows:
• The Expiration action applies to the current object version, and instead of deleting the current object version, Amazon S3 retains the current version as a noncurrent version by adding a delete marker, which then becomes the current version.
• The NoncurrentVersionExpiration action applies to noncurrent object versions, and Amazon S3 permanently removes these object versions. You cannot recover permanently removed objects.
For more information, see Object Lifecycle Management (p. 109).
A DELETE request has the following use cases:
• When versioning is enabled, a simple DELETE cannot permanently delete an object.
Instead, Amazon S3 inserts a delete marker in the bucket, and that marker becomes the current version of the object with a new ID. When you try to GET an object whose current version is a delete marker, Amazon S3 behaves as though the object has been deleted (even though it has not been erased) and returns a 404 error.
The following figure shows that a simple DELETE does not actually remove the specified object. Instead, Amazon S3 inserts a delete marker.
• To permanently delete versioned objects, you must use DELETE Object versionId.
The following figure shows that deleting a specified object version permanently removes that object.
Using the Console
For instructions, see Deleting an Object in the Amazon Simple Storage Service Console User Guide. You will need to click the Show button in the console to list all object versions.
Using the AWS SDKs
For examples of deleting objects using the AWS SDKs for Java, .NET, and PHP, see Deleting Objects (p. 237). The examples for deleting objects in nonversioned and versioning-enabled buckets are the same, although in the case of versioning-enabled buckets, Amazon S3 assigns a version number. Otherwise, the version number is null.
For information about using other AWS SDKs, see Sample Code and Libraries.
Using REST
To delete a specific version of an object
• In a DELETE request, specify a version ID.

Example Deleting a Specific Version
The following example shows how to delete version UIORUnfnd89493jJFJ of photo.gif.

DELETE /photo.gif?versionId=UIORUnfnd89493jJFJ HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE
Content-Type: text/plain
Content-Length: 0
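In the AWS SDK for Java, deleting a specific version maps to the deleteVersion operation. A minimal sketch with placeholder bucket, key, and version ID values:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class DeleteObjectVersionExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Permanently deletes only the specified version of the object.
        // A simple deleteObject(bucket, key) call would instead add a delete marker.
        s3.deleteVersion("examplebucket", "photo.gif", "UIORUnfnd89493jJFJ");
    }
}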
    Related Topics
    Using MFA Delete (p 439)
    Working with Delete Markers (p 439)
    Removing Delete Markers (p 441)
    Using Versioning (p 423)
    Using MFA Delete
If a bucket's versioning configuration is MFA Delete–enabled, the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket. Requests that include x-amz-mfa must use HTTPS. The header's value is the concatenation of your authentication device's serial number, a space, and the authentication code displayed on it. If you do not include this request header, the request fails.
For more information about authentication devices, see http://aws.amazon.com/iam/details/mfa/.
Example Deleting an Object from an MFA Delete-Enabled Bucket
The following example shows how to delete my-image.jpg (with the specified version), which is in a bucket configured with MFA Delete enabled. Note the space between [SerialNumber] and [AuthenticationCode]. For more information, see DELETE Object.

DELETE /my-image.jpg?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTPS/1.1
Host: bucketName.s3.amazonaws.com
x-amz-mfa: 20899872 301749
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4cRonhpaBX5sCYVf1bNRuU

For more information about enabling MFA delete, see MFA Delete (p. 424).
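When you use the AWS SDK for Java, you can supply the same device serial number and authentication code through the request object. The following is a minimal sketch, assuming the MultiFactorAuthentication type and the DeleteVersionRequest constructor that accepts it are available in the SDK version you use; the serial number and token values are placeholders.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.MultiFactorAuthentication;

public class MfaDeleteExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Serial number of the MFA device and the code it currently displays.
        MultiFactorAuthentication mfa = new MultiFactorAuthentication("20899872", "301749");

        // Permanently delete the specified version from an MFA Delete-enabled bucket.
        s3.deleteVersion(new DeleteVersionRequest(
            "examplebucket", "my-image.jpg", "3HL4kqCxf3vjVBH40Nrjfkd", mfa));
    }
}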
    Working with Delete Markers
A delete marker is a placeholder (marker) for a versioned object that was named in a simple DELETE request. Because the object was in a versioning-enabled bucket, the object was not deleted. The delete marker, however, makes Amazon S3 behave as if it had been deleted.
A delete marker has a key name (or key) and version ID like any other object. However, a delete marker differs from other objects in the following ways:
• It does not have data associated with it.
• It is not associated with an access control list (ACL) value.
• It does not retrieve anything from a GET request because it has no data; you get a 404 error.
• The only operation you can use on a delete marker is DELETE, and only the bucket owner can issue such a request.
Delete markers accrue a nominal charge for storage in Amazon S3. The storage size of a delete marker is equal to the size of the key name of the delete marker. A key name is a sequence of Unicode characters. The UTF-8 encoding adds from 1 to 4 bytes of storage to your bucket for each character in the name. For more information about key names, see Object Keys (p. 99). For information about deleting a delete marker, see Removing Delete Markers (p. 441).
Only Amazon S3 can create a delete marker, and it does so whenever you send a DELETE Object request on an object in a versioning-enabled or suspended bucket. The object named in the DELETE request is not actually deleted. Instead, the delete marker becomes the current version of the object. (The object's key name (or key) becomes the key of the delete marker.) If you try to get an object and its current version is a delete marker, Amazon S3 responds with:
• A 404 (Object not found) error
• A response header, x-amz-delete-marker: true
The response header tells you that the object accessed was a delete marker. This response header never returns false; if the value is false, Amazon S3 does not include this response header in the response.
The following figure shows how a simple GET on an object whose current version is a delete marker returns a 404 No Object Found error.
The only way to list delete markers (and other versions of an object) is by using the versions subresource in a GET Bucket versions request. A simple GET does not retrieve delete marker objects. The following figure shows that a GET Bucket request does not return objects whose current version is a delete marker.
    Removing Delete Markers
To delete a delete marker, you must specify its version ID in a DELETE Object versionId request. If you use a DELETE request to delete a delete marker (without specifying the version ID of the delete marker), Amazon S3 does not delete the delete marker, but instead inserts another delete marker.
The following figure shows how a simple DELETE on a delete marker removes nothing, but adds a new delete marker to a bucket.
In a versioning-enabled bucket, this new delete marker would have a unique version ID. So, it's possible to have multiple delete markers of the same object in one bucket.
To permanently delete a delete marker, you must include its version ID in a DELETE Object versionId request. The following figure shows how a DELETE Object versionId request permanently removes a delete marker. Only the owner of a bucket can permanently remove a delete marker.
The effect of removing the delete marker is that a simple GET request will now retrieve the current version (121212) of the object.
To permanently remove a delete marker
1. Set versionId to the ID of the delete marker you want to remove.
2. Send a DELETE Object versionId request.

Example Removing a Delete Marker
The following example removes the delete marker for photo.gif version 4857693.

DELETE /photo.gif?versionId=4857693 HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4cRonhpaBX5sCYVf1bNRuU

When you delete a delete marker, Amazon S3 includes the following in the response:

204 NoContent
x-amz-version-id: versionID
x-amz-delete-marker: true
    Transitioning Object Versions
You can define lifecycle configuration rules for objects that have a well-defined lifecycle to transition object versions to the GLACIER storage class at a specific time in the object's lifetime. For more information, see Object Lifecycle Management (p. 109).
    Restoring Previous Versions
One of the value propositions of versioning is the ability to retrieve previous versions of an object. There are two approaches to doing so:
• Copy a previous version of the object into the same bucket.
The copied object becomes the current version of that object, and all object versions are preserved.
• Permanently delete the current version of the object.
When you delete the current object version, you, in effect, turn the previous version into the current version of that object.
Because all object versions are preserved, you can make any earlier version the current version by copying a specific version of the object into the same bucket. In the following figure, the source object (ID = 111111) is copied into the same bucket. Amazon S3 supplies a new ID (88778877), and it becomes the current version of the object. So, the bucket has both the original object version (111111) and its copy (88778877).
A subsequent GET will retrieve version 88778877.
The following figure shows how deleting the current version (121212) of an object leaves the previous version (111111) as the current object.
A subsequent GET will retrieve version 111111.
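The copy-based approach can be expressed with the AWS SDK for Java by copying a specific source version onto the same key. A minimal sketch, with placeholder bucket, key, and version ID values:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;

public class RestorePreviousVersionExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Copy the old version (111111) onto the same key in the same bucket.
        // Amazon S3 assigns the copy a new version ID, and the copy becomes the current version.
        CopyObjectRequest request = new CopyObjectRequest(
            "examplebucket", "my-image.jpg", "111111",   // source bucket, key, version ID
            "examplebucket", "my-image.jpg");            // destination bucket, key
        s3.copyObject(request);
    }
}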
    Versioned Object Permissions
Permissions are set at the version level. Each version has its own object owner; the AWS account that creates the object version is the owner. So, you can set different permissions for different versions of the same object. To do so, you must specify the version ID of the object whose permissions you want to set in a PUT Object versionId acl request. For a detailed description and instructions on using ACLs, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
Example Setting Permissions for an Object Version
The following request sets the permission of the grantee BucketOwner@amazon.com to FULL_CONTROL on the key my-image.jpg, version ID 3HL4kqtJvjVBH40Nrjfkd.

PUT /my-image.jpg?acl&versionId=3HL4kqtJvjVBH40Nrjfkd HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4cRonhpaBX5sCYVf1bNRuU
Content-Length: 124

<AccessControlPolicy>
  <Owner>
    <ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
    <DisplayName>mtd@amazon.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>a9a7b886d6fd24a52fe8ca5bef65f89a64e0193f23000e241bf9b1c61be666e9</ID>
        <DisplayName>BucketOwner@amazon.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Likewise, to get the permissions of a specific object version, you must specify its version ID in a GET Object versionId acl request. You need to include the version ID because, by default, GET Object acl returns the permissions of the current version of the object.

Example Retrieving the Permissions for a Specified Object Version
In the following example, Amazon S3 returns the permissions for the key my-image.jpg, version ID DVBH40Nr8X8gUMLUo.

GET /my-image.jpg?versionId=DVBH40Nr8X8gUMLUo&acl HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4cRonhpaBX5sCYVf1bNRuU

For more information, see GET Object acl.
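With the AWS SDK for Java, the version-aware ACL operations are overloads that accept a version ID. A minimal sketch with placeholder names that applies a canned public-read ACL to one version and then reads the ACL back:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class VersionedObjectAclExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();
        String bucket = "examplebucket";
        String key = "my-image.jpg";
        String versionId = "3HL4kqtJvjVBH40Nrjfkd";

        // Set a canned ACL on one specific version only; other versions are unaffected.
        s3.setObjectAcl(bucket, key, versionId, CannedAccessControlList.PublicRead);

        // Retrieve the ACL of that same version.
        AccessControlList acl = s3.getObjectAcl(bucket, key, versionId);
        System.out.println(acl);
    }
}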
Managing Objects in a Versioning-Suspended Bucket
Topics
• Adding Objects to Versioning-Suspended Buckets (p. 445)
• Retrieving Objects from Versioning-Suspended Buckets (p. 446)
• Deleting Objects from Versioning-Suspended Buckets (p. 446)
You suspend versioning to stop accruing new versions of the same object in a bucket. You might do this because you only want a single version of an object in a bucket, or you might not want to accrue charges for multiple versions.
When you suspend versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles objects in future requests. The topics in this section explain various object operations in a versioning-suspended bucket.
Adding Objects to Versioning-Suspended Buckets
Once you suspend versioning on a bucket, Amazon S3 automatically adds a null version ID to every subsequent object stored thereafter (using PUT, POST, or COPY) in that bucket.
The following figure shows how Amazon S3 adds the version ID of null to an object when it is added to a versioning-suspended bucket.
If a null version is already in the bucket and you add another object with the same key, the added object overwrites the original null version.
If there are versioned objects in the bucket, the version you PUT becomes the current version of the object. The following figure shows how adding an object to a bucket that contains versioned objects does not overwrite the object already in the bucket. In this case, version 111111 was already in the bucket. Amazon S3 attaches a version ID of null to the object being added and stores it in the bucket. Version 111111 is not overwritten.
If a null version already exists in a bucket, the null version is overwritten, as shown in the following figure.
Note that although the key and version ID (null) of the null version are the same before and after the PUT, the contents of the null version originally stored in the bucket are replaced by the contents of the object PUT into the bucket.
Retrieving Objects from Versioning-Suspended Buckets
A GET Object request returns the current version of an object whether you've enabled versioning on a bucket or not. The following figure shows how a simple GET returns the current version of an object.
Deleting Objects from Versioning-Suspended Buckets
If versioning is suspended, a DELETE request:
• Can only remove an object whose version ID is null.
It doesn't remove anything if there isn't a null version of the object in the bucket.
• Inserts a delete marker into the bucket.
The following figure shows how a simple DELETE removes a null version, and Amazon S3 inserts a delete marker in its place with a version ID of null.
Remember that a delete marker doesn't have content, so you lose the content of the null version when a delete marker replaces it.
The following figure shows a bucket that doesn't have a null version. In this case, the DELETE removes nothing; Amazon S3 just inserts a delete marker.
Even in a versioning-suspended bucket, the bucket owner can permanently delete a specified version. The following figure shows that deleting a specified object version permanently removes that object. Only the bucket owner can delete a specified object version.
Hosting a Static Website on Amazon S3
    Topics
    • Website Endpoints (p 450)
    • Configure a Bucket for Website Hosting (p 452)
    • Example Walkthroughs Hosting Websites On Amazon S3 (p 462)
You can host a static website on Amazon S3. On a static website, individual web pages include static content. They may also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
Note
Amazon Web Services (AWS) has resources for hosting dynamic websites. To learn more about website hosting on AWS, go to Websites and Website Hosting.
To host your static website, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. The website is then available at the region-specific website endpoint of the bucket:

<bucket-name>.s3-website-<AWS-region>.amazonaws.com

For a list of region-specific website endpoints for Amazon S3, see Website Endpoints (p. 450). For example, suppose you create a bucket called examplebucket in the US East (N. Virginia) Region and configure it as a website. The following example URLs provide access to your website content:
• This URL returns a default index document that you configured for the website.
http://examplebucket.s3-website-us-east-1.amazonaws.com/
• This URL requests the photo.jpg object, which is stored at the root level in the bucket.
http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg
• This URL requests the docs/doc1.html object in your bucket.
http://examplebucket.s3-website-us-east-1.amazonaws.com/docs/doc1.html
Using Your Own Domain
Instead of accessing the website by using an Amazon S3 website endpoint, you can use your own domain, such as example.com, to serve your content. Amazon S3, in conjunction with Amazon Route 53, supports hosting a website at the root domain. For example, if you have the root domain example.com and you host your website on Amazon S3, your website visitors can access the site from their browser by typing either http://www.example.com or http://example.com. For an example walkthrough, see Example: Setting Up a Static Website Using a Custom Domain (p. 464).
To configure a bucket for website hosting, you add website configuration to the bucket. For more information, see Configure a Bucket for Website Hosting (p. 452).
Website Endpoints
Topics
• Key Differences Between the Amazon Website and the REST API Endpoint (p. 451)
When you configure a bucket for website hosting, the website is available via the region-specific website endpoint. Website endpoints are different from the endpoints where you send REST API requests. For more information about the endpoints, see Request Endpoints (p. 13).
The two general forms of an Amazon S3 website endpoint are as follows:

bucket-name.s3-website-region.amazonaws.com
bucket-name.s3-website.region.amazonaws.com

For example, if your bucket is named examplebucket and it resides in the US East (N. Virginia) region, the website is available at the following Amazon S3 website endpoint:

http://examplebucket.s3-website-us-east-1.amazonaws.com/

Or, if your bucket is named examplebucket and it resides in the EU (Frankfurt) region, the website is available at the following Amazon S3 website endpoint:

http://examplebucket.s3-website.eu-central-1.amazonaws.com/
The following table lists Amazon S3 regions and the corresponding website endpoints.
Note
The website endpoints do not support https.

Region                              Website endpoint
US East (N. Virginia) region        bucket-name.s3-website-us-east-1.amazonaws.com
US West (N. California) region      bucket-name.s3-website-us-west-1.amazonaws.com
US West (Oregon) region             bucket-name.s3-website-us-west-2.amazonaws.com
Asia Pacific (Mumbai) region        bucket-name.s3-website.ap-south-1.amazonaws.com
Asia Pacific (Seoul) region         bucket-name.s3-website.ap-northeast-2.amazonaws.com
Asia Pacific (Singapore) region     bucket-name.s3-website-ap-southeast-1.amazonaws.com
Asia Pacific (Sydney) region        bucket-name.s3-website-ap-southeast-2.amazonaws.com
Asia Pacific (Tokyo) region         bucket-name.s3-website-ap-northeast-1.amazonaws.com
EU (Frankfurt) region               bucket-name.s3-website.eu-central-1.amazonaws.com
EU (Ireland) region                 bucket-name.s3-website-eu-west-1.amazonaws.com
South America (São Paulo) region    bucket-name.s3-website-sa-east-1.amazonaws.com
In order for your customers to access content at the website endpoint, you must make all your content publicly readable. To do so, you can use a bucket policy or an ACL on an object to grant the necessary permissions.
Note
Requester Pays buckets or DevPay buckets do not allow access through the website endpoint. Any request to such a bucket will receive a 403 Access Denied response. For more information, see Requester Pays Buckets (p. 92).
If you have a registered domain, you can add a DNS CNAME entry to point to the Amazon S3 website endpoint. For example, if you have the registered domain www.examplebucket.com, you could create a bucket www.examplebucket.com and add a DNS CNAME record that points to www.examplebucket.com.s3-website-<region>.amazonaws.com. All requests to http://www.examplebucket.com will be routed to www.examplebucket.com.s3-website-<region>.amazonaws.com. For more information, see Virtual Hosting of Buckets (p. 50).
Key Differences Between the Amazon Website and the REST API Endpoint
The website endpoint is optimized for access from a web browser. The following table describes the key differences between the Amazon REST API endpoint and the website endpoint.

Access control
  REST API endpoint: Supports both public and private content.
  Website endpoint: Supports only publicly readable content.
Error message handling
  REST API endpoint: Returns an XML-formatted error response.
  Website endpoint: Returns an HTML document.
Redirection support
  REST API endpoint: Not applicable.
  Website endpoint: Supports both object-level and bucket-level redirects.
Requests supported
  REST API endpoint: Supports all bucket and object operations.
  Website endpoint: Supports only GET and HEAD requests on objects.
Responses to GET and HEAD requests at the root of a bucket
  REST API endpoint: Returns a list of the object keys in the bucket.
  Website endpoint: Returns the index document that is specified in the website configuration.
Secure Sockets Layer (SSL) support
  REST API endpoint: Supports SSL connections.
  Website endpoint: Does not support SSL connections.
    Configure a Bucket for Website Hosting
    Topics
    • Overview (p 452)
    • Syntax for Specifying Routing Rules (p 454)
    • Index Document Support (p 457)
    • Custom Error Document Support (p 459)
    • Configuring a Web Page Redirect (p 460)
    • Permissions Required for Website Access (p 462)
Overview
To configure a bucket for static website hosting, you add a website configuration to your bucket. The configuration includes the following information:
• Index document
When you type a URL such as http://example.com, you are not requesting a specific page. In this case, the web server serves a default page for the directory where the requested website content is stored. This default page is referred to as the index document, and it is typically named index.html. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see Index Documents and Folders (p. 458).
• Error document
If an error occurs, Amazon S3 returns an HTML error document. For 4XX class errors, you can optionally provide your own custom error document, in which you can provide additional guidance to your users. For more information, see Custom Error Document Support (p. 459).
• Redirect all requests
If your root domain is example.com and you want to serve requests for both http://example.com and http://www.example.com, you can create two buckets named example.com and www.example.com, maintain website content in only one bucket, say example.com, and configure the other bucket to redirect all requests to the example.com bucket.
• Advanced conditional redirects
You can conditionally route requests according to specific object key names or prefixes in the request, or according to the response code. For example, suppose that you delete or rename an object in your bucket. You can add a routing rule that redirects the request to another object. Suppose that you want to make a folder unavailable. You can add a routing rule to redirect the request to another page, which explains why the folder is no longer available. You can also add a routing rule to handle an error condition by routing requests that return the error to another domain, where the error will be processed.
You can manage your bucket's website configuration using the Amazon S3 console. The bucket Properties panel in the console enables you to specify the website configuration.
To host a static website on Amazon S3, you need only provide the name of the index document.
To redirect all requests to the bucket's website endpoint to another host, you only need to provide the host name. A minimal programmatic sketch of both configurations follows.
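The same two configurations can also be applied with the AWS SDK for Java. This is a minimal sketch with placeholder bucket and host names; the console steps above remain the walkthrough this guide uses.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;
import com.amazonaws.services.s3.model.RedirectRule;

public class WebsiteConfigurationExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Host a website: serve index.html as the index document and error.html for errors.
        s3.setBucketWebsiteConfiguration("example.com",
            new BucketWebsiteConfiguration("index.html", "error.html"));

        // Redirect all requests made to another bucket's website endpoint to the first host.
        BucketWebsiteConfiguration redirectAll = new BucketWebsiteConfiguration();
        redirectAll.setRedirectAllRequestsTo(new RedirectRule().withHostName("example.com"));
        s3.setBucketWebsiteConfiguration("www.example.com", redirectAll);
    }
}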
However, when configuring a bucket for website hosting, you can optionally specify advanced redirection rules.
You describe the rules using XML. The following section provides general syntax and examples of specifying redirection rules.
    Syntax for Specifying Routing Rules
The following is general syntax for defining the routing rules in a website configuration:

<RoutingRules> =
    <RoutingRules>
        <RoutingRule>...</RoutingRule>
        [<RoutingRule>...</RoutingRule>
         ...]
    </RoutingRules>

<RoutingRule> =
    <RoutingRule>
        [ <Condition>...</Condition> ]
        <Redirect>...</Redirect>
    </RoutingRule>

<Condition> =
    <Condition>
        [ <KeyPrefixEquals>...</KeyPrefixEquals> ]
        [ <HttpErrorCodeReturnedEquals>...</HttpErrorCodeReturnedEquals> ]
    </Condition>
    Note: <Condition> must have at least one child element.

<Redirect> =
    <Redirect>
        [ <HostName>...</HostName> ]
        [ <Protocol>...</Protocol> ]
        [ <ReplaceKeyPrefixWith>...</ReplaceKeyPrefixWith> ]
        [ <ReplaceKeyWith>...</ReplaceKeyWith> ]
        [ <HttpRedirectCode>...</HttpRedirectCode> ]
    </Redirect>
    Note: <Redirect> must have at least one child element.
          Also, you can have either ReplaceKeyPrefixWith or ReplaceKeyWith, but not both.
The following table describes the elements in the routing rule.

RoutingRules: Container for a collection of RoutingRule elements.
RoutingRule: A rule that identifies a condition and the redirect that is applied when the condition is met. Condition: A RoutingRules container must contain at least one routing rule.
Condition: Container for describing a condition that must be met for the specified redirect to be applied. If the routing rule does not include a condition, the rule is applied to all requests.
KeyPrefixEquals: The object key name prefix from which requests will be redirected. KeyPrefixEquals is required if HttpErrorCodeReturnedEquals is not specified. If both KeyPrefixEquals and HttpErrorCodeReturnedEquals are specified, both must be true for the condition to be met.
HttpErrorCodeReturnedEquals: The HTTP error code that must match for the redirect to apply. In the event of an error, if the error code meets this value, then the specified redirect applies. HttpErrorCodeReturnedEquals is required if KeyPrefixEquals is not specified. If both KeyPrefixEquals and HttpErrorCodeReturnedEquals are specified, both must be true for the condition to be met.
Redirect: Container element that provides instructions for redirecting the request. You can redirect requests to another host or another page, or you can specify another protocol to use. A RoutingRule must have a Redirect element. A Redirect element must contain at least one of the following sibling elements: Protocol, HostName, ReplaceKeyPrefixWith, ReplaceKeyWith, or HttpRedirectCode.
Protocol: The protocol, http or https, to be used in the Location header that is returned in the response. Protocol is not required if one of its siblings is supplied.
HostName: The host name to be used in the Location header that is returned in the response. HostName is not required if one of its siblings is supplied.
ReplaceKeyPrefixWith: The object key name prefix that will replace the value of KeyPrefixEquals in the redirect request. ReplaceKeyPrefixWith is not required if one of its siblings is supplied. It can be supplied only if ReplaceKeyWith is not supplied.
ReplaceKeyWith: The object key to be used in the Location header that is returned in the response. ReplaceKeyWith is not required if one of its siblings is supplied. It can be supplied only if ReplaceKeyPrefixWith is not supplied.
HttpRedirectCode: The HTTP redirect code to be used in the Location header that is returned in the response. HttpRedirectCode is not required if one of its siblings is supplied.
The following are some examples.
Example 1: Redirect after renaming a key prefix
Suppose your bucket contains the following objects:

index.html
docs/article1.html
docs/article2.html

Now you decide to rename the folder from docs/ to documents/. After you make this change, you will need to redirect requests for the prefix docs/ to documents/. For example, a request for docs/article1.html will need to be redirected to documents/article1.html.
In this case, you add the following routing rule to the website configuration:

<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>docs/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
Example 2: Redirect requests for a deleted folder to a page
Suppose you delete the images/ folder (that is, you delete all objects with the key prefix images/). You can add a routing rule that redirects requests for any object with the key prefix images/ to a page named folderdeleted.html:

<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>images/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyWith>folderdeleted.html</ReplaceKeyWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
Example 3: Redirect for an HTTP error
Suppose that when a requested object is not found, you want to redirect requests to an Amazon EC2 instance. You can add a redirection rule so that when an HTTP status code 404 (Not Found) is returned, the site visitor is redirected to an EC2 instance that will handle the request. The following example also inserts the object key prefix report-404/ in the redirect. For example, if you request a page ExamplePage.html and it results in an HTTP 404 error, the request is redirected to a page report-404/ExamplePage.html on the specified EC2 instance. If there is no routing rule and the HTTP error 404 occurs, the error document specified in the configuration is returned.

<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>ec2-11-22-33-44.compute-1.amazonaws.com</HostName>
      <ReplaceKeyPrefixWith>report-404/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
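The same rule can be expressed programmatically. The following is a minimal sketch using the AWS SDK for Java model classes; the bucket name and EC2 host name are placeholders taken from the example above.

import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;
import com.amazonaws.services.s3.model.RedirectRule;
import com.amazonaws.services.s3.model.RoutingRule;
import com.amazonaws.services.s3.model.RoutingRuleCondition;

public class RoutingRuleExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Condition: apply the rule when Amazon S3 would return a 404 (Not Found) error.
        RoutingRuleCondition condition = new RoutingRuleCondition()
            .withHttpErrorCodeReturnedEquals("404");

        // Redirect: send the request to the EC2 host, prefixing the key with report-404/.
        RedirectRule redirect = new RedirectRule()
            .withHostName("ec2-11-22-33-44.compute-1.amazonaws.com")
            .withReplaceKeyPrefixWith("report-404/");

        RoutingRule rule = new RoutingRule().withCondition(condition).withRedirect(redirect);

        BucketWebsiteConfiguration config = new BucketWebsiteConfiguration("index.html");
        config.setRoutingRules(Arrays.asList(rule));
        s3.setBucketWebsiteConfiguration("examplebucket", config);
    }
}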
    Index Document Support
An index document is a webpage that is returned when a request is made to the root of a website or any subfolder. For example, if a user enters http://www.example.com in the browser, the user is not requesting any specific page. In that case, Amazon S3 serves up the index document, which is sometimes referred to as the default page.
When you configure your bucket as a website, you should provide the name of the index document. You must upload an object with this name and configure it to be publicly readable. For information about configuring a bucket as a website, see Example: Setting Up a Static Website (p. 463).
The trailing slash at the root-level URL is optional. For example, if you configure your website with index.html as the index document, either of the following two URLs will return index.html:

http://example-bucket.s3-website-region.amazonaws.com/
http://example-bucket.s3-website-region.amazonaws.com
For more information about Amazon S3 website endpoints, see Website Endpoints (p. 450).
Index Documents and Folders
In Amazon S3, a bucket is a flat container of objects; it does not provide any hierarchical organization as the file system on your computer does. You can create a logical hierarchy by using object key names that imply a folder structure. For example, consider a bucket with three objects and the following key names:

sample1.jpg
photos/2006/Jan/sample2.jpg
photos/2006/Feb/sample3.jpg

Although these are stored with no physical hierarchical organization, you can infer the following logical folder structure from the key names:
The sample1.jpg object is at the root of the bucket.
The sample2.jpg object is in the photos/2006/Jan/ subfolder.
The sample3.jpg object is in the photos/2006/Feb/ subfolder.
The folder concept that the Amazon S3 console supports is based on object key names. To continue the previous example, the console displays the ExampleBucket with a photos folder.
You can upload objects to the bucket or to the photos folder within the bucket. If you add the object sample.jpg to the bucket, the key name is sample.jpg. If you upload the object to the photos folder, the object key name is photos/sample.jpg.
If you create such a folder structure in your bucket, you must have an index document at each level. When a user specifies a URL that resembles a folder lookup, the presence or absence of a trailing slash determines the behavior of the website. For example, the following URL, with a trailing slash, returns the photos/index.html index document:

http://example-bucket.s3-website-region.amazonaws.com/photos/

However, if you exclude the trailing slash from the preceding URL, Amazon S3 first looks for an object photos in the bucket. If the photos object is not found, then it searches for an index document, photos/index.html. If that document is found, Amazon S3 returns a 302 Found message and points to the photos/ key. For subsequent requests to photos/, Amazon S3 returns photos/index.html. If the index document is not found, Amazon S3 returns an error.
    Custom Error Document Support
The following table lists the subset of HTTP response codes that Amazon S3 returns when an error occurs.

301 Moved Permanently
  When a user sends a request directly to the Amazon S3 website endpoints (http://s3-website-<region>.amazonaws.com/), Amazon S3 returns a 301 Moved Permanently response and redirects those requests to http://aws.amazon.com/s3/.
302 Found
  When Amazon S3 receives a request for a key x, http://<bucket>.s3-website-<region>.amazonaws.com/x, without a trailing slash, it first looks for the object with the key name x. If the object is not found, Amazon S3 determines that the request is for the subfolder x, redirects the request by adding a slash at the end, and returns 302 Found.
304 Not Modified
  Amazon S3 uses the request headers If-Modified-Since, If-Unmodified-Since, If-Match, and/or If-None-Match to determine whether the requested object is the same as the cached copy held by the client. If the object is the same, the website endpoint returns a 304 Not Modified response.
400 Malformed Request
  The website endpoint responds with a 400 Malformed Request when a user attempts to access a bucket through the incorrect regional endpoint.
403 Forbidden
  The website endpoint responds with a 403 Forbidden when a user request translates to an object that is not publicly readable. The object owner must make the object publicly readable using a bucket policy or an ACL.
404 Not Found
  The website endpoint responds with 404 Not Found for the following reasons:
  • Amazon S3 determines the website URL refers to an object key that does not exist.
  • Amazon infers the request is for an index document that does not exist.
  • A bucket specified in the URL does not exist.
  • A bucket specified in the URL exists; however, it is not configured as a website.
  You can create a custom document that is returned for 404 Not Found. Make sure the document is uploaded to the bucket configured as a website and that the website hosting configuration is set to use the document.
  For information on how Amazon S3 interprets the URL as a request for an object or an index document, see Index Document Support (p. 457).
500 Service Error
  The website endpoint responds with a 500 Service Error when an internal server error occurs.
503 Service Unavailable
  The website endpoint responds with a 503 Service Unavailable when Amazon S3 determines that you need to reduce your request rate.
For each of these errors, Amazon S3 returns a predefined HTML error message, such as the one returned for a 403 Forbidden response.
You can optionally provide a custom error document with a user-friendly error message and with additional help. You provide this custom error document as part of adding website configuration to your bucket. Amazon S3 returns your custom error document for only the HTTP 4XX class of error codes.
Error Documents and Browser Behavior
When an error occurs, Amazon S3 returns an HTML error document. If you have configured your website with a custom error document, Amazon S3 returns that error document. However, note that when an error occurs, some browsers display their own error message, ignoring the error document Amazon S3 returns. For example, when an HTTP 404 Not Found error occurs, Chrome might display its own error, ignoring the error document that Amazon S3 returns.
    Configuring a Web Page Redirect
If your Amazon S3 bucket is configured for website hosting, you can redirect requests for an object to another object in the same bucket or to an external URL. You set the redirect by adding the x-amz-website-redirect-location property to the object metadata. The website then interprets the object as a 301 redirect. To redirect a request to another object, you set the redirect location to the key of the target object. To redirect a request to an external URL, you set the redirect location to the URL that you want. For more information about object metadata, see System-Defined Metadata (p. 101).
A bucket configured for website hosting has both the website endpoint and the REST endpoint. A request for a page that is configured as a 301 redirect has the following possible outcomes, depending on the endpoint of the request:
• Region-specific website endpoint – Amazon S3 redirects the page request according to the value of the x-amz-website-redirect-location property.
• REST endpoint – Amazon S3 does not redirect the page request. It returns the requested object.
For more information about the endpoints, see Key Differences Between the Amazon Website and the REST API Endpoint (p. 451).
You can set a page redirect from the Amazon S3 console or by using the Amazon S3 REST API.
Page Redirect Support in the Amazon S3 Console
You can use the Amazon S3 console to set the website redirect location in the metadata of the object. When you set a page redirect, you can either keep or delete the source object content. For example, suppose you have a page1.html object in your bucket. To redirect any requests for this page to another object, page2.html, you can do one of the following:
• To keep the content of the page1.html object and only redirect page requests, under Properties for page1.html, click the Metadata tab. Add Website Redirect Location to the metadata, as shown in the following example, and set its value to /page2.html. The / prefix in the value is required.
You can also set the value to an external URL, such as http://www.example.com.
• To delete the content of the page1.html object and redirect requests, you can upload a new zero-byte object with the same key, page1.html, to replace the existing object, and then specify Website Redirect Location for page1.html in the upload process. For information about uploading an object, go to Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.
Setting a Page Redirect from the REST API
The following Amazon S3 API actions support the x-amz-website-redirect-location header in the request. Amazon S3 stores the header value in the object metadata as x-amz-website-redirect-location.
• PUT Object
• Initiate Multipart Upload
• POST Object
• PUT Object - Copy
When setting a page redirect, you can either keep or delete the object content. For example, suppose you have a page1.html object in your bucket.
• To keep the content of page1.html and only redirect page requests, you can submit a PUT Object - Copy request to create a new page1.html object that uses the existing page1.html object as the source. In your request, you set the x-amz-website-redirect-location header. When the request is complete, you have the original page with its content unchanged, but Amazon S3 redirects any requests for the page to the redirect location that you specify.
• To delete the content of the page1.html object and redirect requests for the page, you can send a PUT Object request to upload a zero-byte object that has the same object key, page1.html. In the PUT request, you set x-amz-website-redirect-location for page1.html to the new object. When the request is complete, page1.html has no content, and any requests will be redirected to the location that is specified by x-amz-website-redirect-location.
When you retrieve the object using the GET Object action, along with other object metadata, Amazon S3 returns the x-amz-website-redirect-location header in the response.
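As a sketch of the second case (a zero-byte object whose only purpose is the redirect), the AWS SDK for Java lets you pass the redirect location when you put the object. The bucket and keys below are placeholders, and the PutObjectRequest constructor that accepts a redirect location is assumed to be available in the SDK version you use.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class PageRedirectExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();

        // Creates (or replaces) page1.html as a zero-byte object whose
        // x-amz-website-redirect-location metadata points to /page2.html.
        s3.putObject(new PutObjectRequest("examplebucket", "page1.html", "/page2.html"));
    }
}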
Permissions Required for Website Access
When you configure a bucket as a website, you must make the objects that you want to serve publicly readable. To do so, you write a bucket policy that grants everyone s3:GetObject permission. On the website endpoint, if a user requests an object that does not exist, Amazon S3 returns HTTP response code 404 (Not Found). If the object exists but you have not granted read permission on the object, the website endpoint returns HTTP response code 403 (Access Denied). The user can use the response code to infer whether a specific object exists or not. If you do not want this behavior, you should not enable website support for your bucket.
The following sample bucket policy grants everyone access to the objects in the specified folder. For more information on bucket policies, see Using Bucket Policies and User Policies (p. 308).
{
  "Version":"2012-10-17",
  "Statement":[{
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Note
The bucket policy applies only to objects owned by the bucket owner. If your bucket contains objects not owned by the bucket owner, then public READ permission on those objects should be granted using the object ACL.
You can grant public read permission to your objects by using either a bucket policy or an object ACL. To make an object publicly readable using an ACL, you grant READ permission to the AllUsers group, as shown in the following grant element. You add this grant element to the object ACL. For information on managing ACLs, see Managing Access with ACLs (p. 364).
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
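Either mechanism can be applied programmatically. A minimal sketch with the AWS SDK for Java, using a placeholder bucket and key; the policy text mirrors the sample policy above.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class PublicReadExample {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client();
        String bucket = "examplebucket";

        // Option 1: attach a bucket policy that makes every object publicly readable.
        String policyText =
            "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicReadGetObject\","
            + "\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":[\"s3:GetObject\"],"
            + "\"Resource\":[\"arn:aws:s3:::" + bucket + "/*\"]}]}";
        s3.setBucketPolicy(bucket, policyText);

        // Option 2: grant public READ on a single object through its ACL.
        s3.setObjectAcl(bucket, "index.html", CannedAccessControlList.PublicRead);
    }
}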
Example Walkthroughs Hosting Websites On Amazon S3
Topics
    • Example Setting Up a Static Website (p 463)
    • Example Setting Up a Static Website Using a Custom Domain (p 464)
This section provides two examples. In the first example, you configure a bucket for website hosting, upload a sample index document, and test the website using the Amazon S3 website endpoint for the bucket. The second example shows how you can use your own domain, such as example.com, instead of the Amazon S3 bucket website endpoint, and serve content from an Amazon S3 bucket configured as a website. The example also shows how Amazon S3 offers root domain support.
Example: Setting Up a Static Website
You can configure an Amazon S3 bucket to function like a website. This example walks you through the steps of hosting a website on Amazon S3. In the following procedure, you will use the AWS Management Console to perform the necessary tasks:
1. Create an Amazon S3 bucket and configure it as a website (see To create a bucket and configure it as a website (p. 463)).
2. Add a bucket policy that makes the bucket content public (see To add a bucket policy that makes your bucket content publicly available (p. 463)).
The content that you serve at the website endpoint must be publicly readable. You can grant the necessary permissions by adding a bucket policy or using an Access Control List (ACL). Here we describe adding a bucket policy.
3. Upload an index document (see To upload an index document (p. 464)).
4. Test your website using the Amazon S3 bucket website endpoint (see Test your website (p. 464)).
To create a bucket and configure it as a website
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Create a bucket.
For step-by-step instructions, go to Create a Bucket in the Amazon Simple Storage Service Console User Guide.
For bucket naming guidelines, see Bucket Restrictions and Limitations (p. 62). If you have a registered domain name, for additional information about bucket naming, see Customizing Amazon S3 URLs with CNAMEs (p. 53).
3. Open the bucket Properties panel, click Static Website Hosting, and do the following:
1. Select Enable website hosting.
2. In the Index Document box, add the name of your index document. This name is typically index.html.
3. Click Save to save the website configuration.
4. Note down the Endpoint.
This is the Amazon S3-provided website endpoint for your bucket. You will use this endpoint in the following steps to test your website.
To add a bucket policy that makes your bucket content publicly available
1. In the bucket Properties panel, click Permissions.
2. Click Add Bucket Policy.
3. Copy the following bucket policy, and then paste it in the Bucket Policy Editor.
{
  "Version":"2012-10-17",
  "Statement":[{
      "Sid":"PublicReadForGetBucketObjects",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::examplebucket/*"]
    }
  ]
}

4. In the policy, replace examplebucket with the name of your bucket.
5. Click Save.
To upload an index document
1. Create a document. The file name must be the same as the name that you provided for the index document earlier.
2. Using the console, upload the index document to your bucket.
For instructions, go to Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.
Test your website
• Enter the following URL in the browser, replacing examplebucket with the name of your bucket and website-region with the name of the region where you deployed your bucket. For information about region names, see Website Endpoints (p. 450).

http://examplebucket.s3-website-region.amazonaws.com

If your browser displays your index.html page, the website was successfully deployed.
Note
HTTPS access to the website is not supported.
You now have a website hosted on Amazon S3. This website is available at the Amazon S3 website endpoint. However, you might have a domain, such as example.com, that you want to use to serve the content from the website you created. You might also want to use Amazon S3's root domain support to serve requests for both http://www.example.com and http://example.com. This requires additional steps. For an example, see Example: Setting Up a Static Website Using a Custom Domain (p. 464).
Example: Setting Up a Static Website Using a Custom Domain
    Topics
    • Before You Begin (p 465)
    • Step 1 Register a Domain (p 465)
    • Step 2 Create and Configure Buckets and Upload Data (p 465)
    • Step 3 Create and Configure Amazon Route 53 Hosted Zone (p 469)
    • Step 4 Switch to Amazon Route 53 as Your DNS Provider (p 470)
    • Step 5 Testing (p 471)
Suppose you want to host your static website on Amazon S3. You have registered a domain, for example, example.com, and you want requests for http://www.example.com and http://example.com to be served from your Amazon S3 content.
Whether you have an existing static website that you now want to host on Amazon S3, or you are starting from scratch, this example will help you host websites on Amazon S3.
Before You Begin
As you walk through the steps in this example, note that you will work with the following services:
Domain registrar of your choice – If you do not already have a registered domain name, such as example.com, you will need to create and register one with a registrar of your choice. You can typically register a domain for a small yearly fee. For procedural information about registering a domain name, see the web site of the registrar.
Amazon S3 – You will use Amazon S3 to create buckets, upload a sample website page, configure permissions so everyone can see the content, and then configure the buckets for website hosting. In this example, because you want to allow requests for both http://www.example.com and http://example.com, you will create two buckets; however, you will host content in only one bucket. You will configure the other Amazon S3 bucket to redirect requests to the bucket that hosts the content.
Amazon Route 53 – You will configure Amazon Route 53 as your DNS provider. You will create a hosted zone in Amazon Route 53 for your domain and configure applicable DNS records. If you are switching from an existing DNS provider, you will need to ensure that you have transferred all of the DNS records for your domain.
As you walk through this example, a basic familiarity with domains, Domain Name System (DNS), CNAME records, and A records would be helpful. A detailed explanation of these concepts is beyond the scope of this guide, but your domain registrar should provide any basic information that you need. In this step we use Amazon Route 53; however, most registrars can be used to define a CNAME record pointing to an Amazon S3 bucket.
Note
All the steps in this example use example.com as a domain name. You will need to replace this domain name with the one you registered.
Step 1: Register a Domain
If you already have a registered domain, you can skip this step. If you are new to hosting a website, your first step is to register a domain, such as example.com, with a registrar of your choice.
After you have chosen a registrar, you will register your domain name according to the instructions at the registrar's website. For a list of registrar web sites that you can use to register your domain name, see Information for Registrars and Registrants at the ICANN.org website.
When you have a registered domain name, your next task is to create and configure Amazon S3 buckets for website hosting and to upload your website content.
Step 2: Create and Configure Buckets and Upload Data
In this example, to support requests from both the root domain, such as example.com, and the subdomain, such as www.example.com, you will create two buckets. One bucket will contain the content, and you will configure the other bucket to redirect requests. You perform the following tasks in the Amazon S3 console to create and configure your website:
1. Create two buckets.
2. Configure these buckets for website hosting.
3. Test the Amazon S3 provided bucket website endpoint.
Step 2.1: Create Two Buckets
The bucket names must match the names of the website that you are hosting. For example, to host your example.com website on Amazon S3, you would create a bucket named example.com. To host a website under www.example.com, you would name the bucket www.example.com. In this example, your website will support requests from both example.com and www.example.com.
In this step, you will sign in to the Amazon S3 console with your AWS account credentials and create the following two buckets:
• example.com
• www.example.com
Note
To create the buckets for this example, follow these steps. As you walk through this example, substitute the domain name that you registered for example.com.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Create two buckets that match your domain name and subdomain, for instance, example.com and www.example.com.
For step-by-step instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
Note
Like domains, subdomains must have their own Amazon S3 buckets, and the buckets must share the exact names as the subdomains. In this example, we are creating the www.example.com subdomain, so we need to have an Amazon S3 bucket named www.example.com as well.
3. Upload your website data to the example.com bucket.
You will host your content out of the root domain bucket (example.com), and you will redirect requests for www.example.com to the root domain bucket. Note that you can store content in either bucket. For this example, you will host content in the example.com bucket. The content can be text files, family photos, videos—whatever you want. If you have not yet created a website, then you only need one file for this example. You can upload any file. For example, you can create a file using the following HTML and upload it to the bucket. The file name of the home page of a website is typically index.html, but you can give it any name. In a later step, you will provide this file name as the index document name for your website.


    My Website Home Page<title> <br ><head> <br ><body> <br > <h1>Welcome to my website<h1> <br > <p>Now hosted on Amazon S3<p> <br ><body> <br ><html> <br >API Version 20060301 <br >466Amazon Simple Storage Service Developer Guide <br >Example Setting Up a Static <br >Website Using a Custom Domain <br >For stepbystep instructions go to Uploading Objects into Amazon S3 in the Amazon Simple <br >Storage Service Console User Guide <br >4 Configure permissions for your objects to make them publicly accessible <br >Attach the following bucket policy to the examplecom bucket substituting the name of your <br >bucket for examplecom For stepbystep instructions to attach a bucket policy go to Editing <br >Bucket Permissions in the Amazon Simple Storage Service Console User Guide <br >{ <br > Version20121017 <br > Statement[{ <br > SidAddPerm <br > EffectAllow <br > Principal * <br > Action[s3GetObject] <br > Resource[arnawss3examplecom* <br > ] <br > } <br > ] <br >} <br >You now have two buckets examplecom and wwwexamplecom and you have uploaded <br >your website content to the examplecom bucket In the next step you will configure <br >wwwexamplecom to redirect requests to your examplecom bucket By redirecting requests you <br >can maintain only one copy of your website content and both visitors who specify www in their <br >browsers and visitors that only specify the root domain will both be routed to the same website <br >content in your examplecom bucket <br >Step 22 Configure Buckets for Website Hosting <br >When you configure a bucket for website hosting you can access the website using the Amazon S3 <br >assigned bucket website endpoint <br >In this step you will configure both buckets for website hosting First you will configure examplecom <br >as a website and then you'll configure wwwexamplecom to redirect all requests to the examplecom <br >bucket <br >To configure examplecom bucket for website hosting <br >1 Configure examplecom bucket for website hosting In the Index Document box type the name <br >that you gave your index page <br >For stepbystepinstructions go to Managing Bucket Website Configuration in the Amazon Simple <br >Storage Service Console User Guide Make a note of the URL for the website endpoint You will <br >need it later <br >API Version 20060301 <br >467Amazon Simple Storage Service Developer Guide <br >Example Setting Up a Static <br >Website Using a Custom Domain <br >2 To test the website enter the Endpoint URL in your browser <br >Your browser will display the index document page Next you will configure wwwexamplecom <br >bucket to redirect all requests for wwwexamplecom to examplecom <br >To redirect requests from wwwexamplecom to examplecom <br >1 In the Amazon S3 console in the Buckets list rightclick wwwexamplecom and then click <br >Properties <br >2 Under Static Website Hosting click Redirect all requests to another host name In the <br >Redirect all requests box type examplecom <br >3 To test the website enter the Endpoint URL in your browser <br >Your request will be redirected and the browser will display the index document for examplecom <br >The following Amazon S3 bucket website endpoints are accessible to any internet user <br >examplecoms3websiteuseast1amazonawscom <br >API Version 20060301 <br >468Amazon Simple Storage Service Developer Guide <br >Example Setting Up a Static <br >Website Using a Custom Domain <br >httpwwwexamplecoms3websiteuseast1amazonawscom <br >Now you will do additional configuration to serve requests from the domain you 
Now you will do additional configuration to serve requests from the domain you registered in the preceding step. For example, if you registered the domain example.com, you want to serve requests from the following URLs:

http://example.com
http://www.example.com

In the next step, we will use Amazon Route 53 to enable customers to use the URLs above to navigate to your site.

Step 3: Create and Configure an Amazon Route 53 Hosted Zone

Now you will configure Amazon Route 53 as your Domain Name System (DNS) provider. You must use Amazon Route 53 if you want to serve content from your root domain, such as example.com. You will create a hosted zone, which holds the DNS records associated with your domain:

• An alias record that maps the domain example.com to the example.com bucket. This is the bucket that you configured as a website endpoint in step 2.2.
• Another alias record that maps the subdomain www.example.com to the www.example.com bucket. You configured this bucket to redirect requests to the example.com bucket in step 2.2.

Step 3.1: Create a Hosted Zone for Your Domain

Go to the Amazon Route 53 console at https://console.aws.amazon.com/route53 and create a hosted zone for your domain. For instructions, go to Creating a Hosted Zone in the Amazon Route 53 Developer Guide (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/).

The following example shows the hosted zone created for the example.com domain. Write down the Amazon Route 53 name servers (NS) for this domain. You will need them later.

Step 3.2: Add Alias Records for example.com and www.example.com

The alias records that you add to the hosted zone for your domain will map example.com and www.example.com to the corresponding Amazon S3 buckets. Instead of using IP addresses, the alias records use the Amazon S3 website endpoints. Amazon Route 53 maintains a mapping between the alias records and the IP addresses where the Amazon S3 buckets reside.

For step-by-step instructions, see Creating Resource Record Sets by Using the Amazon Route 53 Console in the Amazon Route 53 Developer Guide.

The following screenshot shows the alias record for example.com as an illustration. You will also need to create an alias record for www.example.com.

To enable this hosted zone, you must use Amazon Route 53 as the DNS server for your domain example.com. If you are moving an existing website to Amazon S3, before you switch you must transfer the DNS records associated with your domain example.com to the hosted zone that you created in Amazon Route 53 for your domain. If you are creating a new website, you can go directly to step 4.

Note
Creating, changing, and deleting resource record sets takes time to propagate to the Route 53 DNS servers. Changes generally propagate to all Route 53 name servers in a couple of minutes. In rare circumstances, propagation can take up to 30 minutes.
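If you manage DNS changes programmatically, an alias record like the one described in step 3.2 can also be created with the AWS SDK for Java. This is only a sketch under several assumptions: Z1EXAMPLE is a placeholder for the hosted zone ID you created in step 3.1, and Z3AQBSTGFYJSTF is assumed to be the fixed Route 53 hosted zone ID for the s3-website-us-east-1.amazonaws.com endpoint (verify the value for the region you are actually using).

import java.util.Arrays;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.route53.AmazonRoute53;
import com.amazonaws.services.route53.AmazonRoute53Client;
import com.amazonaws.services.route53.model.AliasTarget;
import com.amazonaws.services.route53.model.Change;
import com.amazonaws.services.route53.model.ChangeAction;
import com.amazonaws.services.route53.model.ChangeBatch;
import com.amazonaws.services.route53.model.ChangeResourceRecordSetsRequest;
import com.amazonaws.services.route53.model.RRType;
import com.amazonaws.services.route53.model.ResourceRecordSet;

public class CreateWebsiteAliasRecord {
    public static void main(String[] args) {
        AmazonRoute53 route53 = new AmazonRoute53Client(new ProfileCredentialsProvider());

        // Alias target: the S3 website endpoint and its hosted zone ID (assumed us-east-1 values).
        AliasTarget aliasTarget = new AliasTarget("Z3AQBSTGFYJSTF", "s3-website-us-east-1.amazonaws.com")
                .withEvaluateTargetHealth(false);

        ResourceRecordSet recordSet = new ResourceRecordSet("example.com.", RRType.A)
                .withAliasTarget(aliasTarget);

        ChangeBatch changeBatch = new ChangeBatch(
                Arrays.asList(new Change(ChangeAction.CREATE, recordSet)));

        // "Z1EXAMPLE" is a placeholder for your own hosted zone ID.
        route53.changeResourceRecordSets(
                new ChangeResourceRecordSetsRequest("Z1EXAMPLE", changeBatch));
    }
}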
Step 3.3: Transfer Other DNS Records from Your Current DNS Provider to Amazon Route 53

Before you switch to Amazon Route 53 as your DNS provider, you must transfer any remaining DNS records, including MX records, CNAME records, and A records, from your current DNS provider to Amazon Route 53. You don't need to transfer the following records:

• NS records – Instead of transferring these, you replace their values with the name server values that are provided by Amazon Route 53.
• SOA record – Amazon Route 53 provides this record in the hosted zone with a default value.

Migrating the required DNS records is a critical step to ensure the continued availability of all the existing services hosted under the domain name.

Step 4: Switch to Amazon Route 53 as Your DNS Provider

To switch to Amazon Route 53 as your DNS provider, go to your current DNS provider and update the name server (NS) record to use the name servers in your delegation set in Amazon Route 53.

Go to your DNS provider's site and update the NS record with the delegation set values of the hosted zone, as shown in the following Amazon Route 53 console screenshot. For more information, go to Updating Your DNS Service's Name Server Records in the Amazon Route 53 Developer Guide.

When the transfer to Amazon Route 53 is complete, there are tools that you can use to verify that the name server for your domain has indeed changed. On a Linux computer, you can use the dig DNS lookup utility. For example, this dig command:

dig +recurse +trace www.example.com any

returns the following output (only partial output is shown). The output shows the same four name servers as the Amazon Route 53 hosted zone you created for the example.com domain.

...
example.com.     172800  IN  NS  ns-9999.awsdns-99.com.
example.com.     172800  IN  NS  ns-9999.awsdns-99.org.
example.com.     172800  IN  NS  ns-9999.awsdns-99.co.uk.
example.com.     172800  IN  NS  ns-9999.awsdns-99.net.

www.example.com. 300 IN CNAME www.example.com.s3-website-us-east-1.amazonaws.com.
...

Step 5: Testing

To verify that the website is working correctly, try the following URLs in your browser:

• http://example.com – Displays the index document in the example.com bucket.
• http://www.example.com – Redirects your request to http://example.com.

In some cases, you may need to clear the cache to see the expected behavior.

Configuring Amazon S3 Event Notifications

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the event notifications. You store this configuration in the notification subresource (see Bucket Configuration Options (p. 61)) associated with a bucket. Amazon S3 provides an API for you to manage this subresource.

Topics
• Overview (p. 472)
• How to Enable Event Notifications (p. 473)
• Event Notification Types and Destinations (p. 475)
• Configuring Notifications with Object Key Name Filtering (p. 476)
• Granting Permissions to Publish Event Notification Messages to a Destination (p. 481)
• Example Walkthrough 1: Configure a Bucket for Notifications (Message Destination: SNS Topic and SQS Queue) (p. 483)
• Example Walkthrough 2: Configure a Bucket for Notifications (Message Destination: AWS Lambda) (p. 489)
• Event Message Structure (p. 489)

Overview

Currently, Amazon S3 can publish the following events:
• A new object created event—Amazon S3 supports multiple APIs to create objects. You can request notification when only a specific API is used (for example, s3:ObjectCreated:Put), or you can use a wildcard (for example, s3:ObjectCreated:*) to request notification when an object is created regardless of the API used.

• An object removal event—Amazon S3 supports deletes of versioned and unversioned objects. For information about object versioning, see Object Versioning (p. 106) and Using Versioning (p. 423).

  You can request notification when an object is deleted or a versioned object is permanently deleted by using the s3:ObjectRemoved:Delete event type. Or you can request notification when a delete marker is created for a versioned object by using s3:ObjectRemoved:DeleteMarkerCreated. You can also use a wildcard, s3:ObjectRemoved:*, to request notification anytime an object is deleted. For information about deleting versioned objects, see Deleting Object Versions (p. 437).

• A Reduced Redundancy Storage (RRS) object lost event—Amazon S3 sends a notification message when it detects that an object of the RRS storage class has been lost.

For a list of supported event types, see Supported Event Types (p. 475).

Amazon S3 supports the following destinations where it can publish events:

• Amazon Simple Notification Service (Amazon SNS) topic

  Amazon SNS is a flexible, fully managed push messaging service. Using this service, you can push messages to mobile devices or distributed services. With SNS you can publish a message once and deliver it one or more times. An SNS topic is an access point that recipients can dynamically subscribe to in order to receive event notifications. For more information about SNS, go to the Amazon SNS product detail page.

• Amazon Simple Queue Service (Amazon SQS) queue

  Amazon SQS is a scalable and fully managed message queuing service. You can use SQS to transmit any volume of data without requiring other services to be always available. In your notification configuration, you can request that Amazon S3 publish events to an SQS queue. For more information about SQS, go to the Amazon SQS product detail page.

• AWS Lambda

  AWS Lambda is a compute service that makes it easy for you to build applications that respond quickly to new information. AWS Lambda runs your code in response to events such as image uploads, in-app activity, website clicks, or outputs from connected devices. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back end that operates at AWS scale, performance, and security. With AWS Lambda you can easily create discrete, event-driven applications that execute only when needed and scale automatically from a few requests per day to thousands per second.

  AWS Lambda can run custom code in response to Amazon S3 bucket events. You upload your custom code to AWS Lambda and create what is called a Lambda function. When Amazon S3 detects an event of a specific type (for example, an object created event), it can publish the event to AWS Lambda and invoke your function in Lambda. In response, AWS Lambda executes your function. For more information, go to the AWS Lambda product detail page.

The following sections offer more detail about how to enable event notifications on a bucket. The subtopics also provide example walkthroughs to help you explore the notification feature:

• Example Walkthrough 1: Configure a Bucket for Notifications (Message Destination: SNS Topic and SQS Queue) (p. 483)
• Example Walkthrough 2: Configure a Bucket for Notifications (Message Destination: AWS Lambda) (p. 489)

How to Enable Event Notifications

Enabling notifications is a bucket-level operation; that is, you store notification configuration information in the notification subresource associated with a bucket. You can use either of the following methods to manage the notification configuration:

• Using the Amazon S3 console

  The console UI enables you to set a notification configuration on a bucket without having to write any code. For instructions, go to Enabling Event Notifications in the Amazon Simple Storage Service Console User Guide.

• Programmatically using the AWS SDKs

  Note
  If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests.

  Internally, both the console and the SDKs call the Amazon S3 REST API to manage notification subresources associated with the bucket. For notification configuration examples that use the AWS SDKs, see the walkthrough links provided in the preceding section.

Regardless of the method you use, Amazon S3 stores the notification configuration as XML in the notification subresource associated with a bucket. For information about bucket subresources, see Bucket Configuration Options (p. 61). By default, notifications are not enabled for any type of event. Therefore, initially the notification subresource stores an empty configuration.

<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
</NotificationConfiguration>

To enable notifications for events of specific types, you replace the XML with the appropriate configuration that identifies the event types you want Amazon S3 to publish and the destination where you want the events published. For each destination, you add a corresponding XML configuration. For example:

• Publish event messages to an SQS queue—To set an SQS queue as the notification destination for one or more event types, you add the QueueConfiguration.

<NotificationConfiguration>
  <QueueConfiguration>
    <Id>optional-id-string</Id>
    <Queue>sqs-queue-arn</Queue>
    <Event>event-type</Event>
    <Event>event-type</Event>
     ...
  </QueueConfiguration>
   ...
</NotificationConfiguration>

• Publish event messages to an SNS topic—To set an SNS topic as the notification destination for specific event types, you add the TopicConfiguration.

<NotificationConfiguration>
  <TopicConfiguration>
    <Id>optional-id-string</Id>
    <Topic>sns-topic-arn</Topic>
    <Event>event-type</Event>
    <Event>event-type</Event>
     ...
  </TopicConfiguration>
   ...
</NotificationConfiguration>

• Invoke the AWS Lambda function and provide an event message as an argument—To set a Lambda function as the notification destination for specific event types, you add the CloudFunctionConfiguration.

<NotificationConfiguration>
  <CloudFunctionConfiguration>
    <Id>optional-id-string</Id>
    <CloudFunction>cloud-function-arn</CloudFunction>
    <Event>event-type</Event>
    <Event>event-type</Event>
     ...
  </CloudFunctionConfiguration>
   ...
</NotificationConfiguration>

To remove all notifications configured on a bucket, you save an empty <NotificationConfiguration> element in the notification subresource.

When Amazon S3 detects an event of the specific type, it publishes a message with the event information. For more information, see Event Message Structure (p. 489).
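As a complement to the XML shown above, the following sketch uses the AWS SDK for Java to read the notification subresource and then clear it by saving an empty configuration. The bucket name is a placeholder, and the default credential profile is assumed.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;

public class InspectAndClearNotifications {
    public static void main(String[] args) {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        String bucketName = "examplebucket";   // placeholder

        // Read the notification subresource; an empty map means notifications are not enabled.
        BucketNotificationConfiguration current =
                s3client.getBucketNotificationConfiguration(bucketName);
        System.out.println("Configured destinations: " + current.getConfigurations().keySet());

        // Saving an empty configuration removes all notifications from the bucket.
        s3client.setBucketNotificationConfiguration(bucketName,
                new BucketNotificationConfiguration());
    }
}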
Event Notification Types and Destinations

This section describes the event notification types that are supported by Amazon S3 and the types of destinations where the notifications can be published.

Supported Event Types

Amazon S3 can publish events of the following types. You specify these event types in the notification configuration.

• s3:ObjectCreated:*, s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, s3:ObjectCreated:CompleteMultipartUpload

  Amazon S3 APIs such as PUT, POST, and COPY can create an object. Using these event types, you can enable notification when an object is created using a specific API, or you can use the s3:ObjectCreated:* event type to request notification regardless of the API that was used to create an object. You will not receive event notifications from failed operations.

• s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated

  By using the ObjectRemoved event types, you can enable notification when an object or a batch of objects is removed from a bucket. You can request notification when an object is deleted or a versioned object is permanently deleted by using the s3:ObjectRemoved:Delete event type. Or you can request notification when a delete marker is created for a versioned object by using s3:ObjectRemoved:DeleteMarkerCreated. For information about deleting versioned objects, see Deleting Object Versions (p. 437). You can also use a wildcard, s3:ObjectRemoved:*, to request notification anytime an object is deleted. You will not receive event notifications from automatic deletes from lifecycle policies or from failed operations.

• s3:ReducedRedundancyLostObject

  You can use this event type to request that Amazon S3 send a notification message when it detects that an object of the RRS storage class has been lost.

Supported Destinations

Amazon S3 can send event notification messages to the following destinations. You specify the ARN value of these destinations in the notification configuration.

• Publish event messages to an Amazon Simple Notification Service (Amazon SNS) topic
• Publish event messages to an Amazon Simple Queue Service (Amazon SQS) queue
• Publish event messages to AWS Lambda by invoking a Lambda function and providing the event message as an argument

You must grant Amazon S3 permission to post messages to an Amazon SNS topic or an Amazon SQS queue. You must also grant Amazon S3 permission to invoke an AWS Lambda function on your behalf. For information about granting these permissions, see Granting Permissions to Publish Event Notification Messages to a Destination (p. 481).

Configuring Notifications with Object Key Name Filtering

You can configure notifications to be filtered by the prefix and suffix of the key name of objects. For example, you can set up a configuration so that you are sent a notification only when image files with a "jpg" extension are added to a bucket. Or you can have a configuration that delivers a notification to an Amazon SNS topic when an object with the prefix "images" is added to the bucket, while having notifications for objects with a "logs" prefix in the same bucket delivered to an AWS Lambda function.
You can set up notification configurations that use object key name filtering in the Amazon S3 console and by using Amazon S3 APIs through the AWS SDKs or the REST APIs directly. For information about using the console UI to set a notification configuration on a bucket, go to Enabling Event Notifications in the Amazon Simple Storage Service Console User Guide.

Amazon S3 stores the notification configuration as XML in the notification subresource associated with a bucket, as described in How to Enable Event Notifications (p. 473). You use the Filter XML structure to define the rules for notifications to be filtered by the prefix and/or suffix of an object key name. For information about the details of the Filter XML structure, see PUT Bucket notification in the Amazon Simple Storage Service API Reference.

Notification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping. The following sections have examples of valid notification configurations with object key name filtering, and examples of notification configurations that are invalid because of prefix/suffix overlapping.

Examples of Valid Notification Configurations with Object Key Name Filtering

The following notification configuration contains a queue configuration identifying an Amazon SQS queue for Amazon S3 to publish events of the s3:ObjectCreated:Put type to. The events will be published whenever an object that has a prefix of "images" and a "jpg" suffix is PUT to a bucket.

<NotificationConfiguration>
  <QueueConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>images</Value>
              </FilterRule>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>jpg</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <Queue>arn:aws:sqs:us-west-2:444455556666:s3notificationqueue</Queue>
      <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
</NotificationConfiguration>

The following notification configuration has multiple non-overlapping prefixes. The configuration defines that notifications for PUT requests in the "images" folder will go to queue A, while notifications for PUT requests in the "logs" folder will go to queue B.

<NotificationConfiguration>
  <QueueConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>images</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <Queue>arn:aws:sqs:us-west-2:444455556666:sqs-queue-A</Queue>
      <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
  <QueueConfiguration>
      <Id>2</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>logs</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <Queue>arn:aws:sqs:us-west-2:444455556666:sqs-queue-B</Queue>
      <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
</NotificationConfiguration>
The following notification configuration has multiple non-overlapping suffixes. The configuration defines that all "jpg" images newly added to the bucket will be processed by Lambda cloud-function-A, and all newly added "png" images will be processed by cloud-function-B. The suffixes "png" and "jpg" are not overlapping even though they have the same last letter. Two suffixes are considered overlapping if a given string can end with both suffixes. A string cannot end with both "png" and "jpg", so the suffixes in the example configuration are not overlapping suffixes.

<NotificationConfiguration>
  <CloudFunctionConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>jpg</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-A</CloudFunction>
      <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
  <CloudFunctionConfiguration>
      <Id>2</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>png</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-B</CloudFunction>
      <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
</NotificationConfiguration>

Your notification configurations that use Filter cannot define filtering rules with overlapping prefixes for the same event types, unless the overlapping prefixes are used with suffixes that do not overlap. The following example configuration shows how objects created with a common prefix but non-overlapping suffixes can be delivered to different destinations.

<NotificationConfiguration>
  <CloudFunctionConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>images</Value>
              </FilterRule>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>jpg</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-A</CloudFunction>
      <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
  <CloudFunctionConfiguration>
      <Id>2</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>images</Value>
              </FilterRule>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>png</Value>
              </FilterRule>
          </S3Key>
      </Filter>
      <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-B</CloudFunction>
      <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
</NotificationConfiguration>
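When you set these configurations through the AWS SDK for Java rather than raw XML, the Filter, S3KeyFilter, and FilterRule model classes express the same rules. The following sketch mirrors the first valid example above; the queue ARN, bucket name, and credential profile are placeholders and assumptions, not values from this guide's walkthroughs.

import java.util.EnumSet;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;
import com.amazonaws.services.s3.model.Filter;
import com.amazonaws.services.s3.model.QueueConfiguration;
import com.amazonaws.services.s3.model.S3Event;
import com.amazonaws.services.s3.model.S3KeyFilter;

public class FilteredNotificationExample {
    public static void main(String[] args) {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Notify the queue only for PUTs of keys that start with "images" and end with "jpg".
        QueueConfiguration queueConfiguration = new QueueConfiguration(
                "arn:aws:sqs:us-west-2:444455556666:s3notificationqueue",
                EnumSet.of(S3Event.ObjectCreatedByPut));
        queueConfiguration.setFilter(new Filter()
                .withS3KeyFilter(new S3KeyFilter().withFilterRules(
                        S3KeyFilter.FilterRuleName.Prefix.newRule("images"),
                        S3KeyFilter.FilterRuleName.Suffix.newRule("jpg"))));

        BucketNotificationConfiguration configuration = new BucketNotificationConfiguration();
        configuration.addConfiguration("filteredQueueConfig", queueConfiguration);
        s3client.setBucketNotificationConfiguration("examplebucket", configuration);
    }
}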
Examples of Notification Configurations with Invalid Prefix/Suffix Overlapping

For the most part, your notification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or overlapping combinations of prefixes and suffixes for the same event types. (You can have overlapping prefixes as long as the suffixes do not overlap. For an example, see Configuring Notifications with Object Key Name Filtering (p. 476).)

You can use overlapping object key name filters with different event types. For example, you could create a notification configuration that uses the prefix "image" for the ObjectCreated:Put event type and the prefix "image" for the ObjectRemoved:* event type.

You will get an error if you try to save a notification configuration that has invalid overlapping name filters for the same event types, whether you use the Amazon S3 console or the Amazon S3 API. This section shows examples of notification configurations that are invalid because of overlapping name filters.

Any existing notification configuration rule is assumed to have a default prefix and suffix that match any other prefix and suffix, respectively. The following notification configuration is invalid because it has overlapping prefixes: the root prefix overlaps with any other prefix. (The same thing would be true if we were using suffix instead of prefix in this example; the root suffix overlaps with any other suffix.)

<NotificationConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-notification-one</Topic>
          <Event>s3:ObjectCreated:*</Event>
     </TopicConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-notification-two</Topic>
          <Event>s3:ObjectCreated:*</Event>
          <Filter>
              <S3Key>
                  <FilterRule>
                      <Name>prefix</Name>
                      <Value>images</Value>
                  </FilterRule>
              </S3Key>
          </Filter>
     </TopicConfiguration>
</NotificationConfiguration>

The following notification configuration is invalid because it has overlapping suffixes. Two suffixes are considered overlapping if a given string can end with both suffixes. A string can end with both "jpg" and "pg", so the suffixes are overlapping. (The same is true for prefixes; two prefixes are considered overlapping if a given string can begin with both prefixes.)

<NotificationConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-one</Topic>
          <Event>s3:ObjectCreated:*</Event>
          <Filter>
              <S3Key>
                  <FilterRule>
                      <Name>suffix</Name>
                      <Value>jpg</Value>
                  </FilterRule>
              </S3Key>
          </Filter>
     </TopicConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-two</Topic>
          <Event>s3:ObjectCreated:Put</Event>
          <Filter>
              <S3Key>
                  <FilterRule>
                      <Name>suffix</Name>
                      <Value>pg</Value>
                  </FilterRule>
              </S3Key>
          </Filter>
     </TopicConfiguration>
</NotificationConfiguration>

The following notification configuration is invalid because it has overlapping prefixes and suffixes.

<NotificationConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-one</Topic>
          <Event>s3:ObjectCreated:*</Event>
          <Filter>
              <S3Key>
                  <FilterRule>
                      <Name>prefix</Name>
                      <Value>images</Value>
                  </FilterRule>
                  <FilterRule>
                      <Name>suffix</Name>
                      <Value>jpg</Value>
                  </FilterRule>
              </S3Key>
          </Filter>
     </TopicConfiguration>
     <TopicConfiguration>
          <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-two</Topic>
          <Event>s3:ObjectCreated:Put</Event>
          <Filter>
              <S3Key>
                  <FilterRule>
                      <Name>suffix</Name>
                      <Value>jpg</Value>
                  </FilterRule>
              </S3Key>
          </Filter>
     </TopicConfiguration>
</NotificationConfiguration>

Granting Permissions to Publish Event Notification Messages to a Destination

Before Amazon S3 can publish messages to a destination, you must grant the Amazon S3 principal the necessary permissions to call the relevant API to publish messages to an SNS topic, an SQS queue, or a Lambda function.

Granting Permissions to Invoke an AWS Lambda Function

Amazon S3 publishes event messages to AWS Lambda by invoking a Lambda function and providing the event message as an argument.

When you use the Amazon S3 console to configure event notifications on an Amazon S3 bucket for a Lambda function, the console sets up the necessary permissions on the Lambda function so that Amazon S3 has permission to invoke the function from the bucket. For more information, see Enabling Event Notifications in the Amazon Simple Storage Service Console User Guide.

You can also grant Amazon S3 permission from AWS Lambda to invoke your Lambda function. For more information, see Tutorial: Using AWS Lambda with Amazon S3 in the AWS Lambda Developer Guide.

Granting Permissions to Publish Messages to an SNS Topic or an SQS Queue

You attach an IAM policy to the destination SNS topic or SQS queue to grant Amazon S3 permission to publish messages to the SNS topic or SQS queue.

Example of an IAM policy that you attach to the destination SNS topic:

{
 "Version": "2008-10-17",
 "Id": "example-ID",
 "Statement": [
  {
   "Sid": "example-statement-ID",
   "Effect": "Allow",
   "Principal": {
     "Service": "s3.amazonaws.com"
   },
   "Action": [
    "SNS:Publish"
   ],
   "Resource": "SNS-ARN",
   "Condition": {
      "ArnLike": {
      "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
    }
   }
  }
 ]
}

Example of an IAM policy that you attach to the destination SQS queue:

{
 "Version": "2008-10-17",
 "Id": "example-ID",
 "Statement": [
  {
   "Sid": "example-statement-ID",
   "Effect": "Allow",
   "Principal": {
     "AWS": "*"
   },
   "Action": [
    "SQS:SendMessage"
   ],
   "Resource": "SQS-ARN",
   "Condition": {
      "ArnLike": {
      "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
    }
   }
  }
 ]
}

Note that for both the Amazon SNS and Amazon SQS IAM policies, you can specify the StringLike condition in the policy instead of the ArnLike condition:

"Condition": {
  "StringLike": {
    "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
  }
}

For an example of how to attach a policy to an SNS topic or an SQS queue, see Example Walkthrough 1: Configure a Bucket for Notifications (Message Destination: SNS Topic and SQS Queue) (p. 483).

For more information about permissions, see the following topics:

• Example Cases for Amazon SNS Access Control in the Amazon Simple Notification Service Developer Guide
• Access Control Using AWS Identity and Access Management (IAM) in the Amazon Simple Queue Service Developer Guide
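If you are attaching these policies with code rather than the console, the SQS policy above can be applied by setting the queue's Policy attribute. Below is a sketch using the AWS SDK for Java; the queue URL is a placeholder, and the policy string is assumed to be the JSON shown above with your ARNs substituted.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

public class AttachQueuePolicy {
    public static void main(String[] args) {
        AmazonSQS sqs = new AmazonSQSClient(new ProfileCredentialsProvider());

        String queueUrl = "https://sqs.us-west-2.amazonaws.com/444455556666/s3notificationqueue"; // placeholder
        String policyJson = "...";  // the SQS policy JSON shown above, with SQS-ARN and bucket-name filled in

        // The "Policy" queue attribute holds the access policy that allows s3.amazonaws.com to send messages.
        Map<String, String> attributes = new HashMap<String, String>();
        attributes.put("Policy", policyJson);
        sqs.setQueueAttributes(new SetQueueAttributesRequest(queueUrl, attributes));
    }
}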
Example Walkthrough 1: Configure a Bucket for Notifications (Message Destination: SNS Topic and SQS Queue)

Topics
• Walkthrough Summary (p. 483)
• Step 1: Create an Amazon SNS Topic (p. 484)
• Step 2: Create an Amazon SQS Queue (p. 484)
• Step 3: Add a Notification Configuration to Your Bucket (p. 485)
• Step 4: Test the Setup (p. 489)

Walkthrough Summary

In this walkthrough, you add a notification configuration on a bucket requesting Amazon S3 to:

• Publish events of the s3:ObjectCreated:* type to an Amazon SQS queue.
• Publish events of the s3:ReducedRedundancyLostObject type to an Amazon SNS topic.

For information about notification configuration, see Configuring Amazon S3 Event Notifications (p. 472).

You can do all these steps using the console, without writing any code. In addition, code examples using the AWS SDKs for Java and .NET are provided so you can add the notification configuration programmatically.

You will do the following in this walkthrough:

1. Create an Amazon SNS topic.

   Using the Amazon SNS console, you create an SNS topic and subscribe to the topic so that any events posted to it are delivered to you. You will specify email as the communications protocol. After you create a topic, Amazon SNS will send an email. You must click a link in the email to confirm the topic subscription.

   You will attach an access policy to the topic to grant Amazon S3 permission to post messages.

2. Create an Amazon SQS queue.

   Using the Amazon SQS console, you create an SQS queue. You can access any messages Amazon S3 sends to the queue programmatically, but for this walkthrough you will verify notification messages in the console.

   You will attach an access policy to the queue to grant Amazon S3 permission to post messages.

3. Add a notification configuration to a bucket.

Step 1: Create an Amazon SNS Topic

Follow these steps to create and subscribe to an Amazon Simple Notification Service (Amazon SNS) topic.

1. Using the Amazon SNS console, create a topic. For instructions, go to Create a Topic in the Amazon Simple Notification Service Developer Guide.

2. Subscribe to the topic. For this exercise, use email as the communications protocol. For instructions, go to Subscribe to a Topic in the Amazon Simple Notification Service Developer Guide.

   You will get an email requesting you to confirm your subscription to the topic. Confirm the subscription.

3. Replace the access policy attached to the topic with the following policy. You must update the policy by providing your SNS topic ARN and bucket name.

{
 "Version": "2008-10-17",
 "Id": "example-ID",
 "Statement": [
  {
   "Sid": "example-statement-ID",
   "Effect": "Allow",
   "Principal": {
     "AWS":"*"
   },
   "Action": [
    "SNS:Publish"
   ],
   "Resource": "SNS-topic-ARN",
   "Condition": {
      "ArnLike": {
      "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
    }
   }
  }
 ]
}

4. Note the topic ARN.

   The SNS topic you created is another resource in your AWS account, and it has a unique Amazon Resource Name (ARN). You will need this ARN in the next step. The ARN will be of the following format:

   arn:aws:sns:aws-region:account-id:topic-name
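The SNS side of this step can also be scripted. The following AWS SDK for Java sketch creates the topic, subscribes an email endpoint, and attaches the access policy shown above; the topic name, email address, and policy string are placeholders, and the policy is assumed to already contain your topic ARN and bucket name.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.SetTopicAttributesRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;

public class CreateNotificationTopic {
    public static void main(String[] args) {
        AmazonSNS sns = new AmazonSNSClient(new ProfileCredentialsProvider());

        // Create the topic and note its ARN (needed later in the notification configuration).
        String topicArn = sns.createTopic("s3-notification-topic").getTopicArn();
        System.out.println("Topic ARN: " + topicArn);

        // Subscribe an email endpoint; SNS sends a confirmation email that must be accepted.
        sns.subscribe(new SubscribeRequest(topicArn, "email", "you@example.com"));

        // Attach the access policy that allows Amazon S3 to publish to this topic.
        String policyJson = "...";  // the topic policy shown above, with SNS-topic-ARN and bucket-name filled in
        sns.setTopicAttributes(new SetTopicAttributesRequest(topicArn, "Policy", policyJson));
    }
}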
Step 2: Create an Amazon SQS Queue

Follow these steps to create and subscribe to an Amazon Simple Queue Service (Amazon SQS) queue.

1. Using the Amazon SQS console, create a queue. For instructions, go to Create a Queue in the Amazon Simple Queue Service Getting Started Guide.

2. Replace the access policy attached to the queue with the following policy. (In the SQS console, select the queue, and on the Permissions tab, click Edit Policy Document (Advanced).)

{
 "Version": "2008-10-17",
 "Id": "example-ID",
 "Statement": [
  {
   "Sid": "example-statement-ID",
   "Effect": "Allow",
   "Principal": {
     "AWS":"*"
   },
   "Action": [
    "SQS:SendMessage"
   ],
   "Resource": "SQS-queue-ARN",
   "Condition": {
      "ArnLike": {
      "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
    }
   }
  }
 ]
}

3. Note the queue ARN.

   The SQS queue you created is another resource in your AWS account, and it has a unique Amazon Resource Name (ARN). You will need this ARN in the next step. The ARN will be of the following format:

   arn:aws:sqs:aws-region:account-id:queue-name

Step 3: Add a Notification Configuration to Your Bucket

You can enable bucket notifications either by using the Amazon S3 console or programmatically by using the AWS SDKs. Choose any one of these options to configure notifications on your bucket. This section provides code examples using the AWS SDKs for Java and .NET.

Step 3 (option a): Enable Notifications on a Bucket Using the Console

Using the Amazon S3 console, add a notification configuration requesting Amazon S3 to:

• Publish events of the s3:ObjectCreated:* type to your Amazon SQS queue.
• Publish events of the s3:ReducedRedundancyLostObject type to your Amazon SNS topic.

After you save the notification configuration, Amazon S3 posts a test message, which you will get via email.

For instructions, go to Enabling Event Notifications in the Amazon Simple Storage Service Console User Guide.

Step 3 (option b): Enable Notifications on a Bucket Using the AWS SDK for .NET

The following C# code example provides a complete code listing that adds a notification configuration to a bucket. You will need to update the code and provide your bucket name and SNS topic ARN. For information about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 566).

using System;
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class EnableNotifications
    {
        static string bucketName = "***bucket name***";
        static string snsTopic   = "***SNS topic ARN***";
        static string sqsQueue   = "***SQS queue ARN***";

        static string putEventType       = "s3:ObjectCreated:Put";
        static string rrsObjectLostType  = "s3:ReducedRedundancyLostObject";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                Console.WriteLine("Enabling Notification on a bucket");
                EnableNotification(client);
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableNotification(IAmazonS3 client)
        {
            try
            {
                List<Amazon.S3.Model.TopicConfiguration> topicConfigurations =
                    new List<TopicConfiguration>();
                topicConfigurations.Add(new TopicConfiguration()
                {
                    Event = rrsObjectLostType,
                    Topic = snsTopic
                });

                List<Amazon.S3.Model.QueueConfiguration> queueConfigurations =
                    new List<QueueConfiguration>();
                queueConfigurations.Add(new QueueConfiguration()
                {
                    Events = new List<string> { putEventType },
                    Queue = sqsQueue
                });

                PutBucketNotificationRequest request = new PutBucketNotificationRequest
                {
                    BucketName = bucketName,
                    TopicConfigurations = topicConfigurations,
                    QueueConfigurations = queueConfigurations
                };

                PutBucketNotificationResponse response = client.PutBucketNotification(request);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine("To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when enabling notifications.",
                        amazonS3Exception.Message);
                }
            }
        }
    }
}
Step 3 (option c): Enable Notifications on a Bucket Using the AWS SDK for Java

The following Java code example provides a complete code listing that adds a notification configuration to a bucket. You will need to update the code and provide your bucket name and SNS topic ARN. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p. 564).

import java.io.IOException;
import java.util.Collection;
import java.util.EnumSet;
import java.util.LinkedList;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;
import com.amazonaws.services.s3.model.TopicConfiguration;
import com.amazonaws.services.s3.model.QueueConfiguration;
import com.amazonaws.services.s3.model.S3Event;
import com.amazonaws.services.s3.model.SetBucketNotificationConfigurationRequest;

public class NotificationConfigurationOnABucket {
    private static String bucketName  = "*** bucket name ***";
    private static String snsTopicARN = "*** SNS Topic ARN ***";
    private static String sqsQueueARN = "*** SQS Queue ARN ***";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Setting notification configuration on a bucket.\n");

            BucketNotificationConfiguration notificationConfiguration =
                    new BucketNotificationConfiguration();
            notificationConfiguration.addConfiguration(
                    "snsTopicConfig",
                    new TopicConfiguration(snsTopicARN, EnumSet
                            .of(S3Event.ReducedRedundancyLostObject)));
            notificationConfiguration.addConfiguration(
                    "sqsQueueConfig",
                    new QueueConfiguration(sqsQueueARN, EnumSet
                            .of(S3Event.ObjectCreated)));
            SetBucketNotificationConfigurationRequest request =
                    new SetBucketNotificationConfigurationRequest(bucketName,
                            notificationConfiguration);
            s3client.setBucketNotificationConfiguration(request);
        } catch (AmazonS3Exception ase) {
            System.out.println("Caught an AmazonServiceException, which "
                    + "means your request made it "
                    + "to Amazon S3, but was rejected with an error response "
                    + "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            System.out.println("Error XML:        " + ase.getErrorResponseXml());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which "
                    + "means the client encountered "
                    + "an internal error while trying to "
                    + "communicate with S3, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Step 4: Test the Setup

Now you can test the setup by uploading an object to your bucket and verifying the event notification in the Amazon SQS console. For instructions, go to Receiving a Message in the Amazon Simple Queue Service Getting Started Guide.

Example Walkthrough 2: Configure a Bucket for Notifications (Message Destination: AWS Lambda)

For an example of using Amazon S3 notifications with AWS Lambda, see Using AWS Lambda with Amazon S3 in the AWS Lambda Developer Guide.

Event Message Structure

The notification message Amazon S3 sends to publish an event is a JSON message with the following structure. Note the following:

• The responseElements key value is useful if you want to trace the request by following up with Amazon S3 support. Both x-amz-request-id and x-amz-id-2 help Amazon S3 trace the individual request. These values are the same as those that Amazon S3 returned in the response to your original PUT request, which initiated the event.

• The s3 key provides information about the bucket and object involved in the event. Note that the object key name value is URL encoded. For example, "red flower.jpg" becomes "red+flower.jpg".

• The sequencer key provides a way to determine the sequence of events. Event notifications are not guaranteed to arrive in the order that the events occurred. However, notifications from events that create objects (PUTs) and delete objects contain a sequencer, which can be used to determine the order of events for a given object key.

  If you compare the sequencer strings from two event notifications on the same object key, the event notification with the greater sequencer hexadecimal value is the event that occurred later. If you are using event notifications to maintain a separate database or index of your Amazon S3 objects, you will probably want to compare and store the sequencer values as you process each event notification.

  Note that:
  • sequencer cannot be used to determine order for events on different object keys.
  • The sequencers can be of different lengths, so to compare these values you first right-pad the shorter value with zeros and then do a lexicographical comparison.
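The padding-then-comparing rule in the last bullet is easy to get wrong, so here is a small helper, written as a plain Java sketch, that orders two sequencer values for the same object key. The class and method names are illustrative, not part of any SDK.

public class SequencerComparator {
    // Returns a negative value if a occurred before b, positive if after, 0 if equal.
    public static int compareSequencers(String a, String b) {
        int width = Math.max(a.length(), b.length());
        // Right-pad the shorter hexadecimal string with zeros, then compare lexicographically.
        return rightPad(a, width).compareTo(rightPad(b, width));
    }

    private static String rightPad(String value, int width) {
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < width) {
            sb.append('0');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "0055AED6DCD90281E5" is the sequencer from the sample PUT event shown below.
        System.out.println(compareSequencers("0055AED6DCD90281E5", "0055AED6DCD9028200") < 0);
    }
}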
{
   "Records":[
      {
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, when S3 finished processing the request",
         "eventName":"event-type",
         "userIdentity":{
            "principalId":"Amazon-customer-ID-of-the-user-who-caused-the-event"
         },
         "requestParameters":{
            "sourceIPAddress":"ip-address-where-request-came-from"
         },
         "responseElements":{
            "x-amz-request-id":"Amazon S3 generated request ID",
            "x-amz-id-2":"Amazon S3 host that processed the request"
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"ID found in the bucket notification configuration",
            "bucket":{
               "name":"bucket-name",
               "ownerIdentity":{
                  "principalId":"Amazon-customer-ID-of-the-bucket-owner"
               },
               "arn":"bucket-ARN"
            },
            "object":{
               "key":"object-key",
               "size":"object-size",
               "eTag":"object eTag",
               "versionId":"object version if bucket is versioning-enabled, otherwise null",
               "sequencer":"a string representation of a hexadecimal value used to determine event sequence, only used with PUTs and DELETEs"
            }
         }
      },
      {
          Additional events
      }
   ]
}

The following are example messages.

• Test message—When you configure an event notification on a bucket, Amazon S3 sends the following test message:

{
   "Service":"Amazon S3",
   "Event":"s3:TestEvent",
   "Time":"2014-10-13T15:57:02.089Z",
   "Bucket":"bucket-name",
   "RequestId":"5582815E1AEA5ADF",
   "HostId":"8cLeGAmw098X5cv4Zkwcmo8vvZa3eH3eKxsPzbB9wrR+YstdA6Knx4Ip8EXAMPLE"
}

• Example message when an object is created using a PUT request—The following message is an example of a message Amazon S3 sends to publish an s3:ObjectCreated:Put event:

{
   "Records":[
      {
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7SJRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{
               "name":"mybucket",
               "ownerIdentity":{
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::mybucket"
            },
            "object":{
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVOnfljtsv6qko",
               "sequencer":"0055AED6DCD90281E5"
            }
         }
      }
   ]
}
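Because the object key in the message is URL encoded (as noted above, "red flower.jpg" arrives as "red+flower.jpg"), consumers usually decode it before using it in subsequent requests. A minimal Java sketch, assuming the key string has already been extracted from the JSON:

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DecodeEventKey {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Key as delivered in the event notification (URL encoded).
        String encodedKey = "red+flower.jpg";

        // URLDecoder turns '+' back into a space and decodes %-escapes.
        String objectKey = URLDecoder.decode(encodedKey, "UTF-8");
        System.out.println(objectKey);  // prints: red flower.jpg
    }
}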
Cross-Region Replication

Cross-region replication is a bucket-level feature that enables automatic, asynchronous copying of objects across buckets in different AWS regions. To activate this feature, you add a replication configuration to your source bucket. In the configuration, you provide information such as the destination bucket where you want objects replicated to. You can request Amazon S3 to replicate all or a subset of objects with specific key name prefixes. For example, you can configure cross-region replication to replicate only objects with the key name prefix Tax/. This causes Amazon S3 to replicate objects with a key such as Tax/doc1 or Tax/doc2, but not an object with the key Legal/doc3.

The object replicas in the destination bucket are exact replicas of the objects in the source bucket. They have the same key names and the same metadata—for example, creation time, owner, user-defined metadata, version ID, ACL, and storage class (assuming you did not explicitly specify a different storage class for object replicas in the replication configuration). Amazon S3 encrypts all data in transit across AWS regions using SSL. You can also optionally specify the storage class to use when Amazon S3 creates object replicas (if you don't specify this, Amazon S3 assumes the storage class of the source object).

Use-case Scenarios

You might configure cross-region replication on a bucket for various reasons, including these:

• Compliance requirements – Although, by default, Amazon S3 stores your data across multiple geographically distant Availability Zones, compliance requirements might dictate that you store data at even greater distances. Cross-region replication allows you to replicate data between distant AWS regions to satisfy these compliance requirements.

• Minimize latency – Your customers are in two geographic locations. To minimize latency in accessing objects, you can maintain object copies in AWS regions that are geographically closer to your users.

• Operational reasons – You have compute clusters in two different regions that analyze the same set of objects. You might choose to maintain object copies in those regions.

Optionally, if you have cost considerations, you can direct Amazon S3 to use the STANDARD_IA storage class for object replicas. For more information about cost considerations, see Amazon S3 Pricing.

Requirements

Requirements for cross-region replication:

• The source and destination buckets must be versioning-enabled. For more information about versioning, see Using Versioning (p. 423).

• The source and destination buckets must be in different AWS regions. For a list of AWS regions where you can create a bucket, see Regions and Endpoints in the AWS General Reference.

• You can replicate objects from a source bucket to only one destination bucket.

• Amazon S3 must have permission to replicate objects from the source bucket to the destination bucket on your behalf.

  You can grant these permissions by creating an IAM role that Amazon S3 can assume. You must grant this role permissions for Amazon S3 actions so that when Amazon S3 assumes this role, it can perform replication tasks. For more information about IAM roles, see Create an IAM Role (p. 495).

• If the source bucket owner also owns the object, the bucket owner has full permissions to replicate the object. If not, the source bucket owner must have permission for the Amazon S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and the object ACL. For more information about Amazon S3 actions, see Specifying Permissions in a Policy (p. 312). For more information about resources and ownership, see Amazon S3 Resources (p. 267).

• If you are setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the source bucket owner must have permission to replicate objects in the destination bucket.

  The destination bucket owner needs to grant these permissions via a bucket policy. For an example, see Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts (p. 501).
Related Topics

What Is and Is Not Replicated (p. 493)
How to Set Up Cross-Region Replication (p. 495)
How to Find the Replication Status of an Object (p. 509)
Cross-Region Replication and Other Bucket Configurations (p. 511)
Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account (p. 500)
Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts (p. 501)

What Is and Is Not Replicated

This section explains what Amazon S3 replicates and what it does not replicate after you add a replication configuration on a bucket.

What Is Replicated

Amazon S3 replicates the following:

• Any new objects created after you add a replication configuration, with the exceptions described in the next section.

• Objects created with server-side encryption using the Amazon S3-managed encryption key. The replicated copy of the object is also encrypted using server-side encryption with the Amazon S3-managed encryption key.

• Only objects in the source bucket for which the bucket owner has permission to read objects and read ACLs. For more information about resource ownership, see About the Resource Owner (p. 267).

• Any object ACL updates, although there can be some delay before Amazon S3 can bring the two in sync. This applies only to objects created after you add a replication configuration to the bucket.

Delete Operation and Cross-Region Replication

If you delete an object from the source bucket, the cross-region replication behavior is as follows:

• If a DELETE request is made without specifying an object version ID, Amazon S3 adds a delete marker, which cross-region replication replicates to the destination bucket. For more information about versioning and delete markers, see Using Versioning (p. 423).

• If a DELETE request specifies a particular object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it does not replicate the deletion in the destination bucket (in other words, it does not delete the same object version from the destination bucket). This behavior protects data from malicious deletions.

What Is Not Replicated

Amazon S3 does not replicate the following:

• Amazon S3 does not retroactively replicate objects that existed before you added the replication configuration.

• Objects created with server-side encryption using either customer-provided (SSE-C) or AWS KMS-managed (SSE-KMS) encryption keys are not replicated. For more information about server-side encryption, see Protecting Data Using Server-Side Encryption (p. 381).

  Amazon S3 does not keep the encryption keys you provide after the object is created in the source bucket, so it cannot decrypt the object for replication, and therefore it does not replicate the object.

• Amazon S3 does not replicate objects in the source bucket for which the bucket owner does not have permissions. If the object owner is different from the bucket owner, see Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control (p. 340).
• Updates to bucket-level subresources are not replicated. This allows you to have different bucket configurations on the source and destination buckets. For more information about resources, see Amazon S3 Resources (p. 267).

• Only customer actions are replicated. Actions performed by lifecycle configuration are not replicated. For more information about lifecycle configuration, see Object Lifecycle Management (p. 109).

  For example, if lifecycle configuration is enabled only on your source bucket, Amazon S3 creates delete markers for expired objects but does not replicate those markers. However, you can have the same lifecycle configuration on both the source and destination buckets if you want the same lifecycle actions to happen to both buckets.

• Objects in the source bucket that are replicas created by another cross-region replication are not replicated.

  Suppose you configure cross-region replication where bucket A is the source and bucket B is the destination. Now suppose you add another cross-region replication where bucket B is the source and bucket C is the destination. In this case, objects in bucket B that are replicas of objects in bucket A will not be replicated to bucket C.

Related Topics

Cross-Region Replication (p. 492)
How to Set Up Cross-Region Replication (p. 495)
How to Find the Replication Status of an Object (p. 509)

How to Set Up Cross-Region Replication

To set up cross-region replication, you need two buckets—source and destination. These buckets must be versioning-enabled and in different AWS regions. For a list of AWS regions where you can create a bucket, see Regions and Endpoints in the AWS General Reference.

Important
If you have an object expiration lifecycle policy in your non-versioned bucket, and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.

You can replicate objects from a source bucket to only one destination bucket. If both of the buckets are owned by the same AWS account, do the following to set up cross-region replication from the source to the destination bucket:

• Create an IAM role to grant Amazon S3 permission to replicate objects on your behalf.
• Add a replication configuration on the source bucket.

In addition, if the source and destination buckets are owned by two different AWS accounts, the destination bucket owner must also add a bucket policy to grant the source bucket owner permissions to perform replication actions.

Create an IAM Role

By default, all Amazon S3 resources—buckets, objects, and related subresources—are private: only the resource owner can access the resource. So, Amazon S3 needs permission to read objects from the source bucket and replicate them to the destination bucket. You grant these permissions by creating an IAM role. When you create an IAM role, you attach the following role policies:

• A trust policy in which you trust Amazon S3 to assume the role, as shown:
  {
     "Version":"2012-10-17",
     "Statement":[
        {
           "Effect":"Allow",
           "Principal":{
              "Service":"s3.amazonaws.com"
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }

  Note
  The Principal in the policy identifies Amazon S3. For more information about IAM roles, see IAM Roles in the IAM User Guide.

• An access policy in which you grant the role permission to perform the replication task on your behalf. The following access policy grants these permissions:
  • The s3:GetReplicationConfiguration and s3:ListBucket permissions on the source bucket, so Amazon S3 can retrieve the replication configuration and list the bucket (the current permission model requires the s3:ListBucket permission to access delete markers).
  • The s3:GetObjectVersion and s3:GetObjectVersionAcl permissions on all objects in the versioning-enabled source bucket. This allows Amazon S3 to get a specific object version and the ACL on it.
  • The s3:ReplicateObject and s3:ReplicateDelete permissions on objects in the destination bucket, so that Amazon S3 can replicate objects or delete markers to the destination bucket. For information about delete markers, see Delete Operation and Cross-Region Replication (p 494).

  For a list of Amazon S3 actions, see Specifying Permissions in a Policy (p 312).

  {
     "Version":"2012-10-17",
     "Statement":[
        {
           "Effect":"Allow",
           "Action":[
              "s3:GetReplicationConfiguration",
              "s3:ListBucket"
           ],
           "Resource":[
              "arn:aws:s3:::source-bucket"
           ]
        },
        {
           "Effect":"Allow",
           "Action":[
              "s3:GetObjectVersion",
              "s3:GetObjectVersionAcl"
           ],
           "Resource":[
              "arn:aws:s3:::source-bucket/*"
           ]
        },
        {
           "Effect":"Allow",
           "Action":[
              "s3:ReplicateObject",
              "s3:ReplicateDelete"
           ],
           "Resource":"arn:aws:s3:::destination-bucket/*"
        }
     ]
  }
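If you prefer to create the role and attach these policies programmatically rather than with the console or AWS CLI, the following AWS SDK for Java sketch shows one possible way to do it. It is a minimal sketch, not part of this guide's walkthroughs; the role name (CrrRole), the inline policy name, and the trustPolicy and accessPolicy strings (the two JSON documents shown above) are placeholder assumptions you would replace with your own values.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient;
import com.amazonaws.services.identitymanagement.model.CreateRoleRequest;
import com.amazonaws.services.identitymanagement.model.PutRolePolicyRequest;

public class CreateReplicationRoleSketch {
    public static void main(String[] args) {
        // Placeholders for the two JSON policy documents shown above.
        String trustPolicy = "...";   // trust policy JSON
        String accessPolicy = "...";  // access policy JSON

        AmazonIdentityManagementClient iam =
            new AmazonIdentityManagementClient(new ProfileCredentialsProvider());

        // Create the role, trusting Amazon S3 to assume it.
        iam.createRole(new CreateRoleRequest()
            .withRoleName("CrrRole")                      // hypothetical role name
            .withAssumeRolePolicyDocument(trustPolicy));

        // Attach the replication permissions as an inline policy.
        iam.putRolePolicy(new PutRolePolicyRequest()
            .withRoleName("CrrRole")
            .withPolicyName("CrrRolePolicy")              // hypothetical policy name
            .withPolicyDocument(accessPolicy));
    }
}

The Walkthrough 2 section later in this topic shows the equivalent steps using the AWS CLI.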
Add Replication Configuration

When you add a replication configuration to a bucket, Amazon S3 stores the configuration as XML. The following are example configurations. For more information about the XML structure, see PUT Bucket replication in the Amazon Simple Storage Service API Reference.

Example 1: Replication Configuration with One Rule

The following replication configuration has one rule. It requests Amazon S3 to replicate all objects to the specified destination bucket. The rule specifies an empty prefix, indicating all objects. The configuration also specifies an IAM role Amazon S3 can assume to replicate objects on your behalf.

<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Prefix></Prefix>
    <Destination><Bucket>arn:aws:s3:::destination-bucket</Bucket></Destination>
  </Rule>
</ReplicationConfiguration>

If the <Rule> does not specify a storage class, Amazon S3 uses the storage class of the source object to create the object replica. You can optionally specify a storage class, as shown, which Amazon S3 uses to create replicas. Note that the <StorageClass> element cannot be empty.

<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Prefix></Prefix>
    <Destination>
      <Bucket>arn:aws:s3:::destination-bucket</Bucket>
      <StorageClass>storage-class</StorageClass>
    </Destination>
  </Rule>
</ReplicationConfiguration>

The storage class you specify can be any of the storage classes that Amazon S3 supports, except the GLACIER storage class. You can only transition objects to the GLACIER storage class using lifecycle. For more information, see PUT Bucket replication. For more information about lifecycle management, see Object Lifecycle Management (p 109). For more information about storage classes, see Storage Classes (p 103).

Example 2: Replication Configuration with Two Rules, Each Specifying a Key Name Prefix

The following replication configuration specifies two rules. The first rule requests Amazon S3 to replicate objects with the key name prefix TaxDocs. The second rule requests Amazon S3 to replicate objects with the key name prefix ProjectDocs. For example, Amazon S3 replicates objects with key names TaxDocs/doc1.pdf and ProjectDocs/project1.txt, but it does not replicate any object with the key name PersonalDoc/documentA. Note that both rules specify the same destination bucket.

<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Prefix>TaxDocs</Prefix>
    ...
  </Rule>
  <Rule>
    <Prefix>ProjectDocs</Prefix>
    ...
  </Rule>
</ReplicationConfiguration>

Note that you cannot specify overlapping prefixes. The following example configuration has two rules specifying the overlapping prefixes TaxDocs and TaxDocs/2015, which is not allowed.

<ReplicationConfiguration>
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Prefix>TaxDocs</Prefix>
    <Status>Enabled</Status>
    <Destination>
      <Bucket>arn:aws:s3:::destination-bucket</Bucket>
    </Destination>
  </Rule>
  <Rule>
    <Prefix>TaxDocs/2015</Prefix>
    <Status>Enabled</Status>
    <Destination>
      <Bucket>arn:aws:s3:::destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>

When adding a replication configuration to a bucket, you have two scenarios to consider, depending on who owns the source and destination buckets.

Scenario 1: Buckets Owned by the Same AWS Account

When both the source and destination buckets are owned by the same AWS account, you can use the Amazon S3 console to set up cross-region replication. Assuming you have source and destination buckets that are both versioning-enabled, you can use the console to add replication configuration on the source bucket. For more information, see the following topics:

• Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account (p 500)
• Enabling Cross-Region Replication in the Amazon Simple Storage Service Console User Guide

Scenario 2: Buckets Owned by Different AWS Accounts
When the source and destination buckets are owned by two different AWS accounts, you cannot add replication configuration using the console, because the console does not let you specify a destination bucket owned by another AWS account. Instead, you need to add the replication configuration programmatically, using the AWS SDKs or the AWS Command Line Interface. To do this, you specify a replication configuration as XML. The following is an example replication configuration.

<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::46173example:role/CrrRoleName</Role>
  <Rule>
    <Status>Enabled</Status>
    <Prefix>TaxDocs</Prefix>
    <Destination><Bucket>arn:aws:s3:::destination-bucket</Bucket></Destination>
  </Rule>
</ReplicationConfiguration>

The configuration requests Amazon S3 to replicate objects with the key prefix TaxDocs to the destination-bucket. The configuration also specifies an IAM role that Amazon S3 can assume to replicate objects on your behalf. For more information about the XML structure, see PUT Bucket replication in the Amazon Simple Storage Service API Reference.

Because the destination bucket is owned by another AWS account, the destination bucket owner must also grant the source bucket owner permissions to replicate (replicate and delete) objects, as shown:

{
   "Version":"2008-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":"arn:aws:iam::AWS account ID that owns the source bucket:root"
         },
         "Action":["s3:ReplicateObject", "s3:ReplicateDelete"],
         "Resource":"arn:aws:s3:::destination bucket/*"
      }
   ]
}

This bucket policy on the destination bucket grants the source bucket owner permissions for the Amazon S3 object operations (s3:ReplicateObject and s3:ReplicateDelete) on the destination bucket.

For an example walkthrough, see Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts (p 501).

Related Topics

Cross-Region Replication (p 492)
What Is and Is Not Replicated (p 493)
Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account (p 500)
Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts (p 501)
How to Find Replication Status of an Object (p 509)
Troubleshooting Cross-Region Replication in Amazon S3 (p 511)

Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account

In this section, you create two buckets (source and destination) in different AWS regions, enable versioning on both the buckets, and then configure cross-region replication on the source bucket.

1. Create two buckets:
   a. Create a source bucket in an AWS region. For example, US West (Oregon) (us-west-2). For instructions, see Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
   b. Create a destination bucket in another AWS region. For example, US East (N. Virginia) (us-east-1).
2. Enable versioning on both buckets. For instructions, see Enabling Bucket Versioning in the Amazon Simple Storage Service Console User Guide.
   Important
   If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy manages the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.
3. Enable cross-region replication on the source bucket. You decide whether you want to replicate all objects or only objects with a specific prefix (when using the console, think of this as deciding whether you want to replicate only objects from a specific folder). For instructions, see Enabling Cross-Region Replication in the Amazon Simple Storage Service Console User Guide.
4. Test the setup as follows:
   a. Create objects in the source bucket and verify that Amazon S3 replicated the objects in the destination bucket. The amount of time it takes for Amazon S3 to replicate an object depends on the object size. For information about finding replication status, see How to Find Replication Status of an Object (p 509). (A sketch that polls the replication status programmatically follows these steps.)
   b. Update the object's ACL in the source bucket and verify that the changes appear in the destination bucket. For instructions, see Editing Object Permissions in the Amazon Simple Storage Service Console User Guide.
   c. Update the object's metadata and verify that the changes appear in the destination bucket. For instructions, see Editing Object Metadata in the Amazon Simple Storage Service Console User Guide.

Remember that the replicas are exact copies of the objects in the source bucket.
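If you want to script the verification in step 4(a) instead of checking the console, the following AWS SDK for Java sketch uploads a test object and polls its replication status from the source bucket. It is a minimal sketch under assumed names: the bucket name, key, and local file name are placeholders, and the status values checked are those described in How to Find Replication Status of an Object (p 509).

import java.io.File;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.Headers;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class ReplicationStatusCheck {
    public static void main(String[] args) throws InterruptedException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Upload a test object to the source bucket (placeholder names).
        s3Client.putObject(new PutObjectRequest("source-bucket", "replication-test.jpg",
            new File("replication-test.jpg")));

        // Poll the object's replication status until it is COMPLETED or FAILED.
        while (true) {
            ObjectMetadata metadata = s3Client.getObjectMetadata(
                new GetObjectMetadataRequest("source-bucket", "replication-test.jpg"));
            String status = (String) metadata.getRawMetadataValue(Headers.OBJECT_REPLICATION_STATUS);
            System.out.println("Replication status: " + status);
            if ("COMPLETED".equals(status) || "FAILED".equals(status)) {
                break;
            }
            Thread.sleep(15000); // wait 15 seconds before checking again
        }
    }
}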
Related Topics

Cross-Region Replication (p 492)
Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts (p 501)
What Is and Is Not Replicated (p 493)
How to Find Replication Status of an Object (p 509)

Walkthrough 2: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts

In this walkthrough, you set up cross-region replication on the source bucket owned by one account to replicate objects in a destination bucket owned by another account.

The process is the same as setting up cross-region replication when both buckets are owned by the same account, except that you do one extra step—the destination bucket owner must create a bucket policy granting the source bucket owner permission for replication actions.

In this exercise, you perform all of the steps using the console, except creating an IAM role and setting a replication configuration on the source bucket. You perform these steps using either the AWS CLI or the AWS SDK for Java.

1. Create two buckets:
   a. Create a source bucket in an AWS region. For example, US West (Oregon) (us-west-2), in Account A. For instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.
   b. Create a destination bucket in another AWS region. For example, US East (N. Virginia) (us-east-1), in Account B.
2. Enable versioning on both the buckets. For instructions, see Enabling Bucket Versioning in the Amazon Simple Storage Service Console User Guide.
   Important
   If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy manages the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle Configuration for a Bucket with Versioning in the Amazon Simple Storage Service Console User Guide.
3. Add the following bucket policy on the destination bucket to allow the source bucket owner permission for replication actions:

{
   "Version":"2008-10-17",
   "Id":"",
   "Statement":[
      {
         "Sid":"Stmt123",
         "Effect":"Allow",
         "Principal":{
            "AWS":"arn:aws:iam::AWS-ID-Account-A:root"
         },
         "Action":["s3:ReplicateObject", "s3:ReplicateDelete"],
         "Resource":"arn:aws:s3:::destination-bucket/*"
      }
   ]
}

   For instructions, see Editing Bucket Permissions in the Amazon Simple Storage Service Console User Guide.
4. Create an IAM role in Account A. Then Account A specifies this role when adding replication configuration on the source bucket in the following step.
   Use the AWS CLI to create this IAM role. For instructions about how to set up the AWS CLI, see Setting Up the Tools for the Example Walkthroughs (p 281).
   a. Copy the following policy and save it to a file called S3-role-trust-policy.json. The policy grants Amazon S3 permission to assume the role.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "Service":"s3.amazonaws.com"
         },
         "Action":"sts:AssumeRole"
      }
   ]
}

   b. Copy the following policy and save it to a file called S3-role-permissions-policy.json. This access policy grants permission for various Amazon S3 bucket and object actions. In the following step, you add the policy to the IAM role you are creating.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersion",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::source-bucket/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetReplicationConfiguration"
         ],
         "Resource":[
            "arn:aws:s3:::source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::destination-bucket/*"
      }
   ]
}

   c. Run the following CLI command to create a role:

aws iam create-role \
--role-name RoleForS3CrossAccountCrossRegionReplication \
--assume-role-policy-document file://S3-role-trust-policy.json

   d. Run the following CLI command to create a policy:

aws iam create-policy \
--policy-name PolicyForS3CrossAccountCrossRegionReplication \
--policy-document file://S3-role-permissions-policy.json

   e. Write down the policy ARN that is returned in the output by the preceding command.
   f. Run the following CLI command to attach the policy to the role:

aws iam attach-role-policy \
--role-name RoleForS3CrossAccountCrossRegionReplication \
--policy-arn policy-arn

   Now Account A has created a role that grants the necessary Amazon S3 permissions so it can replicate objects.
5. Enable cross-region replication on the source bucket in Account A. In the replication configuration, you add one rule requesting Amazon S3 to replicate objects with the key name prefix Tax to the specified destination bucket. Amazon S3 saves the replication configuration as XML, as shown in the following example:

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::AWS-ID-Account-A:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Prefix>Tax</Prefix>
    <Destination><Bucket>arn:aws:s3:::destination-bucket</Bucket></Destination>
  </Rule>
</ReplicationConfiguration>

   You can add the replication configuration to your source bucket using either the AWS CLI or an AWS SDK.
   • Using the AWS CLI:
     The AWS CLI requires you to specify the configuration as JSON. Save the following JSON in a file (replication.json). You need to provide your bucket name and IAM role ARN.

{
  "Role": "arn:aws:iam::AWS-ID-Account-A:role/role-name",
  "Rules": [
    {
      "Prefix": "Tax",
      "Status": "Enabled",
      "Destination": {
        "Bucket": "arn:aws:s3:::destination-bucket"
      }
    }
  ]
}

     Then run the CLI command to add the replication configuration to your source bucket:

aws s3api put-bucket-replication \
--bucket source-bucket \
--replication-configuration file://replication.json

     For instructions on how to set up the AWS CLI, see Setting Up the Tools for the Example Walkthroughs (p 281).
     Account A can use the get-bucket-replication command to retrieve the replication configuration:

aws s3api get-bucket-replication \
--bucket source-bucket

   • Using the AWS SDK for Java:
     For a code example, see How to Set Up Cross-Region Replication Using the AWS SDK for Java (p 505).
6. Test the setup as follows:
   • Using Account A credentials, create objects in the source bucket and verify that Amazon S3 replicated the objects in the destination bucket owned by Account B. The time it takes for Amazon S3 to replicate an object depends on the object size. For information about finding replication status, see How to Find Replication Status of an Object (p 509).
     Note
     When you upload objects in the source bucket, the object key name must have a Tax prefix (for example, Tax/document.pdf). According to the replication configuration that Account A added to the source bucket, Amazon S3 replicates only objects with the Tax prefix.
   • Update an object's ACL in the source bucket and verify that the changes appear in the destination bucket. For instructions, go to Editing Object Permissions in the Amazon Simple Storage Service Console User Guide.
   • Update the object's metadata and verify that the changes appear in the destination bucket. For instructions, go to Editing Object Metadata in the Amazon Simple Storage Service Console User Guide.

Remember, the replicas are exact copies of the objects in the source bucket.

Related Topics

Cross-Region Replication (p 492)
What Is and Is Not Replicated (p 493)
How to Find Replication Status of an Object (p 509)
Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account (p 500)
How to Set Up Cross-Region Replication Using the Console

When both the source and destination buckets are owned by the same AWS account, you can add replication configuration on the source bucket using the Amazon S3 console. For more information, see the following topics:

• Walkthrough 1: Configure Cross-Region Replication Where Source and Destination Buckets Are Owned by the Same AWS Account (p 500)
• Managing Cross-Region Replication in the Amazon Simple Storage Service Console User Guide
• Cross-Region Replication (p 492)
• How to Set Up Cross-Region Replication (p 495)

How to Set Up Cross-Region Replication Using the AWS SDK for Java

When the source and destination buckets are owned by two different AWS accounts, you can use either the AWS CLI or one of the AWS SDKs to add replication configuration on the source bucket. You cannot use the console to add the replication configuration, because the console does not provide a way for you to specify a destination bucket owned by another AWS account at the time you add replication configuration on a source bucket. For more information, see How to Set Up Cross-Region Replication (p 495).

The following AWS SDK for Java code example first adds replication configuration to a bucket and then retrieves it. You need to update the code by providing your bucket names and IAM role ARN. For instructions on how to create and test a working sample, see Testing the Java Code Examples (p 564).

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketReplicationConfiguration;
import com.amazonaws.services.s3.model.ReplicationDestinationConfig;
import com.amazonaws.services.s3.model.ReplicationRule;
import com.amazonaws.services.s3.model.ReplicationRuleStatus;
import com.amazonaws.services.s3.model.StorageClass;

public class CrossRegionReplicationComplete {
    private static String sourceBucketName    = "source-bucket";
    private static String roleARN             = "arn:aws:iam::account-id:role/role-name";
    private static String destinationBucketArn = "arn:aws:s3:::destination-bucket";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            Map<String, ReplicationRule> replicationRules = new HashMap<String, ReplicationRule>();
            replicationRules.put(
                "a-sample-rule-id",
                new ReplicationRule()
                    .withPrefix("Tax")
                    .withStatus(ReplicationRuleStatus.Enabled)
                    .withDestinationConfig(
                        new ReplicationDestinationConfig()
                            .withBucketARN(destinationBucketArn)
                            .withStorageClass(StorageClass.StandardInfrequentAccess)
                    )
            );
            s3Client.setBucketReplicationConfiguration(
                sourceBucketName,
                new BucketReplicationConfiguration()
                    .withRoleARN(roleARN)
                    .withRules(replicationRules)
            );
            BucketReplicationConfiguration replicationConfig =
                s3Client.getBucketReplicationConfiguration(sourceBucketName);

            ReplicationRule rule = replicationConfig.getRule("a-sample-rule-id");

            System.out.println("Destination Bucket ARN : " + rule.getDestinationConfig().getBucketARN());
            System.out.println("Prefix : " + rule.getPrefix());
            System.out.println("Status : " + rule.getStatus());

        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                "means your request made it " +
                "to Amazon S3, but was rejected with an error response " +
                "for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means " +
                "the client encountered " +
                "a serious internal problem while trying to " +
                "communicate with Amazon S3, " +
                "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Related Topics

Cross-Region Replication (p 492)
How to Set Up Cross-Region Replication (p 495)

How to Set Up Cross-Region Replication Using the AWS SDK for .NET

When the source and destination buckets are owned by two different AWS accounts, you can use either the AWS CLI or one of the AWS SDKs to add replication configuration on the source bucket. You cannot use the console to add the replication configuration, because the console does not provide a way for you to specify a destination bucket owned by another AWS account at the time you add replication configuration on a source bucket. For more information, see How to Set Up Cross-Region Replication (p 495).

The following AWS SDK for .NET code example first adds replication configuration to a bucket and then retrieves it. You need to update the code by providing your bucket names and IAM role ARN. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p 566).

using System;
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.docsamples
{
    class CrossRegionReplication
    {
        static string sourceBucket         = "source-bucket";
        static string destinationBucketArn = "arn:aws:s3:::destination-bucket";
        static string roleArn              = "arn:aws:iam::account-id:role/role-name";

        public static void Main(string[] args)
        {
            try
            {
                using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
                {
                    EnableReplication(client);
                    RetrieveReplicationConfiguration(client);
                }
                Console.WriteLine("Press any key to continue...");
                Console.ReadKey();
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                    ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Check the provided AWS Credentials.");
                    Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine(
                        "Error occurred. Message:'{0}' when enabling notifications.",
                        amazonS3Exception.Message);
                }
            }
        }

        static void EnableReplication(IAmazonS3 client)
        {
            ReplicationConfiguration replConfig = new ReplicationConfiguration
            {
                Role = roleArn,
                Rules =
                {
                    new ReplicationRule
                    {
                        Prefix = "Tax",
                        Status = ReplicationRuleStatus.Enabled,
                        Destination = new ReplicationDestination
                        {
                            BucketArn = destinationBucketArn
                        }
                    }
                }
            };

            PutBucketReplicationRequest putRequest = new PutBucketReplicationRequest
            {
                BucketName = sourceBucket,
                Configuration = replConfig
            };

            PutBucketReplicationResponse putResponse = client.PutBucketReplication(putRequest);
        }

        private static void RetrieveReplicationConfiguration(IAmazonS3 client)
        {
            // Retrieve the configuration.
            GetBucketReplicationRequest getRequest = new GetBucketReplicationRequest
            {
                BucketName = sourceBucket
            };

            GetBucketReplicationResponse getResponse = client.GetBucketReplication(getRequest);

            // Print.
            Console.WriteLine("Printing replication configuration information...");
            Console.WriteLine("Role ARN: {0}", getResponse.Configuration.Role);
            foreach (var rule in getResponse.Configuration.Rules)
            {
                Console.WriteLine("ID: {0}", rule.Id);
                Console.WriteLine("Prefix: {0}", rule.Prefix);
                Console.WriteLine("Status: {0}", rule.Status);
            }
        }
    }
}
Related Topics

Cross-Region Replication (p 492)
How to Set Up Cross-Region Replication (p 495)

How to Find Replication Status of an Object

In cross-region replication, you have a source bucket, on which you configure replication, and a destination bucket, where Amazon S3 replicates objects. When you request an object (GET Object) or object metadata (HEAD Object) from these buckets, Amazon S3 returns the x-amz-replication-status header in the response as follows:

• If requesting an object from the source bucket — Amazon S3 returns the x-amz-replication-status header if the object in your request is eligible for replication.
  For example, suppose in your replication configuration you specify the object prefix TaxDocs, requesting Amazon S3 to replicate objects with the key name prefix TaxDocs. Then any objects you upload with this key name prefix—for example, TaxDocs/document1.pdf—are eligible for replication. For any object request with this key name prefix, Amazon S3 returns the x-amz-replication-status header with one of the following values for the object's replication status: PENDING, COMPLETED, or FAILED.
• If requesting an object from the destination bucket — Amazon S3 returns the x-amz-replication-status header with the value REPLICA if the object in your request is a replica that Amazon S3 created.

You can find the object replication state in the console, using the AWS CLI, or programmatically using the AWS SDK.

• In the console, you choose the object and choose Properties to view object properties, including the replication status.
• You can use the head-object AWS CLI command, as shown, to retrieve object metadata information:

aws s3api head-object --bucket source-bucket --key object-key --version-id object-version-id

The command returns object metadata information, including the ReplicationStatus, as shown in the following example response:
{
   "AcceptRanges":"bytes",
   "ContentType":"image/jpeg",
   "LastModified":"Mon, 23 Mar 2015 21:02:29 GMT",
   "ContentLength":3191,
   "ReplicationStatus":"COMPLETED",
   "VersionId":"jfnWHIMOfYiD_9rGbSkmroXsFj3fqZ",
   "ETag":"\"6805f2cfc46c0f04559748bb039d69ae\"",
   "Metadata":{
   }
}

• You can use the AWS SDKs to retrieve the replication state of an object. Following are code fragments using the AWS SDK for Java and the AWS SDK for .NET.
  • AWS SDK for Java

GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest(bucketName, key);
ObjectMetadata metadata = s3Client.getObjectMetadata(metadataRequest);

System.out.println("Replication Status : " +
    metadata.getRawMetadataValue(Headers.OBJECT_REPLICATION_STATUS));

  • AWS SDK for .NET

GetObjectMetadataRequest getmetadataRequest = new GetObjectMetadataRequest
{
     BucketName = sourceBucket,
     Key = objectKey
};

GetObjectMetadataResponse getmetadataResponse = client.GetObjectMetadata(getmetadataRequest);
Console.WriteLine("Object replication status: {0}", getmetadataResponse.ReplicationStatus);

Note
If you decide to delete an object from a source bucket that has replication enabled, you should check the replication status of the object before deletion to ensure that the object has been replicated.
If lifecycle configuration is enabled on the source bucket, Amazon S3 puts any lifecycle actions on hold until it marks the object's status as either COMPLETED or FAILED.

Related Topics

Cross-Region Replication (p 492)

Troubleshooting Cross-Region Replication in Amazon S3

After configuring cross-region replication, if you don't see the object replica created in the destination bucket, try the following troubleshooting methods:

• The time it takes for Amazon S3 to replicate an object depends on the object size. For large objects, it can take up to several hours. If the object in question is large, check to see if the replicated object appears in the destination bucket again at a later time.
• In the replication configuration on the source bucket (you can retrieve and verify the configuration programmatically; see the sketch following this list):
  • Verify that the destination bucket ARN is correct.
  • Verify that the key name prefix is correct. For example, if you set the configuration to replicate objects with the prefix Tax, then only objects with key names such as Tax/document1 or Tax/document2 are replicated. An object with the key name document3 will not be replicated.
  • Verify that the status is Enabled.
• If the destination bucket is owned by another AWS account, verify that the bucket owner has a bucket policy on the destination bucket that allows the source bucket owner to replicate objects.
• If an object replica does not appear in the destination bucket, note the following:
  • An object in a source bucket that is itself a replica created by another replication configuration is not replicated; Amazon S3 does not replicate the replica. For example, if you set replication configuration from bucket A to bucket B to bucket C, Amazon S3 will not replicate object replicas in bucket B to bucket C.
  • A bucket owner can grant other AWS accounts permission to upload objects. By default, the bucket owner does not have any permissions on the objects created by the other account, and the replication configuration replicates only the objects for which the bucket owner has access permissions. The bucket owner can grant other AWS accounts permissions to create objects conditionally, requiring explicit access permissions on those objects. For an example policy, see Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control (p 340).
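One way to perform the configuration checks described in the second bullet is to retrieve the replication configuration programmatically and inspect its rules. The following AWS SDK for Java sketch is a minimal example of that approach; the bucket name is a placeholder.

import java.util.Map;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketReplicationConfiguration;
import com.amazonaws.services.s3.model.ReplicationRule;

public class VerifyReplicationConfiguration {
    public static void main(String[] args) {
        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        BucketReplicationConfiguration config =
            s3Client.getBucketReplicationConfiguration("source-bucket");

        System.out.println("Role ARN: " + config.getRoleARN());
        // Check each rule's prefix, status, and destination bucket ARN.
        for (Map.Entry<String, ReplicationRule> entry : config.getRules().entrySet()) {
            ReplicationRule rule = entry.getValue();
            System.out.println("Rule ID: " + entry.getKey());
            System.out.println("  Prefix: " + rule.getPrefix());
            System.out.println("  Status: " + rule.getStatus());
            System.out.println("  Destination: " + rule.getDestinationConfig().getBucketARN());
        }
    }
}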
Related Topics

Cross-Region Replication (p 492)

Cross-Region Replication and Other Bucket Configurations

In addition to replication configuration, Amazon S3 supports several other bucket configuration options, including:

• Configure versioning on a bucket. For more information, see Using Versioning (p 423).
• Configure a bucket for website hosting. For more information, see Hosting a Static Website on Amazon S3 (p 449).
• Configure bucket access via a bucket policy or ACL. For more information, see Using Bucket Policies and User Policies (p 308) and Managing Access with ACLs (p 364).
• Configure a bucket to store access logs. For more information, see Server Access Logging (p 546).
• Configure the lifecycle for objects in the bucket. For more information, see Object Lifecycle Management (p 109).

This section explains how bucket replication configuration influences the behavior of other bucket configurations.

Lifecycle Configuration and Object Replicas

The time it takes for Amazon S3 to replicate an object depends on the object size. For large objects, it can take several hours. Even though it might take some time before a replica is available in the destination bucket, the creation time of the replica remains the same as that of the corresponding object in the source bucket. Therefore, if you have a lifecycle policy on the destination bucket, note that lifecycle rules honor the original creation time of the object, not when the replica became available in the destination bucket.

Versioning Configuration and Replication Configuration

Both the source and destination buckets must be versioning-enabled when you configure replication on a bucket. After you enable versioning on both the source and destination buckets and configure replication on the source bucket, note that:

• If you attempt to disable versioning on the source bucket, Amazon S3 returns an error. You must remove the replication configuration before you can disable versioning on the source bucket.
• If you disable versioning on the destination bucket, Amazon S3 stops replication.

Logging Configuration and Replication Configuration

If you have logging enabled on any bucket and Amazon S3 is delivering logs to your source bucket, where you also have replication enabled, Amazon S3 replicates the log objects.

Related Topics

Cross-Region Replication (p 492)

Request Routing

Topics
• Request Redirection and the REST API (p 513)
• DNS Considerations (p 516)

Programs that make requests against buckets created using the <CreateBucketConfiguration> API must support redirects. Additionally, some clients that do not respect DNS TTLs might encounter issues.

This section describes routing and DNS issues to consider when designing your service or application for use with Amazon S3.

Request Redirection and the REST API

Overview

Amazon S3 uses the Domain Name System (DNS) to route requests to facilities that can process them. This system works very effectively. However, temporary routing errors can occur.
If a request arrives at the wrong Amazon S3 location, Amazon S3 responds with a temporary redirect that tells the requester to resend the request to a new endpoint.

If a request is incorrectly formed, Amazon S3 uses permanent redirects to provide direction on how to perform the request correctly.

Important
Every Amazon S3 program must be designed to handle redirect responses. The only exception is for programs that work exclusively with buckets that were created without <CreateBucketConfiguration>. For more information on location constraints, see Accessing a Bucket (p 60).

DNS Routing

DNS routing routes requests to appropriate Amazon S3 facilities.

The following figure shows an example of DNS routing.

1. The client makes a DNS request to get an object stored on Amazon S3.
2. The client receives one or more IP addresses for facilities that can process the request.
3. The client makes a request to Amazon S3 Facility B.
4. Facility B returns a copy of the object.

Temporary Request Redirection

A temporary redirect is a type of error response that signals to the requester that the request should be resent to a different endpoint.

Due to the distributed nature of Amazon S3, requests can be temporarily routed to the wrong facility. This is most likely to occur immediately after buckets are created or deleted. For example, if you create a new bucket and immediately make a request to the bucket, you might receive a temporary redirect, depending on the location constraint of the bucket. If you created the bucket in the US East (N. Virginia) region (s3.amazonaws.com endpoint), you will not see the redirect, because this is also the default endpoint. However, if the bucket is created in any other region, any requests for the bucket go to the default endpoint while the bucket's DNS entry is propagated. The default endpoint redirects the request to the correct endpoint with an HTTP 302 response.

Temporary redirects contain a URI to the correct facility, which you can use to immediately resend the request.

Important
Do not reuse an endpoint provided by a previous redirect response. It might appear to work (even for long periods of time), but it might provide unpredictable results and will eventually fail without notice.

The following figure shows an example of a temporary redirect.

1. The client makes a DNS request to get an object stored on Amazon S3.
2. The client receives one or more IP addresses for facilities that can process the request.
3. The client makes a request to Amazon S3 Facility B.
4. Facility B returns a redirect indicating the object is available from Location C.
5. The client resends the request to Facility C.
6. Facility C returns a copy of the object.
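The AWS SDKs handle temporary redirects for you. If you are calling the REST API directly, you must follow the redirect yourself. The following minimal sketch (using an assumed object URL) simply retries a GET against the endpoint named in the Location header of a 307 response; request signing and error handling are omitted.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FollowTemporaryRedirect {
    public static void main(String[] args) throws Exception {
        // Hypothetical object URL; with a newly created bucket this request
        // may be answered with a 307 Temporary Redirect.
        URL url = new URL("https://examplebucket.s3.amazonaws.com/photos/puppy.jpg");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(false); // handle the redirect explicitly

        if (conn.getResponseCode() == 307) {
            // Resend the request to the temporary endpoint given in the Location header.
            String location = conn.getHeaderField("Location");
            conn.disconnect();
            conn = (HttpURLConnection) new URL(location).openConnection();
        }

        // A real request must also be signed; signing is omitted from this sketch.
        try (InputStream in = conn.getInputStream()) {
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }
}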
Permanent Request Redirection

A permanent redirect indicates that your request addressed a resource inappropriately. For example, permanent redirects occur if you use a path-style request to access a bucket that was created using <CreateBucketConfiguration>. For more information, see Accessing a Bucket (p 60).

To help you find these errors during development, this type of redirect does not contain a Location HTTP header that allows you to automatically follow the request to the correct location. Consult the resulting XML error document for help using the correct Amazon S3 endpoint.

Example REST API Redirect

HTTP/1.1 307 Temporary Redirect
Location: http://johnsmith.s3-gztb4pa9sq.amazonaws.com/photos/puppy.jpg?rk=e2c69a31
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Fri, 12 Oct 2007 01:12:56 GMT
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>TemporaryRedirect</Code>
  <Message>Please re-send this request to the specified temporary endpoint.
  Continue to use the original request endpoint for future requests.</Message>
  <Endpoint>johnsmith.s3-gztb4pa9sq.amazonaws.com</Endpoint>
</Error>

Example SOAP API Redirect

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.

<soapenv:Body>
  <soapenv:Fault>
    <Faultcode>soapenv:Client.TemporaryRedirect</Faultcode>
    <Faultstring>Please re-send this request to the specified temporary endpoint.
    Continue to use the original request endpoint for future requests.</Faultstring>
    <Detail>
      <Bucket>images</Bucket>
      <Endpoint>s3-gztb4pa9sq.amazonaws.com</Endpoint>
    </Detail>
  </soapenv:Fault>
</soapenv:Body>

DNS Considerations

One of the design requirements of Amazon S3 is extremely high availability. One of the ways we meet this requirement is by updating the IP addresses associated with the Amazon S3 endpoint in DNS as needed. These changes are automatically reflected in short-lived clients, but not in some long-lived clients. Long-lived clients need to take special action to re-resolve the Amazon S3 endpoint periodically to benefit from these changes. For more information about virtual machines (VMs), refer to the following:

• For Java, Sun's JVM caches DNS lookups forever by default; go to the "InetAddress Caching" section of the InetAddress documentation for information on how to change this behavior (see also the sketch following this list).
• For PHP, the persistent PHP VM that runs in the most popular deployment configurations caches DNS lookups until the VM is restarted. Go to the getHostByName PHP docs.
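For example, a long-running JVM client can bound the JVM's DNS cache instead of caching lookups forever. The following one-time setting is a minimal sketch; the 60-second TTL is an assumed value chosen for illustration, not a recommendation from this guide.

// Set near JVM startup, before the security property is first read.
// Cache successful DNS lookups for at most 60 seconds (assumed value).
java.security.Security.setProperty("networkaddress.cache.ttl", "60");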
Performance Optimization

This section discusses Amazon S3 best practices for optimizing performance in the following topics.

Topics
• Request Rate and Performance Considerations (p 518)
• TCP Window Scaling (p 521)
• TCP Selective Acknowledgement (p 522)

Note
For more information about high performance tuning, see Enabling High Performance Data Transfers at the Pittsburgh Supercomputing Center (PSC) website.

Request Rate and Performance Considerations

This topic discusses Amazon S3 best practices for optimizing performance depending on your request rates. If your workload in an Amazon S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, follow the guidelines in this topic to ensure the best performance and scalability.

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate. To open a support case, go to Contact Us.

Note
The Amazon S3 best practice guidelines in this topic apply only if you are routinely processing 100 or more requests per second. If your typical workload involves only occasional bursts of 100 requests per second and less than 800 requests per second, you don't need to follow these guidelines.
If your workload in Amazon S3 uses Server-Side Encryption with AWS Key Management Service (SSE-KMS), go to Limits in the AWS Key Management Service Developer Guide to get more information on the request rates supported for your use case.

The Amazon S3 best practice guidance given in this topic is based on two types of workloads:

• Workloads that include a mix of request types – If your requests are typically a mix of GET, PUT, DELETE, or GET Bucket (list objects), choosing appropriate key names for your objects ensures better performance by providing low-latency access to the Amazon S3 index. It also ensures scalability regardless of the number of requests you send per second.
• Workloads that are GET-intensive – If the bulk of your workload consists of GET requests, we recommend using the Amazon CloudFront content delivery service.

Topics
• Workloads with a Mix of Request Types (p 519)
• GET-Intensive Workloads (p 521)

Workloads with a Mix of Request Types

When uploading a large number of objects, customers sometimes use sequential numbers or date and time values as part of their key names. For example, you might choose key names that use some combination of the date and time, as shown in the following example, where the prefix includes a timestamp:

examplebucket/2013-26-05-15-00-00/cust1234234/photo1.jpg
examplebucket/2013-26-05-15-00-00/cust3857422/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust1248473/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust8474937/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust1248473/photo3.jpg
...
examplebucket/2013-26-05-15-00-01/cust1248473/photo4.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo5.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo6.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo7.jpg
...

The sequence pattern in the key names introduces a performance problem. To understand the issue, let's look at how Amazon S3 stores key names.

Amazon S3 maintains an index of object key names in each AWS region. Object keys are stored in UTF-8 binary ordering across multiple partitions in the index. The key name dictates which partition the key is stored in. Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O capacity of the partition. If you introduce some randomness in your key name prefixes, the key names, and therefore the I/O load, are distributed across more than one partition.
If you anticipate that your workload will consistently exceed 100 requests per second, you should avoid sequential key names. If you must use sequential numbers or date and time patterns in key names, add a random prefix to the key name. The randomness of the prefix more evenly distributes key names across multiple index partitions. Examples of introducing randomness are provided later in this topic.

Note
The guidelines provided for the key name prefixes in the following section also apply to the bucket name. When Amazon S3 stores a key name in the index, it stores the bucket name as part of the key name (for example, examplebucket/object.jpg).

Example 1: Add a Hex Hash Prefix to Key Name

One way to introduce randomness to key names is to add a hash string as a prefix to the key name. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key name. From the hash, pick a specific number of characters, and add them as the prefix to the key name. The following example shows key names with a four-character hash.

Note
A hashed prefix of three or four characters should be sufficient. We strongly recommend using a hexadecimal hash as the prefix.

examplebucket/232a-2013-26-05-15-00-00/cust1234234/photo1.jpg
examplebucket/7b54-2013-26-05-15-00-00/cust3857422/photo2.jpg
examplebucket/921c-2013-26-05-15-00-00/cust1248473/photo2.jpg
examplebucket/ba65-2013-26-05-15-00-00/cust8474937/photo2.jpg
examplebucket/8761-2013-26-05-15-00-00/cust1248473/photo3.jpg
examplebucket/2e4f-2013-26-05-15-00-01/cust1248473/photo4.jpg
examplebucket/9810-2013-26-05-15-00-01/cust1248473/photo5.jpg
examplebucket/7e34-2013-26-05-15-00-01/cust1248473/photo6.jpg
examplebucket/c34a-2013-26-05-15-00-01/cust1248473/photo7.jpg
...

Note that this randomness does introduce some interesting challenges. Amazon S3 provides a GET Bucket (List Objects) operation, which returns a UTF-8 binary ordered list of key names. Here are some side effects:

• Because of the hashed prefixes, the listing appears randomly ordered.
• The problem is compounded if you want to list object keys with a specific date in the key name. The preceding example uses a four-character hex hash, so there are 65536 possible character combinations (a four-character prefix, where each character can be any of the hex characters 0-f). So you will be sending 65536 List Bucket requests, each with a specific prefix that is a combination of a four-digit hash and the date. For example, suppose that you want to find all keys with 2013-26-05 in the key name. Then you will send List Bucket requests with prefixes such as [0-f][0-f][0-f][0-f]2013-26-05.

You can optionally add more prefixes in your key name, before the hash string, to group objects. The following example adds animations and videos prefixes to the key names.

examplebucket/animations/232a-2013-26-05-15-00-00/cust1234234/animation1.obj
examplebucket/animations/7b54-2013-26-05-15-00-00/cust3857422/animation2.obj
examplebucket/animations/921c-2013-26-05-15-00-00/cust1248473/animation3.obj
examplebucket/videos/ba65-2013-26-05-15-00-00/cust8474937/video2.mpg
examplebucket/videos/8761-2013-26-05-15-00-00/cust1248473/video3.mpg
examplebucket/videos/2e4f-2013-26-05-15-00-01/cust1248473/video4.mpg
examplebucket/videos/9810-2013-26-05-15-00-01/cust1248473/video5.mpg
examplebucket/videos/7e34-2013-26-05-15-00-01/cust1248473/video6.mpg
examplebucket/videos/c34a-2013-26-05-15-00-01/cust1248473/video7.mpg
...

In this case, the ordered list returned by the GET Bucket (List Objects) operation is grouped by the prefixes animations and videos.

Note
Again, the prefixes you add to group objects should not have sequences, or you will again overwhelm a single index partition.
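A minimal sketch of computing such a prefix follows. It takes the four leading hex characters of an MD5 hash of the planned key name and prepends them; this is one possible scheme (an illustration, not a prescribed format) for producing key names like those shown above.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashedKeyName {
    // Returns the key name with a four-character hex hash prefix, for example
    // "232a-2013-26-05-15-00-00/cust1234234/photo1.jpg".
    public static String withHashPrefix(String keyName) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
            .digest(keyName.getBytes(StandardCharsets.UTF_8));
        // First two bytes of the digest give four hex characters.
        String hex = String.format("%02x%02x", digest[0], digest[1]);
        return hex + "-" + keyName;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(withHashPrefix("2013-26-05-15-00-00/cust1234234/photo1.jpg"));
    }
}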
Example 2: Reverse the Key Name String

Suppose your application uploads objects with key names whose prefixes include an increasing sequence of application IDs:

examplebucket/2134857/data/start.png
examplebucket/2134857/data/resource.rsrc
examplebucket/2134857/data/results.txt
examplebucket/2134858/data/start.png
examplebucket/2134858/data/resource.rsrc
examplebucket/2134858/data/results.txt
examplebucket/2134859/data/start.png
examplebucket/2134859/data/resource.rsrc
examplebucket/2134859/data/results.txt

In this key naming scheme, write operations will overwhelm a single index partition. If you reverse the application ID strings, however, you have key names with random prefixes:

examplebucket/7584312/data/start.png
examplebucket/7584312/data/resource.rsrc
examplebucket/7584312/data/results.txt
examplebucket/8584312/data/start.png
examplebucket/8584312/data/resource.rsrc
examplebucket/8584312/data/results.txt
examplebucket/9584312/data/start.png
examplebucket/9584312/data/resource.rsrc
examplebucket/9584312/data/results.txt

Reversing the key name string lays the groundwork for Amazon S3 to start with the following partitions, one for each distinct first character in the key name. The examplebucket refers to the name of the bucket where you upload application data.

examplebucket/7
examplebucket/8
examplebucket/9

This example illustrates how Amazon S3 can use the first character of the key name for partitioning, but for very large workloads (more than 2000 requests per second, or for buckets that contain billions of objects), Amazon S3 can use more characters for the partitioning scheme. Amazon S3 can automatically split these partitions further as the key count and request rate increase over time.

GET-Intensive Workloads

If your workload is mainly sending GET requests, then in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization.

Integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs.

For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3. For more information, go to the Amazon CloudFront product detail page.

TCP Window Scaling

TCP window scaling allows you to improve network throughput performance between your operating system and application layer and Amazon S3 by supporting window sizes larger than 64 KB. At the start of the TCP session, a client advertises its supported receive window WSCALE factor, and Amazon S3 responds with its supported receive window WSCALE factor for the upstream direction.

Although TCP window scaling can improve performance, it can be challenging to set correctly. Make sure to adjust settings at both the application and kernel level. For more information about TCP window scaling, refer to your operating system's documentation and go to RFC 1323.
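At the application level, the relevant setting is usually the socket receive buffer size, which must be requested before the connection is established so that the window scale option can be negotiated during the TCP handshake. The following sketch illustrates the idea for a hand-rolled client; the 2 MB size is an assumed value, and clients built on the AWS SDKs or a standard HTTP library normally expose this through their own configuration instead.

import java.net.InetSocketAddress;
import java.net.Socket;

public class LargeReceiveWindow {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Request a large receive buffer *before* connecting so the TCP
        // window scale option can be negotiated during the handshake.
        socket.setReceiveBufferSize(2 * 1024 * 1024); // assumed 2 MB buffer
        socket.connect(new InetSocketAddress("s3.amazonaws.com", 443));
        System.out.println("Negotiated receive buffer: " + socket.getReceiveBufferSize());
        socket.close();
    }
}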
TCP Selective Acknowledgement

TCP selective acknowledgement is designed to improve recovery time after a large number of packet losses. TCP selective acknowledgement is supported by most newer operating systems, but might have to be enabled. For more information about TCP selective acknowledgements, refer to the documentation that accompanied your operating system and go to RFC 2018.

Monitoring Amazon S3 with Amazon CloudWatch

You can use Amazon CloudWatch to monitor your Amazon S3 buckets, tracking metrics such as object counts and bytes stored. CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS.

You can use the AWS Management Console, the AWS SDK, command line tools, or the APIs to retrieve the Amazon S3 metrics from CloudWatch, similar to how you retrieve metrics for other AWS services.

You can receive notifications or take automated actions by setting Amazon CloudWatch alarms on any of the Amazon S3 metrics. For example, when a specific Amazon S3 metric crosses your alarm threshold, you can use Amazon Simple Notification Service to notify your application.

Amazon S3 storage metrics are received and aggregated daily. Daily storage metrics for Amazon S3 are provided to all customers at no additional cost. For more information about monitoring and alarm pricing, see Amazon CloudWatch Pricing.

Topics
• Amazon S3 CloudWatch Metrics (p 523)
• Amazon S3 CloudWatch Dimensions (p 524)
• Accessing Amazon S3 Metrics in Amazon CloudWatch (p 524)
• Related Resources (p 525)

Amazon S3 CloudWatch Metrics

The AWS/S3 namespace includes the following metrics.

Metric: BucketSizeBytes
Description: The amount of data in bytes stored in a bucket in the Standard storage class, Standard - Infrequent Access (Standard_IA) storage class, or the Reduced Redundancy Storage (RRS) class.
Valid storage type filters: StandardStorage, StandardIAStorage, or ReducedRedundancyStorage (see the StorageType dimension).

Metric: NumberOfObjects
Description: The total number of objects stored in a bucket for all storage classes except for the GLACIER storage class.
Valid storage type filters: AllStorageTypes only (see the StorageType dimension).

Amazon S3 CloudWatch Dimensions

The following dimensions are used to filter Amazon S3 metrics.

Dimension: BucketName
Description: This dimension filters the data you request for the identified bucket only.

Dimension: StorageType
Description: This dimension filters the data you have stored in a bucket by the type of storage. The types are StandardStorage for the Standard storage class, StandardIAStorage for the Standard_IA storage class, ReducedRedundancyStorage for the Reduced Redundancy Storage (RRS) class, and AllStorageTypes. The AllStorageTypes type includes the Standard, Standard_IA, and RRS storage classes;
it does not <br >include the GLACIER storage class <br >Accessing Amazon S3 Metrics in Amazon <br >CloudWatch <br >To access metrics using the CloudWatch console <br >1 Open the CloudWatch console at httpsconsoleawsamazoncomcloudwatch <br >2 From the navigation bar select a region <br >3 In the navigation pane click Metrics <br >4 In the CloudWatch Metrics by Category pane select S3 Metrics <br >5 (Optional) In the graph pane select a statistic and a time period and then create a CloudWatch <br >alarm using these settings <br >To access metrics using the AWS CLI <br >• Use the listmetrics and getmetricstatistics commands <br >To access metrics using the CloudWatch CLI <br >• Use the monlistmetrics and mongetstats commands <br >To access metrics using the CloudWatch API <br >• Use the ListMetrics and GetMetricStatistics operations <br >API Version 20060301 <br >524Amazon Simple Storage Service Developer Guide <br >Related Resources <br >For more information about using Amazon CloudWatch to access the Amazon S3 metrics go to the <br >Amazon CloudWatch Developer Guide <br >Related Resources <br >• Amazon CloudWatch Logs API Reference <br >• Amazon CloudWatch Developer Guide <br >API Version 20060301 <br >525Amazon Simple Storage Service Developer Guide <br >Amazon S3 Information in CloudTrail <br >Logging Amazon S3 API Calls By <br >Using AWS CloudTrail <br >Amazon S3 is integrated with CloudTrail a service that captures specific API calls made to Amazon S3 <br >from your AWS account and delivers the log files to an Amazon S3 bucket that you specify CloudTrail <br >captures API calls made from the Amazon S3 console or from the Amazon S3 API <br >Using the information collected by CloudTrail you can determine what request was made to Amazon <br >S3 the source IP address from which the request was made who made the request when it was <br >made and so on This information helps you to track changes made to your AWS resources and to <br >troubleshoot operational issues CloudTrail makes it easier to ensure compliance with internal policies <br >and regulatory standards To learn more about CloudTrail including how to configure and enable it <br >see the AWS CloudTrail User Guide <br >Amazon S3 Information in CloudTrail <br >When CloudTrail logging is enabled in your AWS account API calls made to certain Amazon S3 <br >actions are tracked in CloudTrail log files Amazon S3 records are written together with other AWS <br >service records in a log file CloudTrail determines when to create and write to a new file based on a <br >time period and file size <br >The tables in this section list the Amazon S3 actions that are supported for logging by CloudTrail <br >Amazon S3 Actions Tracked by CloudTrail Logging <br >REST API Name API Event Name Used in CloudTrail Log <br >DELETE Bucket DeleteBucket <br >DELETE Bucket cors DeleteBucketCors <br >DELETE Bucket lifecycle DeleteBucketLifecycle <br >DELETE Bucket policy DeleteBucketPolicy <br >DELETE Bucket replication DeleteBucketReplication <br >DELETE Bucket tagging DeleteBucketTagging <br >DELETE Bucket website DeleteBucketWebsite <br >GET Bucket acl GetBucketAcl <br >API Version 20060301 <br >526Amazon Simple Storage Service Developer Guide <br >Amazon S3 Information in CloudTrail <br >REST API Name API Event Name Used in CloudTrail Log <br >GET Bucket cors GetBucketCors <br >GET Bucket lifecycle GetBucketLifecycle <br >GET Bucket policy GetBucketPolicy <br >GET Bucket location GetBucketLocation <br >GET Bucket logging GetBucketLogging <br >GET Bucket 
notification GetBucketNotification <br >GET Bucket replication GetBucketReplication <br >GET Bucket tagging GetBucketTagging <br >GET Bucket requestPayment GetBucketRequestPay <br >GET Bucket versioning GetBucketVersioning <br >GET Bucket website GetBucketWebsite <br >GET Service (List all buckets) ListBuckets <br >PUT Bucket CreateBucket <br >PUT Bucket acl PutBucketAcl <br >PUT Bucket cors PutBucketCors <br >PUT Bucket lifecycle PutBucketLifecycle <br >PUT Bucket policy PutBucketPolicy <br >PUT Bucket logging PutBucketLogging <br >PUT Bucket notification PutBucketNotification <br >PUT Bucket replication PutBucketReplication <br >PUT Bucket requestPayment PutBucketRequestPay <br >PUT Bucket tagging PutBucketTagging <br >PUT Bucket versioning PutBucketVersioning <br >PUT Bucket website PutBucketWebsite <br >CloudTrail tracks Amazon S3 SOAP API calls Amazon S3 SOAP support over HTTP is deprecated <br >but it is still available over HTTPS For more information about Amazon S3 SOAP support see <br >Appendix A Using the SOAP API (p 570) <br >Important <br >Newer Amazon S3 features are not supported for SOAP We recommend that you use either <br >the REST API or the AWS SDKs <br >Amazon S3 SOAP Actions Tracked by CloudTrail Logging <br >SOAP API Name API Event Name Used in CloudTrail Log <br >ListAllMyBuckets ListBuckets <br >API Version 20060301 <br >527Amazon Simple Storage Service Developer Guide <br >Using CloudTrail Logs with Amazon S3 <br >Server Access Logs and CloudWatch Logs <br >SOAP API Name API Event Name Used in CloudTrail Log <br >CreateBucket CreateBucket <br >DeleteBucket DeleteBucket <br >GetBucketAccessControlPolicy GetBucketAcl <br >SetBucketAccessControlPolicy PutBucketAcl <br >GetBucketLoggingStatus GetBucketLogging <br >SetBucketLoggingStatus PutBucketLogging <br >Every log entry contains information about who generated the request The user identity information <br >in the log helps you determine whether the request was made with root or IAM user credentials with <br >temporary security credentials for a role or federated user or by another AWS service For more <br >information see the userIdentity field in the CloudTrail Event Reference <br >You can store your log files in your bucket for as long as you want but you can also define Amazon <br >S3 lifecycle rules to archive or delete log files automatically By default your log files are encrypted by <br >using Amazon S3 serverside encryption (SSE) <br >You can choose to have CloudTrail publish Amazon SNS notifications when new log files are delivered <br >if you want to take quick action upon log file delivery For more information see Configuring Amazon <br >Simple Notification Service Notifications for CloudTrail <br >You can also aggregate Amazon S3 log files from multiple AWS regions and multiple AWS accounts <br >into a single Amazon S3 bucket For more information see Receiving CloudTrail Log Files from <br >Multiple Regions <br >Using CloudTrail Logs with Amazon S3 Server <br >Access Logs and CloudWatch Logs <br >You can use AWS CloudTrail logs together with server access logs for Amazon S3 CloudTrail logs <br >provide you with detailed API tracking for operations on your S3 bucket while server access logs for <br >Amazon S3 provide you visibility into objectlevel operations on your data in Amazon S3 For more <br >information about server access logs see Server Access Logging (p 546) <br >You can also use CloudTrail logs together with CloudWatch for Amazon S3 CloudTrail integration <br >with CloudWatch logs delivers S3 
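As noted earlier, you can let Amazon S3 lifecycle rules archive or delete the delivered CloudTrail log files automatically. The following Python (Boto3) sketch shows one way to add such a rule; the my-cloudtrail-logs bucket name and the AWSLogs/ key prefix are assumptions that you would replace with the bucket and prefix your own trail delivers to.

import boto3

s3 = boto3.client("s3")

# Hypothetical names: "my-cloudtrail-logs" is the bucket CloudTrail delivers to,
# and "AWSLogs/" is the key prefix used for the delivered log objects.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-cloudtrail-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-cloudtrail-logs",
                "Filter": {"Prefix": "AWSLogs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 365},  # delete log objects after one year
            }
        ]
    },
)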
bucket level API activity captured by CloudTrail to a CloudWatch log <br >stream in the CloudWatch log group you specify You can create CloudWatch alarms for monitoring <br >specific API activity and receive email notifications when the specific API activity occurs For more <br >information about CloudWatch alarms for monitoring specific API activity see the AWS CloudTrail User <br >Guide For more information about using CloudWatch with Amazon S3 see Monitoring Amazon S3 <br >with Amazon CloudWatch (p 523) <br >Understanding Amazon S3 Log File Entries <br >CloudTrail log files contain one or more log entries where each entry is made up of multiple JSON <br >formatted events A log entry represents a single request from any source and includes information <br >about the requested action any parameters the date and time of the action and so on The log entries <br >are not guaranteed to be in any particular order That is they are not an ordered stack trace of the <br >public API calls <br >API Version 20060301 <br >528Amazon Simple Storage Service Developer Guide <br >Understanding Amazon S3 Log File Entries <br >The following example shows a CloudTrail log entry that demonstrates the DELETE Bucket policy <br >PUT Bucket acl and GET Bucket versioning actions <br >{ <br > Records [ <br > { <br > eventVersion 103 <br > userIdentity { <br > type IAMUser <br > principalId 111122223333 <br > arn arnawsiam111122223333usermyUserName <br > accountId 111122223333 <br > accessKeyId AKIAIOSFODNN7EXAMPLE <br > userName myUserName <br > } <br > eventTime 20150826T204631Z <br > eventSource s3amazonawscom <br > eventName DeleteBucketPolicy <br > awsRegion uswest2 <br > sourceIPAddress 127001 <br > userAgent [] <br > requestParameters { <br > bucketName myawsbucket <br > } <br > responseElements null <br > requestID 47B8E8D397DCE7A6 <br > eventID cdc4b7ede1714cef975aad829d4123e8 <br > eventType AwsApiCall <br > recipientAccountId 111122223333 <br > } <br > { <br > eventVersion 103 <br > userIdentity { <br > type IAMUser <br > principalId 111122223333 <br > arn arnawsiam111122223333usermyUserName <br > accountId 111122223333 <br > accessKeyId AKIAIOSFODNN7EXAMPLE <br > userName myUserName <br > } <br > eventTime 20150826T204631Z <br > eventSource s3amazonawscom <br > eventName PutBucketAcl <br > awsRegion uswest2 <br > sourceIPAddress <br > userAgent [] <br > requestParameters { <br > bucketName <br > AccessControlPolicy { <br > AccessControlList { <br > Grant { <br > Grantee { <br > xsitype CanonicalUser <br > xmlnsxsi httpwwww3org2001XMLSchema <br >instance <br > ID <br > d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example <br > } <br > Permission FULL_CONTROL <br >API Version 20060301 <br >529Amazon Simple Storage Service Developer Guide <br >Related Resources <br > } <br > } <br > xmlns https3amazonawscomdoc20060301 <br > Owner { <br > ID <br > d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example <br > } <br > } <br > } <br > responseElements null <br > requestID BD8798EACDD16751 <br > eventID 607b9532142341c7b048ec2641693c47 <br > eventType AwsApiCall <br > recipientAccountId 111122223333 <br > } <br > { <br > eventVersion 103 <br > userIdentity { <br > type IAMUser <br > principalId 111122223333 <br > arn arnawsiam111122223333usermyUserName <br > accountId 111122223333 <br > accessKeyId AKIAIOSFODNN7EXAMPLE <br > userName myUserName <br > } <br > eventTime 20150826T204631Z <br > eventSource s3amazonawscom <br > eventName GetBucketVersioning <br > awsRegion uswest2 <br > 
sourceIPAddress <br > userAgent [] <br > requestParameters { <br > bucketName myawsbucket <br > } <br > responseElements null <br > requestID 07D681279BD94AED <br > eventID f2b287f30df14961a2f4c4bdfed47657 <br > eventType AwsApiCall <br > recipientAccountId 111122223333 <br > } <br > ] <br >} <br >Related Resources <br >• AWS CloudTrail User Guide <br >• CloudTrail Event Reference <br >API Version 20060301 <br >530Amazon Simple Storage Service Developer Guide <br >How You are Charged for BitTorrent Delivery <br >Using BitTorrent with Amazon S3 <br >Topics <br >• How You are Charged for BitTorrent Delivery (p 531) <br >• Using BitTorrent to Retrieve Objects Stored in Amazon S3 (p 532) <br >• Publishing Content Using Amazon S3 and BitTorrent (p 533) <br >BitTorrent is an open peertopeer protocol for distributing files You can use the BitTorrent protocol <br >to retrieve any publiclyaccessible object in Amazon S3 This section describes why you might want to <br >use BitTorrent to distribute your data out of Amazon S3 and how to do so <br >Amazon S3 supports the BitTorrent protocol so that developers can save costs when distributing <br >content at high scale Amazon S3 is useful for simple reliable storage of any data The default <br >distribution mechanism for Amazon S3 data is via clientserver download In clientserver distribution <br >the entire object is transferred pointtopoint from Amazon S3 to every authorized user who requests <br >that object While clientserver delivery is appropriate for a wide variety of use cases it is not optimal <br >for everybody Specifically the costs of clientserver distribution increase linearly as the number of <br >users downloading objects increases This can make it expensive to distribute popular objects <br >BitTorrent addresses this problem by recruiting the very clients that are downloading the object as <br >distributors themselves Each client downloads some pieces of the object from Amazon S3 and <br >some from other clients while simultaneously uploading pieces of the same object to other interested <br >peers The benefit for publishers is that for large popular files the amount of data actually supplied by <br >Amazon S3 can be substantially lower than what it would have been serving the same clients via client <br >server download Less data transferred means lower costs for the publisher of the object <br >Note <br >You can get torrent only for objects that are less than 5 GB in size <br >How You are Charged for BitTorrent Delivery <br >There is no extra charge for use of BitTorrent with Amazon S3 Data transfer via the BitTorrent <br >protocol is metered at the same rate as clientserver delivery To be precise whenever a downloading <br >API Version 20060301 <br >531Amazon Simple Storage Service Developer Guide <br >Using BitTorrent to Retrieve Objects Stored in Amazon S3 <br >BitTorrent client requests a piece of an object from the Amazon S3 seeder charges accrue just <br >as if an anonymous request for that piece had been made using the REST or SOAP protocol These <br >charges will appear on your Amazon S3 bill and usage reports in the same way The difference is that <br >if a lot of clients are requesting the same object simultaneously via BitTorrent then the amount of data <br >Amazon S3 must serve to satisfy those clients will be lower than with clientserver delivery This is <br >because the BitTorrent clients are simultaneously uploading and downloading amongst themselves <br >Note <br >SOAP support over HTTP is deprecated but it is still 
available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that you use either the REST API or <br >the AWS SDKs <br >The data transfer savings achieved from use of BitTorrent can vary widely depending on how popular <br >your object is Less popular objects require heavier use of the seeder to serve clients and thus the <br >difference between BitTorrent distribution costs and clientserver distribution costs might be small for <br >such objects In particular if only one client is ever downloading a particular object at a time the cost of <br >BitTorrent delivery will be the same as direct download <br >Using BitTorrent to Retrieve Objects Stored in <br >Amazon S3 <br >Any object in Amazon S3 that can be read anonymously can also be downloaded via BitTorrent <br >Doing so requires use of a BitTorrent client application Amazon does not distribute a BitTorrent client <br >application but there are many free clients available The Amazon S3BitTorrent implementation has <br >been tested to work with the official BitTorrent client (go to httpwwwbittorrentcom) <br >The starting point for a BitTorrent download is a torrent file This small file describes for BitTorrent <br >clients both the data to be downloaded and where to get started finding that data A torrent file is a <br >small fraction of the size of the actual object to be downloaded Once you feed your BitTorrent client <br >application an Amazon S3 generated torrent file it should start downloading immediately from Amazon <br >S3 and from any peer BitTorrent clients <br >Retrieving a torrent file for any publicly available object is easy Simply add a torrent query string <br >parameter at the end of the REST GET request for the object No authentication is required Once you <br >have a BitTorrent client installed downloading an object using BitTorrent download might be as easy <br >as opening this URL in your web browser <br >There is no mechanism to fetch the torrent for an Amazon S3 object using the SOAP API <br >Note <br >SOAP support over HTTP is deprecated but it is still available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that you use either the REST API or <br >the AWS SDKs <br >API Version 20060301 <br >532Amazon Simple Storage Service Developer Guide <br >Publishing Content Using Amazon S3 and BitTorrent <br >Example <br >This example retrieves the Torrent file for the Nelson object in the quotes bucket <br >Sample Request <br >GET quotesNelsontorrent HTTP10 <br >Date Wed 25 Nov 2009 120000 GMT <br >Sample Response <br >HTTP11 200 OK <br >xamzrequestid 7CD745EBB7AB5ED9 <br >Date Wed 25 Nov 2009 120000 GMT <br >ContentDisposition attachment filenameNelsontorrent <br >ContentType applicationxbittorrent <br >ContentLength 537 <br >Server AmazonS3 <br ><body a Bencoded dictionary as defined by the BitTorrent specification> <br >Publishing Content Using Amazon S3 and <br >BitTorrent <br >Every anonymously readable object stored in Amazon S3 is automatically available for download using <br >BitTorrent The process for changing the ACL on an object to allow anonymous READ operations is <br >described in Managing Access Permissions to Your Amazon S3 Resources (p 266) <br >You can direct your clients to your BitTorrent accessible objects by giving them the torrent file directly <br >or by publishing a link to the torrent URL of your object One important thing to note is that the torrent <br >file describing an Amazon S3 object is generated ondemand the first time it 
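The sample request above can be reproduced with a few lines of code. The following Python sketch downloads the torrent for the same hypothetical quotes/Nelson object using only the standard library; it assumes the object is publicly readable, because the request is sent without authentication.

import urllib.request

# Path-style URL for the hypothetical public object "Nelson" in the "quotes" bucket,
# with the ?torrent sub-resource appended. No credentials are sent.
url = "https://s3.amazonaws.com/quotes/Nelson?torrent"

with urllib.request.urlopen(url) as response:
    torrent_bytes = response.read()

# Save the Bencoded torrent file; feed it to any BitTorrent client to start the download.
with open("Nelson.torrent", "wb") as f:
    f.write(torrent_bytes)

Opening the saved .torrent file in a BitTorrent client starts the download from Amazon S3 and from any peer clients.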
is requested (via the <br >REST torrent resource) Generating the torrent for an object takes time proportional to the size of that <br >object For large objects this time can be significant Therefore before publishing a torrent link we <br >suggest making the first request for it yourself Amazon S3 might take several minutes to respond to <br >this first request as it generates the torrent file Unless you update the object in question subsequent <br >requests for the torrent will be fast Following this procedure before distributing a torrent link will <br >ensure a smooth BitTorrent downloading experience for your customers <br >To stop distributing a file using BitTorrent simply remove anonymous access to it This can be <br >accomplished by either deleting the file from Amazon S3 or modifying your access control policy to <br >prohibit anonymous reads After doing so Amazon S3 will no longer act as a seeder in the BitTorrent <br >network for your file and will no longer serve the torrent file via the torrent REST API However <br >after a torrent for your file is published this action might not stop public downloads of your object that <br >happen exclusively using the BitTorrent peer to peer network <br >API Version 20060301 <br >533Amazon Simple Storage Service Developer Guide <br >Amazon S3 Customer Data Isolation <br >Using Amazon DevPay with <br >Amazon S3 <br >Topics <br >• Amazon S3 Customer Data Isolation (p 534) <br >• Amazon DevPay Token Mechanism (p 535) <br >• Amazon S3 and Amazon DevPay Authentication (p 535) <br >• Amazon S3 Bucket Limitation (p 536) <br >• Amazon S3 and Amazon DevPay Process (p 537) <br >• Additional Information (p 537) <br >Amazon DevPay enables you to charge customers for using your Amazon S3 product through <br >Amazon's authentication and billing infrastructure You can charge any amount for your product <br >including usage charges (storage transactions and bandwidth) monthly fixed charges and a onetime <br >charge <br >Once a month Amazon bills your customers for you AWS then deducts the fixed Amazon DevPay <br >transaction fee and pays you the difference AWS then separately charges you for the Amazon S3 <br >usage costs incurred by your customers and the percentagebased Amazon DevPay fee <br >If your customers do not pay their bills AWS turns off access to Amazon S3 (and your product) AWS <br >handles all payment processing <br >Amazon S3 Customer Data Isolation <br >Amazon DevPay requests store and access data on behalf of the users of your product The resources <br >created by your application are owned by your users unless you modify the ACL you cannot read or <br >modify the user's data <br >Data stored by your product is isolated from other Amazon DevPay products and general Amazon <br >S3 access Customers that store data in Amazon S3 through your product can only access that <br >data through your product The data cannot be accessed through other Amazon DevPay products or <br >through a personal AWS account <br >Two users of a product can only access each others data if your application explicitly grants access <br >through the ACL <br >API Version 20060301 <br >534Amazon Simple Storage Service Developer Guide <br >Example <br >Example <br >The following figure illustrates allowed disallowed and conditional (discretionary) data access <br >Betty's access is limited as follows <br >• She can access Lolcatz data through the Lolcatz product If she attempts to access her Lolcatz data <br >through another product or a personal AWS account her requests 
will be denied <br >• She can access Alvin's eScrapBook data through the eScrapBook product if access is explicitly <br >granted <br >Amazon DevPay Token Mechanism <br >To enable you to make requests on behalf of your customers and ensure that your customers are billed <br >for use of your application your application must send two tokens with each request the product token <br >and the user token <br >The product token identifies your product you must have one product token for each Amazon DevPay <br >product that you provide The user token identifies a user in relationship to your product you must <br >have a user token for each userproduct combination For example if you provide two products and a <br >user subscribes to each you must obtain a separate user token for each product <br >For information on obtaining product and user tokens refer to the Amazon DevPay Amazon DevPay <br >Getting Started Guide <br >Amazon S3 and Amazon DevPay Authentication <br >Although the token mechanism uniquely identifies a customer and product it does not provide <br >authentication <br >API Version 20060301 <br >535Amazon Simple Storage Service Developer Guide <br >Amazon S3 Bucket Limitation <br >Normally your applications communicate directly with Amazon S3 using your Access Key ID and <br >Secret Access Key For Amazon DevPay Amazon S3 authentication works a little differently <br >If your Amazon DevPay product is a web application you securely store the Secret Access Key on <br >your servers and use the user token to specify the customer for which requests are being made <br >However if your Amazon S3 application is installed on your customers' computers your application <br >must obtain an Access Key ID and a Secret Access Key for each installation and must use those <br >credentials when communicating with Amazon S3 <br >The following figure shows the differences between authentication for web applications and user <br >applications <br >Amazon S3 Bucket Limitation <br >Each of your customers can have up to 100 buckets for each Amazon DevPay product that you sell <br >For example if a customer uses three of your products the customer can have up to 300 buckets <br >(100 * 3) plus any buckets outside of your Amazon DevPay products (ie buckets in Amazon DevPay <br >products from other developers and the customer's personal AWS account) <br >If your customers require more than 100 buckets in an account they can submit a bucket limit increase <br >request For information about how to increase your bucket limit go to AWS Service Limits in the AWS <br >General Reference <br >API Version 20060301 <br >536Amazon Simple Storage Service Developer Guide <br >Amazon S3 and Amazon DevPay Process <br >Amazon S3 and Amazon DevPay Process <br >Following is a highlevel overview of the Amazon DevPay process <br >Launch Process <br >1 A customer signs up for your product through Amazon <br >2 The customer receives an activation key <br >3 The customer enters the activation key into your application <br >4 Your application communicates with Amazon and obtains the user's token If your application <br >is installed on the user's computer it also obtains an Access Key ID and Secret Access Key on <br >behalf of the customer <br >5 Your application provides the customer's token and the application product token when <br >making Amazon S3 requests on behalf of the customer If your application is installed on the <br >customer's computer it authenticates with the customer's credentials <br >6 Amazon uses the customer's token 
and your product token to determine who to bill for the <br >Amazon S3 usage <br >7 Once a month Amazon processes usage data and bills your customers according to the terms <br >you defined <br >8 AWS deducts the fixed Amazon DevPay transaction fee and pays you the difference AWS <br >then separately charges you for the Amazon S3 usage costs incurred by your customers and <br >the percentagebased Amazon DevPay fee <br >Additional Information <br >For information about using setting up and integrating with Amazon DevPay go to Amazon DevPay <br >API Version 20060301 <br >537Amazon Simple Storage Service Developer Guide <br >The REST Error Response <br >Handling REST and SOAP Errors <br >Topics <br >• The REST Error Response (p 538) <br >• The SOAP Error Response (p 540) <br >• Amazon S3 Error Best Practices (p 540) <br >This section describes REST and SOAP errors and how to handle them <br >Note <br >SOAP support over HTTP is deprecated but it is still available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that you use either the REST API or <br >the AWS SDKs <br >The REST Error Response <br >Topics <br >• Response Headers (p 539) <br >• Error Response (p 539) <br >If a REST request results in an error the HTTP reply has <br >• An XML error document as the response body <br >• ContentType applicationxml <br >• An appropriate 3xx 4xx or 5xx HTTP status code <br >Following is an example of a REST Error Response <br ><xml version10 encodingUTF8> <br ><Error> <br > <Code>NoSuchKey<Code> <br > <Message>The resource you requested does not exist<Message> <br > <Resource>mybucketmyfotojpg<Resource> <br > <RequestId>4442587FB7D0A2F9<RequestId> <br >API Version 20060301 <br >538Amazon Simple Storage Service Developer Guide <br >Response Headers <br ><Error> <br >For more information about Amazon S3 errors go to ErrorCodeList <br >Response Headers <br >Following are response headers returned by all operations <br >• xamzrequestid A unique ID assigned to each request by the system In the unlikely event <br >that you have problems with Amazon S3 Amazon can use this to help troubleshoot the problem <br >• xamzid2 A special token that will help us to troubleshoot problems <br >Error Response <br >Topics <br >• Error Code (p 539) <br >• Error Message (p 539) <br >• Further Details (p 539) <br >When an Amazon S3 request is in error the client receives an error response The exact format of <br >the error response is API specific For example the REST error response differs from the SOAP error <br >response However all error responses have common elements <br >Note <br >SOAP support over HTTP is deprecated but it is still available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that you use either the REST API or <br >the AWS SDKs <br >Error Code <br >The error code is a string that uniquely identifies an error condition It is meant to be read and <br >understood by programs that detect and handle errors by type Many error codes are common <br >across SOAP and REST APIs but some are APIspecific For example NoSuchKey is universal but <br >UnexpectedContent can occur only in response to an invalid REST request In all cases SOAP fault <br >codes carry a prefix as indicated in the table of error codes so that a NoSuchKey error is actually <br >returned in SOAP as ClientNoSuchKey <br >Note <br >SOAP support over HTTP is deprecated but it is still available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that 
you use either the REST API or <br >the AWS SDKs <br >Error Message <br >The error message contains a generic description of the error condition in English It is intended for <br >a human audience Simple programs display the message directly to the end user if they encounter <br >an error condition they don't know how or don't care to handle Sophisticated programs with more <br >exhaustive error handling and proper internationalization are more likely to ignore the error message <br >Further Details <br >Many error responses contain additional structured data meant to be read and understood by a <br >developer diagnosing programming errors For example if you send a ContentMD5 header with a <br >REST PUT request that doesn't match the digest calculated on the server you receive a BadDigest <br >API Version 20060301 <br >539Amazon Simple Storage Service Developer Guide <br >The SOAP Error Response <br >error The error response also includes as detail elements the digest we calculated and the digest <br >you told us to expect During development you can use this information to diagnose the error In <br >production a wellbehaved program might include this information in its error log <br >The SOAP Error Response <br >Note <br >SOAP support over HTTP is deprecated but it is still available over HTTPS New Amazon S3 <br >features will not be supported for SOAP We recommend that you use either the REST API or <br >the AWS SDKs <br >In SOAP an error result is returned to the client as a SOAP fault with the HTTP response code 500 <br >If you do not receive a SOAP fault then your request was successful The Amazon S3 SOAP fault <br >code is comprised of a standard SOAP 11 fault code (either Server or Client) concatenated with <br >the Amazon S3specific error code For example ServerInternalError or ClientNoSuchBucket The <br >SOAP fault string element contains a generic human readable error message in English Finally the <br >SOAP fault detail element contains miscellaneous information relevant to the error <br >For example if you attempt to delete the object Fred which does not exist the body of the SOAP <br >response contains a NoSuchKey SOAP fault <br >Example <br ><soapenvBody> <br > <soapenvFault> <br > <Faultcode>soapenvClientNoSuchKey<Faultcode> <br > <Faultstring>The specified key does not exist<Faultstring> <br > <Detail> <br > <Key>Fred<Key> <br > <Detail> <br > <soapenvFault> <br ><soapenvBody> <br >For more information about Amazon S3 errors go to ErrorCodeList <br >Amazon S3 Error Best Practices <br >When designing an application for use with Amazon S3 it is important to handle Amazon S3 errors <br >appropriately This section describes issues to consider when designing your application <br >Retry InternalErrors <br >Internal errors are errors that occur within the Amazon S3 environment <br >Requests that receive an InternalError response might not have processed For example if a PUT <br >request returns InternalError a subsequent GET might retrieve the old value or the updated value <br >If Amazon S3 returns an InternalError response retry the request <br >Tune Application for Repeated SlowDown errors <br >As with any distributed system S3 has protection mechanisms which detect intentional or unintentional <br >resource overconsumption and react accordingly SlowDown errors can occur when a high request <br >API Version 20060301 <br >540Amazon Simple Storage Service Developer Guide <br >Isolate Errors <br >rate triggers one of these mechanisms Reducing your request rate will decrease or 
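eliminate errors of this type. Generally speaking, most users will not experience these errors regularly; however, if you would like more information or are experiencing high or unexpected SlowDown errors, please post to our Amazon S3 developer forum, https://forums.aws.amazon.com/, or sign up for AWS Premium Support, https://aws.amazon.com/premiumsupport/.

Taken together, the two practices above mean a client should treat InternalError and SlowDown as retryable and back off between attempts. The AWS SDKs already retry some of these errors for you; the following Python (Boto3) sketch simply makes the pattern explicit. The bucket name, key, and retry limits are illustrative assumptions, not recommended values.

import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Error codes this sketch treats as retryable, per the guidance above.
RETRYABLE_CODES = {"InternalError", "SlowDown"}

def get_object_with_backoff(bucket, key, max_attempts=5):
    """Fetch an object, backing off exponentially on retryable Amazon S3 errors."""
    for attempt in range(max_attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except ClientError as error:
            code = error.response["Error"]["Code"]
            # Keep the request ID pair; AWS Support asks for it (see Troubleshooting Amazon S3 below).
            metadata = error.response.get("ResponseMetadata", {})
            print("x-amz-request-id:", metadata.get("RequestId"),
                  "x-amz-id-2:", metadata.get("HostId"))
            if code not in RETRYABLE_CODES or attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # wait 1, 2, 4, 8 ... seconds between attempts

# Hypothetical bucket and key names.
response = get_object_with_backoff("my-bucket", "photos/2014/08/puppy.jpg")

Reducing your overall request rate, as described above, is still the primary fix for repeated SlowDown errors; backing off only smooths over transient ones.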
Isolate Errors
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.

Amazon S3 provides a set of error codes that are used by both the SOAP and REST API. The SOAP API returns standard Amazon S3 error codes. The REST API is designed to look like a standard HTTP server and interact with existing HTTP clients (e.g., browsers, HTTP client libraries, proxies, caches, and so on). To ensure that HTTP clients handle errors properly, we map each Amazon S3 error to an HTTP status code.

HTTP status codes are less expressive than Amazon S3 error codes and contain less information about the error. For example, the NoSuchKey and NoSuchBucket Amazon S3 errors both map to the HTTP 404 Not Found status code.

Although the HTTP status codes contain less information about the error, clients that understand HTTP, but not the Amazon S3 API, will usually handle the error correctly.

Therefore, when handling errors or reporting Amazon S3 errors to end users, use the Amazon S3 error code instead of the HTTP status code, as it contains the most information about the error. Additionally, when debugging your application, you should also consult the human-readable <Details> element of the XML error response.

Troubleshooting Amazon S3
The following section discusses common issues that you might encounter when you work with Amazon S3.

General: Getting my Amazon S3 request IDs
Whenever you need to contact AWS Support due to encountering errors or unexpected behavior in Amazon S3, you will need to get the request IDs associated with the failed action. Getting these request IDs enables AWS Support to help you resolve the problems you're experiencing. Request IDs come in pairs, are returned in every response that Amazon S3 processes (even the erroneous ones), and can be accessed through verbose logs. There are a number of common methods for getting your request IDs.

Once you've recovered these logs, copy and retain those two values, as you'll need the pair of them when you contact AWS Support.

Topics
• Using HTTP (p 542)
• Using a Web Browser (p 543)
• Using an AWS SDK (p 543)
• Using the AWS CLI (p 544)
• Using Windows PowerShell (p 544)

Using HTTP
You can obtain your request IDs, x-amz-request-id and x-amz-id-2, by logging the bits of an HTTP request before it reaches the target application. There are a variety of third-party tools that can be used to recover verbose logs for HTTP requests. Choose one you trust, and run the tool, listening on the port that your Amazon S3 traffic travels on, as you send out another Amazon S3 HTTP request.

For HTTP requests, the pair of request IDs will look like the following examples.

x-amz-request-id: 79104EXAMPLEB723
Browser <br >xamzid2 IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km <br >Note <br >HTTPS requests are encrypted and hidden in most packet captures <br >Using a Web Browser <br >Most web browsers have developer tools that allow you to view request headers <br >For web browser based requests that return an error the pair of requests IDs will look like the following <br >examples <br ><Error><Code>AccessDenied<Code><Message>Access Denied<Message> <br ><RequestId>79104EXAMPLEB723<RequestId><HostId>IOWQ4fDEXAMPLEQM <br >+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km<HostId><Error> <br >For obtaining the request ID pair from successful requests you'll need to use the developer tools to <br >look at the HTTP response headers For information about developer tools for specific browsers see <br >Amazon S3 Troubleshooting How to recover your S3 request IDs in the AWS Developer Forums <br >Using an AWS SDK <br >The following sections include information for configuring logging using an AWS SDK While you can <br >enable verbose logging on every request and response you should not enable logging in production <br >systems since large requestsresponses can cause significant slow down in an application <br >For AWS SDK requests the pair of request IDs will look like the following examples <br >Status Code 403 AWS Service Amazon S3 AWS Request ID 79104EXAMPLEB723 <br >AWS Error Code AccessDenied AWS Error Message Access Denied <br >S3 Extended Request ID IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK <br >+Jd1vEXAMPLEa3Km <br >Using the SDK for PHP <br >You can configure logging using PHP For more information see How can I see what data is sent over <br >the wire in the FAQ for the AWS SDK for PHP <br >Using the SDK for Java <br >You can enable logging for specific requests or responses allowing you to catch and return only the <br >relevant headers To do this import the comamazonawsservicess3s3ResponseMetadata <br >class Afterwards you can store the request in a variable before performing the actual request Call <br >getCachedResponseMetadata(AmazonWebServiceRequest request)getRequestID() to get <br >the logged request or response <br >Example <br >PutObjectRequest req new PutObjectRequest(bucketName key <br > createSampleFile()) <br >s3putObject(req) <br >S3ResponseMetadata md s3getCachedResponseMetadata(req) <br >Systemoutprintln(Host ID + mdgetHostId() + RequestID + <br > mdgetRequestId()) <br >API Version 20060301 <br >543Amazon Simple Storage Service Developer Guide <br >Using the AWS CLI <br >Alternatively you can use verbose logging of every Java request and response For more information <br >see Verbose Wire Logging in the Logging AWS SDK for Java Calls topic in the AWS SDK for Java <br >Developer Guide <br >Using the AWS SDK for NET <br >You can configure logging in AWS SDK for NET using the built in SystemDiagnostics logging tool <br >For more information see the Logging with the AWS SDK for NET NET Development blog post <br >Note <br >By default the returned log will only contain error information The config file needs to have <br >AWSLogMetrics (and optionally AWSResponseLogging) added to get the request IDs <br >Using the SDK for Python <br >You can configure logging in Python by adding the following lines to your code to output debug <br >information to a file <br >import logging <br >loggingbasicConfig(filenamemyloglog levelloggingDEBUG) <br >If you’re using the Boto Python interface for AWS you can set the debug level to two as per the Boto <br >docs here <br >Using 
the SDK for Ruby <br >You can get your request IDs using either the SDK for Ruby Version 1 or Version 2 <br >• Using the SDK for Ruby Version 1– You can enable HTTP wire logging globally with the following <br >line of code <br >s3 AWSS3new(logger > Loggernew(stdout) http_wire_trace > true) <br >• Using the SDK for Ruby Version 2– You can enable HTTP wire logging globally with the following <br >line of code <br >s3 AwsS3Clientnew(logger > Loggernew(stdout) http_wire_trace > <br > true) <br >Using the AWS CLI <br >You can get your request IDs in the AWS CLI by adding debug to your command <br >Using Windows PowerShell <br >For information on recovering logs with Windows PowerShell see the Response Logging in AWS <br >Tools for Windows PowerShell NET Development blog post <br >Related Topics <br >For other troubleshooting and support topics see the following <br >API Version 20060301 <br >544Amazon Simple Storage Service Developer Guide <br >Related Topics <br >Troubleshooting CORS Issues (p 142) <br >Handling REST and SOAP Errors (p 538) <br >AWS Support Documentation <br >For troubleshooting information regarding third party tools see Getting Amazon S3 request IDs in the <br >AWS Developer Forums <br >API Version 20060301 <br >545Amazon Simple Storage Service Developer Guide <br >Overview <br >Server Access Logging <br >Overview <br >In order to track requests for access to your bucket you can enable access logging Each access log <br >record provides details about a single access request such as the requester bucket name request <br >time request action response status and error code if any Access log information can be useful in <br >security and access audits It can also help you learn about your customer base and understand your <br >Amazon S3 bill <br >Note <br >There is no extra charge for enabling server access logging on an Amazon S3 bucket <br >however any log files the system delivers to you will accrue the usual charges for storage <br >(You can delete the log files at any time) No data transfer charges will be assessed for log file <br >delivery but access to the delivered log files is charged the same as any other data transfer <br >By default logging is disabled To enable access logging you must do the following <br >• Turn on the log delivery by adding logging configuration on the bucket for which you want Amazon <br >S3 to deliver access logs We will refer to this bucket as the source bucket <br >• Grant the Amazon S3 Log Delivery group write permission on the bucket where you want the access <br >logs saved We will refer to this bucket as the target bucket <br >To turn on log delivery you provide the following logging configuration information <br >• Name of the target bucket name where you want Amazon S3 to save the access logs as objects <br >You can have logs delivered to any bucket that you own including the source bucket We <br >recommend that you save access logs in a different bucket so you can easily manage the logs If you <br >choose to save access logs in the same bucket as the source bucket we recommend you specify a <br >prefix to all log object keys so that you can easily identify the log objects <br >Note <br >Both the source and target buckets must be owned by the same AWS account <br >•(Optional) A prefix for Amazon S3 to assign to all log object keys The prefix will make it simpler for <br >you to locate the log objects <br >For example if you specify the prefix value logs each log object that Amazon S3 creates will <br >begin with the logs prefix in its 
key as in this example <br >API Version 20060301 <br >546Amazon Simple Storage Service Developer Guide <br >Log Object Key Format <br >logs20131101213216E568B2907131C0C0 <br >The key prefix can help when you delete the logs For example you can set a lifecycle configuration <br >rule for Amazon S3 to delete objects with a specific key prefix For more information see Deleting <br >Log Files (p 559) <br >•(Optional) Permissions so that others can access the generated logs By default the bucket owner <br >always has full access to the log objects You can optionally grant access to other users <br >Log Object Key Format <br >Amazon S3 uses the following object key format for the log objects it uploads in the target bucket <br >TargetPrefixYYYYmmDDHHMMSSUniqueString <br >In the key YYYY mm DD HH MM and SS are the digits of the year month day hour minute and <br >seconds (respectively) when the log file was delivered <br >A log file delivered at a specific time can contain records written at any point before that time There is <br >no way to know whether all log records for a certain time interval have been delivered or not <br >The UniqueString component of the key is there to prevent overwriting of files It has no meaning and <br >log processing software should ignore it <br >How are Logs Delivered <br >Amazon S3 periodically collects access log records consolidates the records in log files and then <br >uploads log files to your target bucket as log objects If you enable logging on multiple source buckets <br >that identify the same target bucket the target bucket will have access logs for all those source <br >buckets but each log object will report access log records for a specific source bucket <br >Amazon S3 uses a special log delivery account called the Log Delivery group to write access logs <br >These writes are subject to the usual access control restrictions You will need to grant the Log <br >Delivery group write permission on the target bucket by adding a grant entry in the bucket's access <br >control list (ACL) If you use the Amazon S3 console to enable logging on a bucket the console will <br >both enable logging on the source bucket and update the ACL on the target bucket to grant write <br >permission to the Log Delivery group <br >Best Effort Server Log Delivery <br >Server access log records are delivered on a best effort basis Most requests for a bucket that is <br >properly configured for logging will result in a delivered log record and most log records will be <br >delivered within a few hours of the time that they were recorded <br >The completeness and timeliness of server logging however is not guaranteed The log record for a <br >particular request might be delivered long after the request was actually processed or it might not be <br >delivered at all The purpose of server logs is to give you an idea of the nature of traffic against your <br >bucket It is not meant to be a complete accounting of all requests It is rare to lose log records but <br >server logging is not meant to be a complete accounting of all requests <br >It follows from the besteffort nature of the server logging feature that the usage reports available at the <br >AWS portal (Billing and Cost Management reports on the AWS Management Console) might include <br >one or more access requests that do not appear in a delivered server log <br >API Version 20060301 <br >547Amazon Simple Storage Service Developer Guide <br >Bucket Logging Status Changes Take Effect Over Time <br >Bucket Logging Status 
Changes Take Effect Over <br >Time <br >Changes to the logging status of a bucket take time to actually affect the delivery of log files For <br >example if you enable logging for a bucket some requests made in the following hour might be <br >logged while others might not If you change the target bucket for logging from bucket A to bucket B <br >some logs for the next hour might continue to be delivered to bucket A while others might be delivered <br >to the new target bucket B In all cases the new settings will eventually take effect without any further <br >action on your part <br >Related Topics <br >For more information about server access logging see the following topics <br >• Enabling Logging Using the Console (p 548) <br >• Enabling Logging Programmatically (p 550) <br >• Server Access Log Format (p 553) <br >• Deleting Log Files (p 559) <br >Enabling Logging Using the Console <br >To enable logging (see Server Access Logging (p 546)) the Amazon S3 console provides a Logging <br >section in the bucket Properties <br >When you enable logging on a bucket the console will both enable logging on the source bucket and <br >add a grant in the target bucket's ACL granting write permission to the Log Delivery group <br >API Version 20060301 <br >548Amazon Simple Storage Service Developer Guide <br >Enabling Logging Using the Console <br >To enable logging on a bucket <br >1 Sign in to the AWS Management Console and open the Amazon S3 console at https <br >consoleawsamazoncoms3 <br >2 Under All Buckets click the bucket for which access requests will be logged <br >3 In the Details pane click Properties <br >4 Under Logging do the following <br >• Select the Enabled check box <br >• In the Target Bucket box click the name of the bucket that will receive the log objects <br >•(optional) To specify a key prefix for log objects in the Target Prefix box type the prefix that <br >you want <br >5 Click Save <br >To disable logging on a bucket <br >1 Sign in to the AWS Management Console and open the Amazon S3 console at https <br >consoleawsamazoncoms3 <br >2 Under All Buckets click the bucket for which access requests will be logged <br >3 In the Details pane click Properties Under Logging clear the Enabled check box <br >4 Click Save <br >For information about enable logging programmatically see Enabling Logging <br >Programmatically (p 550) <br >For information about the log record format including the list of fields and their descriptions see Server <br >Access Log Format (p 553) <br >API Version 20060301 <br >549Amazon Simple Storage Service Developer Guide <br >Enabling Logging Programmatically <br >Enabling Logging Programmatically <br >Topics <br >• Enabling logging (p 550) <br >• Granting the Log Delivery Group WRITE and READ_ACP Permissions (p 550) <br >• Example AWS SDK for NET (p 551) <br >You can enable or disable logging programmatically by using either the Amazon S3 API or the AWS <br >SDKs To do so you both enable logging on the bucket and grant the Log Delivery group permission to <br >write logs to the target bucket <br >Enabling logging <br >To enable logging you submit a PUT Bucket logging request to add the logging configuration on <br >source bucket The request specifies the target bucket and optionally the prefix to be used with all log <br >object keys The following example identifies logbucket as the target bucket and logs as the prefix <br ><BucketLoggingStatus xmlnshttpdocs3amazonawscom20060301> <br > <LoggingEnabled> <br > <TargetBucket>logbucket<TargetBucket> <br > 
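    <!-- logbucket must be owned by the same AWS account as the source bucket; the TargetPrefix element that follows is optional -->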
<TargetPrefix>logs<TargetPrefix> <br > <LoggingEnabled> <br ><BucketLoggingStatus> <br >The log objects are written and owned by the Log Delivery account and the bucket owner is granted full <br >permissions on the log objects In addition you can optionally grant permissions to other users so that <br >they may access the logs For more information see PUT Bucket logging <br >Amazon S3 also provides the GET Bucket logging API to retrieve logging configuration on a <br >bucket To delete logging configuration you send the PUT Bucket logging request with empty <br ><BucketLoggingStatus> empty <br ><BucketLoggingStatus xmlnshttpdocs3amazonawscom20060301> <br ><BucketLoggingStatus> <br >You can use either the Amazon S3 API or the AWS SDK wrapper libraries to enable logging on a <br >bucket <br >Granting the Log Delivery Group WRITE and <br >READ_ACP Permissions <br >Amazon S3 writes the log files to the target bucket as a member of the predefined Amazon S3 group <br >Log Delivery These writes are subject to the usual access control restrictions You will need to grant <br >s3GetObjectAcl and s3PutObject permissions to this group by adding grants to the access control list <br >(ACL) of the target bucket The Log Delivery group is represented by the following URL <br >httpacsamazonawscomgroupss3LogDelivery <br >To grant WRITE and READ_ACP permissions you have to add the following grants For information <br >about ACLs see Managing Access with ACLs (p 364) <br >API Version 20060301 <br >550Amazon Simple Storage Service Developer Guide <br >Example AWS SDK for NET <br ><Grant> <br > <Grantee xmlnsxsihttpwwww3org2001XMLSchemainstance <br > xsitypeGroup> <br > <URI>httpacsamazonawscomgroupss3LogDelivery<URI> <br > <Grantee> <br > <Permission>WRITE<Permission> <br ><Grant> <br ><Grant> <br > <Grantee xmlnsxsihttpwwww3org2001XMLSchemainstance <br > xsitypeGroup> <br > <URI>httpacsamazonawscomgroupss3LogDelivery<URI> <br > <Grantee> <br > <Permission>READ_ACP<Permission> <br ><Grant> <br >For examples of adding ACL grants programmatically using AWS SDKs see Managing ACLs Using <br >the AWS SDK for Java (p 370) and Managing ACLs Using the AWS SDK for NET (p 374) <br >Example AWS SDK for NET <br >The following C# example enables logging on a bucket You will need to create two buckets source <br >bucket and target bucket The example first grants the Log Delivery group necessary permission to <br >write logs to the target bucket and then enable logging on the source bucket For more information see <br >Enabling Logging Programmatically (p 550) For instructions on how to create and test a working <br >sample see Running the Amazon S3 NET Code Examples (p 566) <br >using System <br >using AmazonS3 <br >using AmazonS3Model <br >namespace s3amazoncomdocsamples <br >{ <br > class ServerAccesLogging <br > { <br > static string sourceBucket *** Provide bucket name *** On <br > which to enable logging <br > static string targetBucket *** Provide bucket name *** Where <br > access logs can be stored <br > static string logObjectKeyPrefix Logs <br > static IAmazonS3 client <br > public static void Main(string[] args) <br > { <br > using (client new <br > AmazonS3Client(AmazonRegionEndpointUSEast1)) <br > { <br > ConsoleWriteLine(Enabling logging on source bucket) <br > try <br > { <br > Step 1 Grant Log Delivery group permission to write <br > log to the target bucket <br > GrantLogDeliveryPermissionToWriteLogsInTargetBucket() <br > Step 2 Enable logging on the source bucket <br > EnableDisableLogging() <br > } <br > 
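                // Both steps above use the same Amazon S3 client; a failure in either call surfaces as the AmazonS3Exception handled below.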
catch (AmazonS3Exception amazonS3Exception) <br > { <br >API Version 20060301 <br >551Amazon Simple Storage Service Developer Guide <br >Example AWS SDK for NET <br > if (amazonS3ExceptionErrorCode null && <br > <br > (amazonS3ExceptionErrorCodeEquals(InvalidAccessKeyId) <br > || <br > <br > amazonS3ExceptionErrorCodeEquals(InvalidSecurity))) <br > { <br > ConsoleWriteLine(Check the provided AWS <br > Credentials) <br > ConsoleWriteLine( <br > To sign up for service go to httpawsamazoncom <br >s3) <br > } <br > else <br > { <br > ConsoleWriteLine( <br > Error occurred Message'{0}' when enabling <br > logging <br > amazonS3ExceptionMessage) <br > } <br > } <br > } <br > ConsoleWriteLine(Press any key to continue) <br > ConsoleReadKey() <br > } <br > static void GrantLogDeliveryPermissionToWriteLogsInTargetBucket() <br > { <br > S3AccessControlList bucketACL new S3AccessControlList() <br > GetACLResponse aclResponse clientGetACL(new GetACLRequest <br > { BucketName targetBucket }) <br > bucketACL aclResponseAccessControlList <br > bucketACLAddGrant(new S3Grantee { URI http <br >acsamazonawscomgroupss3LogDelivery } S3PermissionWRITE) <br > bucketACLAddGrant(new S3Grantee { URI http <br >acsamazonawscomgroupss3LogDelivery } S3PermissionREAD_ACP) <br > PutACLRequest setACLRequest new PutACLRequest <br > { <br > AccessControlList bucketACL <br > BucketName targetBucket <br > } <br > clientPutACL(setACLRequest) <br > } <br > static void EnableDisableLogging() <br > { <br > S3BucketLoggingConfig loggingConfig new S3BucketLoggingConfig <br > { <br > TargetBucketName targetBucket <br > TargetPrefix logObjectKeyPrefix <br > } <br > Send request <br > PutBucketLoggingRequest putBucketLoggingRequest new <br > PutBucketLoggingRequest <br > { <br > BucketName sourceBucket <br > LoggingConfig loggingConfig <br >API Version 20060301 <br >552Amazon Simple Storage Service Developer Guide <br >Log Format <br > } <br > PutBucketLoggingResponse response <br > clientPutBucketLogging(putBucketLoggingRequest) <br > } <br > } <br >} <br >Server Access Log Format <br >The server access log files consist of a sequence of newline delimited log records Each log record <br >represents one request and consists of space delimited fields The following is an example log <br >consisting of six log records <br >79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > mybucket [06Feb2014000038 +0000] 192023 <br > 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > 3E57427F3EXAMPLE RESTGETVERSIONING GET mybucketversioning HTTP11 <br > 200 113 7 S3Console04 <br >79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > mybucket [06Feb2014000038 +0000] 192023 <br > 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > 891CE47D2EXAMPLE RESTGETLOGGING_STATUS GET mybucketlogging HTTP11 <br > 200 242 11 S3Console04 <br >79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > mybucket [06Feb2014000038 +0000] 192023 <br > 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > A1206F460EXAMPLE RESTGETBUCKETPOLICY GET mybucketpolicy HTTP11 404 <br > NoSuchBucketPolicy 297 38 S3Console04 <br >79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > mybucket [06Feb2014000100 +0000] 192023 <br > 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > 7B4A0FABBEXAMPLE RESTGETVERSIONING GET mybucketversioning HTTP11 <br > 200 113 33 S3Console04 <br >79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be <br > 
mybucket [06/Feb/2014:00:01:57 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DD6CC733AEXAMPLE REST.PUT.OBJECT s3-dg.pdf "PUT /mybucket/s3-dg.pdf HTTP/1.1" 200 - - 4406583 41754 28 "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:03:21 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be BC3C074D0EXAMPLE REST.GET.VERSIONING - "GET /mybucket?versioning HTTP/1.1" 200 - 113 - 28 - "-" "S3Console/0.4" -

Note
Any field can be set to "-" to indicate that the data was unknown or unavailable, or that the field was not applicable to this request.

The following list describes the log record fields.

Bucket Owner
The canonical user ID of the owner of the source bucket.
Example Entry
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be

Bucket
The name of the bucket that the request was processed against. If the system receives a malformed request and cannot determine the bucket, the request will not appear in any server access log.
Example Entry
mybucket

Time
The time at which the request was received. The format, using strftime() terminology, is as follows: [%d/%b/%Y:%H:%M:%S %z]
Example Entry
[06/Feb/2014:00:00:38 +0000]

Remote IP
The apparent Internet address of the requester. Intermediate proxies and firewalls might obscure the actual address of the machine making the request.
Example Entry
192.0.2.3

Requester
The canonical user ID of the requester, or the string "Anonymous" for unauthenticated requests. If the requester was an IAM user, this field will return the requester's IAM user name along with the AWS root account that the IAM user belongs to. This identifier is the same one used for access control purposes.
Example Entry
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be

Request ID
The request ID is a string generated by Amazon S3 to uniquely identify each request.
Example Entry
3E57427F33A59F07

Operation
The operation listed here is declared as SOAP.operation, REST.HTTP_method.resource_type, WEBSITE.HTTP_method.resource_type, or BATCH.DELETE.OBJECT.
Example Entry
REST.PUT.OBJECT

Key
The "key" part of the request, URL encoded, or "-" if the operation does not take a key parameter.
Example Entry
photos/2014/08/puppy.jpg

Request-URI
The Request-URI part of the HTTP request message.
Example Entry
GET /mybucket/photos/2014/08/puppy.jpg?x-foo=bar

HTTP status
The numeric HTTP status code of the response.
Example Entry
200

Error Code
The Amazon S3 Error Code (p 539), or "-" if no error occurred.
Example Entry
NoSuchBucket

Bytes Sent
The number of response bytes sent, excluding HTTP protocol overhead, or "-" if zero.
Example Entry
2662992

Object Size
The total size of the object in question.
Example Entry
3462992

Total Time
The number of milliseconds the request was in flight from the server's perspective. This value is measured from the time your request is received to the time that the last byte of the response is sent. Measurements made from the client's perspective might be longer due to network latency.
Example Entry
70

A short sketch showing one way to split these records into fields follows; the remaining fields are described after it.
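The records are space delimited, but the Time, Request-URI, Referrer, and User-Agent fields can themselves contain spaces, so a plain split() is not enough. The following Python sketch is one way to tokenize a record such as the first example above (shown here with the "-" placeholders and quoting that the field list describes); it treats bracketed and quoted values as single tokens, and the field-name list is an assumption you should adjust if the format is extended.

import re

# One token is either a [...] group, a "..." group, or a run of non-space characters.
TOKEN = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

record = ('79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket '
          '[06/Feb/2014:00:00:38 +0000] 192.0.2.3 '
          '79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be '
          '3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /mybucket?versioning HTTP/1.1" '
          '200 - 113 - 7 - "-" "S3Console/0.4" -')

fields = TOKEN.findall(record)

names = ["bucket_owner", "bucket", "time", "remote_ip", "requester", "request_id",
         "operation", "key", "request_uri", "http_status", "error_code", "bytes_sent",
         "object_size", "total_time", "turn_around_time", "referrer", "user_agent",
         "version_id"]

# New fields may be appended to the format over time; zip() simply ignores any extras.
parsed = dict(zip(names, fields))
print(parsed["operation"], parsed["http_status"], parsed["total_time"])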
Turn-Around Time
    The number of milliseconds that Amazon S3 spent processing your request. This value is
    measured from the time the last byte of your request was received until the time the first byte of
    the response was sent.
    Example Entry: 10
Referrer
    The value of the HTTP Referrer header, if present. HTTP user-agents (e.g., browsers) typically set
    this header to the URL of the linking or embedding page when making a request.
    Example Entry: "http://www.amazon.com/webservices"
User-Agent
    The value of the HTTP User-Agent header.
    Example Entry: "curl/7.15.1"
Version Id
    The version ID in the request, or "-" if the operation does not take a versionId parameter.
    Example Entry: 3HL4kqtJvjVBH40Nrjfkd

Custom Access Log Information

You can include custom information to be stored in the access log record for a request by adding
a custom query-string parameter to the URL for the request. Amazon S3 will ignore query-string
parameters that begin with "x-", but will include those parameters in the access log record for
the request, as part of the Request-URI field of the log record. For example, a GET request for
"s3.amazonaws.com/mybucket/photos/2014/08/puppy.jpg?x-user=johndoe" will work the same as
the request for "s3.amazonaws.com/mybucket/photos/2014/08/puppy.jpg", except that the
"x-user=johndoe" string will be included in the Request-URI field for the associated log record. This
functionality is available in the REST interface only.

Programming Considerations for Extensible Server Access Log Format

From time to time, we might extend the access log record format by adding new fields to the end of
each line. Code that parses server access logs must be written to handle trailing fields that it does not
understand.

Additional Logging for Copy Operations

A copy operation involves a GET and a PUT. For that reason, we log two records when performing a
copy operation. The previous list describes the fields related to the PUT part of the operation. The
following list describes the fields in the record that relate to the GET part of the copy operation.

Bucket Owner
    The canonical user ID of the bucket that stores the object being copied.
    Example Entry: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
Bucket
    The name of the bucket that stores the object being copied.
    Example Entry: mybucket
Time
    The time at which the request was received. The format, using strftime() terminology, is as
    follows: [%d/%b/%Y:%H:%M:%S %z]
    Example Entry: [06/Feb/2014:00:00:38 +0000]
Remote IP
    The apparent Internet address of the requester. Intermediate proxies and firewalls might obscure
    the actual address of the machine making the request.
    Example Entry: 192.0.2.3
Requester
    The canonical user ID of the requester, or the string "Anonymous" for unauthenticated requests. If
    the requester was an IAM user, this field will return the requester's IAM user name along with the
    AWS root account that the IAM user belongs to. This identifier is the same one used for access
    control purposes.
    Example Entry: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
Request ID
    The request ID is a string generated by Amazon S3 to uniquely identify each request.
    Example Entry: 3E57427F33A59F07
Operation
    The operation listed here is declared as SOAP.operation, REST.HTTP_method.resource_type,
    WEBSITE.HTTP_method.resource_type, or BATCH.DELETE.OBJECT.
    Example Entry: REST.COPY.OBJECT_GET
Key
    The key of the object being copied, or "-" if the operation does not take a key parameter.
    Example Entry: /photos/2014/08/puppy.jpg
Request-URI
    The Request-URI part of the HTTP request message.
    Example Entry: "GET /mybucket/photos/2014/08/puppy.jpg?x-foo=bar"
HTTP status
    The numeric HTTP status code of the GET portion of the copy operation.
    Example Entry: 200
Error Code
    The Amazon S3 Error Code (p. 539) of the GET portion of the copy operation, or "-" if no error
    occurred.
    Example Entry: NoSuchBucket
Bytes Sent
    The number of response bytes sent, excluding HTTP protocol overhead, or "-" if zero.
    Example Entry: 2662992
Object Size
    The total size of the object in question.
    Example Entry: 3462992
Total Time
    The number of milliseconds the request was in flight from the server's perspective. This value is
    measured from the time your request is received to the time that the last byte of the response is
    sent. Measurements made from the client's perspective might be longer due to network latency.
    Example Entry: 70
Turn-Around Time
    The number of milliseconds that Amazon S3 spent processing your request. This value is
    measured from the time the last byte of your request was received until the time the first byte of
    the response was sent.
    Example Entry: 10
Referrer
    The value of the HTTP Referrer header, if present. HTTP user-agents (e.g., browsers) typically set
    this header to the URL of the linking or embedding page when making a request.
    Example Entry: "http://www.amazon.com/webservices"
User-Agent
    The value of the HTTP User-Agent header.
    Example Entry: "curl/7.15.1"
Version Id
    The version ID of the object being copied, or "-" if the x-amz-copy-source header didn't specify a
    versionId parameter as part of the copy source.
    Example Entry: 3HL4kqtJvjVBH40Nrjfkd

Deleting Log Files

A bucket with server access logging enabled (see Server Access Logging (p. 546)) can have many server
log objects created over time. Your application might need these access logs for a specific period after
they are created, and after that you may want to delete them. You can use Amazon S3 lifecycle
configuration to set rules so that Amazon S3 automatically queues these objects for deletion at the end
of their life.

If you specified a prefix in your logging configuration, you can set a lifecycle configuration rule to delete
log objects with that prefix. For example, if your log objects have the prefix logs/, after a specified time
you can set a lifecycle configuration rule to delete objects with the prefix logs/. For more information
about lifecycle configuration, see Object Lifecycle Management (p. 109).
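As a minimal sketch of this idea, the following Java snippet adds a lifecycle rule that expires objects under the logs/ prefix after 365 days. It assumes the AWS SDK for Java (1.x-style API) and credentials from the default provider chain; the bucket name, rule ID, and retention period are placeholders that you would replace with values matching your own logging configuration.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;

public class ExpireLogObjects {
    public static void main(String[] args) {
        // Placeholder bucket name; logs/ should match the TargetPrefix used in your logging configuration.
        String targetBucket = "examplelogbucket";

        // Expire objects whose keys begin with logs/ 365 days after they are created.
        BucketLifecycleConfiguration.Rule expireLogs = new BucketLifecycleConfiguration.Rule()
                .withId("Delete server access logs")
                .withPrefix("logs/")
                .withExpirationInDays(365)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        AmazonS3 s3client = new AmazonS3Client(); // Uses the default credential provider chain.
        s3client.setBucketLifecycleConfiguration(targetBucket,
                new BucketLifecycleConfiguration().withRules(expireLogs));
    }
}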
Using the AWS SDKs, CLI, and Explorers

Topics
• Specifying Signature Version in Request Authentication (p. 561)
• Set Up the AWS CLI (p. 562)
• Using the AWS SDK for Java (p. 563)
• Using the AWS SDK for .NET (p. 565)
• Using the AWS SDK for PHP and Running PHP Examples (p. 566)
• Using the AWS SDK for Ruby Version 2 (p. 568)
• Using the AWS SDK for Python (Boto) (p. 569)

You can use the AWS SDKs when developing applications with Amazon S3. The AWS SDKs simplify
your programming tasks by wrapping the underlying REST API. Mobile SDKs are also available for
building connected mobile applications using AWS. This section provides an overview of using AWS
SDKs for developing Amazon S3 applications. This section also describes how you can test the AWS
SDK code samples provided in this guide.

In addition to the AWS SDKs, AWS Explorers are available for Visual Studio and Eclipse for Java IDE.
In this case, the SDKs and the explorers are available bundled together as AWS Toolkits.

You can also use the AWS Command Line Interface (CLI) to manage Amazon S3 buckets and objects.

AWS Toolkit for Eclipse

The AWS Toolkit for Eclipse includes both the AWS SDK for Java and AWS Explorer for Eclipse. The
AWS Explorer for Eclipse is an open source plugin for Eclipse for Java IDE that makes it easier for
developers to develop, debug, and deploy Java applications using AWS. The easy-to-use GUI enables
you to access and administer your AWS infrastructure, including Amazon S3. You can perform common
operations such as managing your buckets and objects and setting IAM policies, while developing
applications, all from within the context of Eclipse for Java IDE. For setup instructions, see Set up the
Toolkit. For examples of using the explorer, see How to Access AWS Explorer.

AWS Toolkit for Visual Studio

AWS Explorer for Visual Studio is an extension for Microsoft Visual Studio that makes it easier for
developers to develop, debug, and deploy .NET applications using Amazon Web Services. The easy-
to-use GUI enables you to access and administer your AWS infrastructure, including Amazon S3. You
can perform common operations such as managing your buckets and objects or setting IAM policies,
while developing applications, all from within the context of Visual Studio. For setup instructions, go to
Setting Up the AWS Toolkit for Visual Studio. For examples of using Amazon S3 with the explorer, go
to Using Amazon S3 from AWS Explorer.

AWS SDKs

You can download only the SDKs. For information about downloading the SDK libraries, go to Sample
Code Libraries.

AWS CLI

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services, including
Amazon S3. For information about downloading the AWS CLI, go to AWS Command Line Interface.

Specifying Signature Version in Request Authentication

In the Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt), and China (Beijing) regions, Amazon
S3 supports only Signature Version 4. In all other regions, Amazon S3 supports both Signature Version
4 and Signature Version 2.

For all AWS regions, AWS SDKs use Signature Version 4 by default to authenticate requests. When
using AWS SDKs that were released before May 2016, you may be required to request Signature
Version 4, as shown in the following table.

SDK — Requesting Signature Version 4 for Request Authentication
AWS CLI
    For the default profile, run the following command:
        aws configure set default.s3.signature_version s3v4
    For a custom profile, run the following command:
        aws configure set profile.your_profile_name.s3.signature_version s3v4
Java SDK
    Add the following in your code:
        System.setProperty(SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true");
    Or, on the command line, specify the following:
        -Dcom.amazonaws.services.s3.enableV4
JavaScript SDK
    Set the signatureVersion parameter to v4 when constructing the client:
        var s3 = new AWS.S3({signatureVersion: 'v4'});
PHP SDK
    Set the signature parameter to v4 when constructing the Amazon S3 service client:
        <?php
        $s3 = \Aws\S3\S3Client::factory(array('signature' => 'v4'));
Python/Boto SDK
    Specify the following in the boto default config file:
        [s3] use-sigv4 = True
Ruby SDK
    Ruby SDK Version 1: Set the :s3_signature_version parameter to :v4 when constructing the client:
        s3 = AWS::S3::Client.new(:s3_signature_version => :v4)
    Ruby SDK Version 2: Set the signature_version parameter to v4 when constructing the client:
        s3 = Aws::S3::Client.new(signature_version: 'v4')
.NET SDK
    Add the following to the code before creating the S3 client:
        AWSConfigsS3.UseSignatureVersion4 = true;
    Or, add the following to the config file:
        <appSettings>
            <add key="AWS.S3.UseSignatureVersion4" value="true" />
        </appSettings>

Set Up the AWS CLI

Follow these steps to download and configure the AWS Command Line Interface (AWS CLI).

Note
Services in AWS, such as Amazon S3, require that you provide credentials when you access
them, so that the service can determine whether you have permissions to access the resources
owned by that service. The console requires your password. You can create access keys for
your AWS account to access the AWS CLI or API. However, we don't recommend that you
access AWS using the credentials for your AWS account. Instead, we recommend that you use
AWS Identity and Access Management (IAM). Create an IAM user, add the user to an IAM
group with administrative permissions, and then grant administrative permissions to the IAM
user that you created. You can then access AWS using a special URL and that IAM user's
credentials. For instructions, go to Creating Your First IAM User and Administrators Group in
the IAM User Guide.

To set up the AWS CLI

1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS
   Command Line Interface User Guide:
   • Getting Set Up with the AWS Command Line Interface
   • Configuring the AWS Command Line Interface
2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile
   when executing the AWS CLI commands.

   [adminuser]
   aws_access_key_id = adminuser access key ID
   aws_secret_access_key = adminuser secret access key
   region = aws-region

   For a list of available AWS regions, see Regions and Endpoints in the AWS General Reference.
3. Verify the setup by entering the following commands at the command prompt.
   • Try the help command to verify that the AWS CLI is installed on your computer:

     aws help

   • Try an S3 command to verify that the user can reach Amazon S3.
     This command lists buckets in your account. The AWS CLI uses the adminuser credentials to
     authenticate the request:

     aws s3 ls --profile adminuser

Using the AWS SDK for Java

The AWS SDK for Java provides an API for the Amazon S3 bucket and object operations. For object
operations, in addition to providing the API to upload objects in a single operation, the SDK provides
an API to upload large objects in parts (see Uploading Objects Using Multipart Upload API (p. 165)).
The API gives you the option of using a high-level or low-level API.

Low-Level API

The low-level APIs correspond to the underlying Amazon S3 REST operations, such as the create,
update, and delete operations that apply to buckets and objects. When you upload large objects using
the low-level multipart upload API, it provides greater control, such as letting you pause and resume
multipart uploads, vary part sizes during the upload, or begin uploads when you do not know the size
of the data in advance. If you do not have these requirements, use the high-level API to upload objects.

High-Level API

For uploading objects, the SDK provides a higher level of abstraction through the TransferManager
class. The high-level API is a simpler API where, in just a few lines of code, you can upload files and
streams to Amazon S3. You should use this API to upload data unless you need to control the upload
as described in the preceding Low-Level API section.

For smaller data sizes, the TransferManager API uploads data in a single operation. However,
TransferManager switches to using the multipart upload API when the data size reaches a certain
threshold. When possible, TransferManager uses multiple threads to concurrently upload the parts.
If a part upload fails, the API retries the failed part upload up to three times. These are configurable
options that you set using the TransferManagerConfiguration class.

Note
When using a stream for the source of data, the TransferManager class will not do
concurrent uploads.

The Java API Organization

The following packages in the AWS SDK for Java provide the API:

• com.amazonaws.services.s3—Provides the implementation APIs for Amazon S3 bucket and object
  operations.
  For example, it provides methods to create buckets, upload objects, get objects, delete objects, and
  list keys.
• com.amazonaws.services.s3.transfer—Provides the high-level API for data upload.
  This high-level API is designed to further simplify uploading objects to Amazon S3. It includes the
  TransferManager class. It is particularly useful when uploading large objects in parts. It also
  includes the TransferManagerConfiguration class, which you can use to configure the minimum
  part size for uploading parts and the threshold in bytes at which to use multipart uploads.
• com.amazonaws.services.s3.model—Provides the low-level API classes to create requests and
  process responses.
  For example, it includes the GetObjectRequest class to describe your get object request, the
  ListObjectsRequest class to describe your list keys request, and the
  InitiateMultipartUploadRequest and InitiateMultipartUploadResult classes to use
  when initiating a multipart upload.

For more information about the AWS SDK for Java API, go to AWS SDK for Java API Reference.
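To illustrate the high-level API described above, the following minimal sketch uploads a single file with TransferManager. The bucket name, key, and file path are placeholders, and the snippet assumes that credentials are available from the default credential provider chain; treat it as an outline rather than a complete, production-ready program.

import java.io.File;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelUploadSketch {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder bucket, key, and file path.
        String bucketName = "examplebucket";
        String keyName = "photos/2014/08/puppy.jpg";
        File file = new File("/path/to/puppy.jpg");

        // TransferManager decides whether to use a single PUT or a multipart upload
        // based on the size of the file.
        TransferManager tm = new TransferManager();
        try {
            Upload upload = tm.upload(bucketName, keyName, file);
            upload.waitForCompletion(); // Blocks until the transfer finishes or fails.
        } finally {
            tm.shutdownNow(); // Releases the threads used for concurrent part uploads.
        }
    }
}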
Testing the Java Code Examples

The easiest way to get started with the Java code examples is to install the latest AWS Toolkit for
Eclipse. For information on setting up your Java development environment and the AWS Toolkit for
Eclipse, see Installing the AWS SDK for Java in the AWS SDK for Java Developer Guide.

The following tasks guide you through the creation and testing of the Java code examples provided in
this guide.

General Process of Creating Java Code Examples

1. Create an AWS credentials profile file as described in Set Up your AWS Credentials for
   Use with the AWS SDK for Java in the AWS SDK for Java Developer Guide.
2. Create a new AWS Java project in Eclipse. The project is preconfigured with the AWS
   SDK for Java.
3. Copy the code from the section you are reading to your project.
4. Update the code by providing any required data. For example, if uploading a file, provide
   the file path and the bucket name.
5. Run the code. Verify that the object is created by using the AWS Management Console.
   For more information about the AWS Management Console, go to http://aws.amazon.com/console.

Using the AWS SDK for .NET

Topics
• The .NET API Organization (p. 565)
• Running the Amazon S3 .NET Code Examples (p. 566)

The AWS SDK for .NET provides the API for the Amazon S3 bucket and object operations. For object
operations, in addition to providing the API to upload objects in a single operation, the SDK provides
the API to upload large objects in parts (see Uploading Objects Using Multipart Upload API (p. 165)).
The API gives you the option of using a high-level or low-level API.

Low-Level API

The low-level APIs correspond to the underlying Amazon S3 REST operations, including the create,
update, and delete operations that apply to buckets and objects. When you upload large objects using
the low-level multipart upload API (see Uploading Objects Using Multipart Upload API (p. 165)), it
provides greater control, such as letting you pause and resume multipart uploads, vary part sizes
during the upload, or begin uploads when you do not know the size of the data in advance. If you do
not have these requirements, use the high-level API for uploading objects.

High-Level API

For uploading objects, the SDK provides a higher level of abstraction through the TransferUtility
class. The high-level API is a simpler API where, in just a few lines of code, you can upload files and
streams to Amazon S3. You should use this API to upload data unless you need to control the upload
as described in the preceding Low-Level API section.

For smaller data sizes, the TransferUtility API uploads data in a single operation. However,
TransferUtility switches to using the multipart upload API when the data size reaches a certain
threshold. By default, it uses multiple threads to concurrently upload the parts. If a part upload fails,
the API retries the failed part upload up to three times. These are configurable options.

Note
When using a stream for the source of data, the TransferUtility class will not do
concurrent uploads.

The .NET API Organization

When writing Amazon S3 applications using the AWS SDK for .NET, you use the AWSSDK.dll. The
following namespaces in this assembly provide the multipart upload API:

• Amazon.S3.Transfer—Provides the high-level API to upload your data in parts.
  It includes the TransferUtility class, which enables you to specify a file, directory, or
  stream for uploading your data. It also includes the TransferUtilityUploadRequest and
  TransferUtilityUploadDirectoryRequest classes to configure advanced settings, such
  as the number of concurrent threads, part size, object metadata, the storage class (STANDARD,
  REDUCED_REDUNDANCY), and object ACL.
• Amazon.S3—Provides the implementation for the low-level APIs.
  It provides methods that correspond to the Amazon S3 REST multipart upload API (see Using the
  REST API for Multipart Upload (p. 205)).
• Amazon.S3.Model—Provides the low-level API classes to create requests and process responses.
  For example, it provides the InitiateMultipartUploadRequest and
  InitiateMultipartUploadResponse classes you can use when initiating a multipart upload, and
  the UploadPartRequest and UploadPartResponse classes when uploading parts.

For more information about the AWS SDK for .NET API, go to AWS SDK for .NET Reference.

Running the Amazon S3 .NET Code Examples

The easiest way to get started with the .NET code examples is to install the AWS SDK for .NET. For
more information, go to AWS SDK for .NET.

Note
The examples in this guide are AWS SDK for .NET version 2.0 compliant.

The following tasks guide you through creating and testing the C# code samples provided in this
section.

General Process of Creating .NET Code Examples

1. Create a credentials profile for your AWS credentials, as described in the AWS SDK
   for .NET topic Configuring AWS Credentials.
2. Create a new Visual Studio project using the AWS Empty Project template.
3. Replace the code in the project file Program.cs with the code in the section you are
   reading.
4. Run the code. Verify that the object is created using the AWS Management Console.
   For more information about the AWS Management Console, go to http://aws.amazon.com/console.

Using the AWS SDK for PHP and Running PHP Examples

The AWS SDK for PHP provides access to the API for Amazon S3 bucket and object operations. The
SDK gives you the option of using the service's low-level API or using higher-level abstractions.

The SDK is available at AWS SDK for PHP, which also has instructions for installing and getting
started with the SDK.

Note
The setup for using the AWS SDK for PHP depends on your environment and how you want
to run your application. To set up your environment to run the examples in this documentation,
see the AWS SDK for PHP Getting Started Guide.

AWS SDK for PHP Levels

Low-Level API

The low-level APIs correspond to the underlying Amazon S3 REST operations, including the create,
update, and delete operations on buckets and objects. The low-level APIs provide greater control over
these operations. For example, you can batch your requests and execute them in parallel, or, when
using the multipart upload API (see Uploading Objects Using Multipart Upload API (p. 165)), you can
manage the object parts individually. Note that these low-level API calls return a result that includes all
the Amazon S3 response details.

High-Level Abstractions

The high-level abstractions are intended to simplify common use cases.
For example, for uploading large objects using the low-level API, you must first call
Aws\S3\S3Client::createMultipartUpload(), then call the Aws\S3\S3Client::uploadPart()
method to upload object parts, and then call the Aws\S3\S3Client::completeMultipartUpload()
method to complete the upload. Instead, you could use the higher-level
Aws\S3\Model\MultipartUpload\UploadBuilder object, which simplifies creating a multipart
upload.

Another example of using a higher-level abstraction is when enumerating objects in a bucket: you can
use the iterators feature of the AWS SDK for PHP to return all the object keys, regardless of how many
objects you have stored in the bucket. If you use the low-level API, the response returns only up to
1,000 keys; if you have more than 1,000 objects in the bucket, the result will be truncated and you will
have to manage the response and check for truncation.

Running PHP Examples

The following procedure describes how to run the PHP code examples in this guide.

To Run the PHP Code Examples

1. Download and install the AWS SDK for PHP, and then verify that your environment meets
   the minimum requirements as described in the AWS SDK for PHP Getting Started Guide.
2. Install the AWS SDK for PHP according to the instructions in the AWS SDK for PHP
   Getting Started Guide. Depending on the installation method that you use, you might
   have to modify your code to resolve dependencies among the PHP extensions.
   All of the PHP code samples in this document use the Composer dependency manager
   that is described in the AWS SDK for PHP Getting Started Guide. Each code sample
   includes the following line to include its dependencies:

   require 'vendor/autoload.php';

3. Create a credentials profile for your AWS credentials, as described in the AWS SDK for
   PHP topic Using the AWS credentials file and credential profiles. At run time, when you
   create a new Amazon S3 client object, the client will obtain your AWS credentials from
   the credentials profile.
4. Copy the example code from the document to your project. Depending upon your
   environment, you might need to add lines to the code example that reference your
   configuration and SDK files.
   For example, to load a PHP example in a browser, add the following to the top of the PHP
   code, and then save it as a PHP file (extension .php) in the Web application directory
   (such as www or htdocs).

   <?php
   header('Content-Type: text/plain; charset=utf-8');

   // Include the AWS SDK using the Composer autoloader.
   require 'vendor/autoload.php';

5. Test the example according to your setup.

Related Resources

• AWS SDK for PHP for Amazon S3
• AWS SDK for PHP Documentation

Using the AWS SDK for Ruby Version 2

The AWS SDK for Ruby provides an API for Amazon S3 bucket and object operations. For object
operations, you can use the API to upload objects in a single operation or upload large objects in
parts (see Uploading Objects Using Multipart Upload). However, the API for a single-operation upload
can also accept large objects and, behind the scenes, manage the upload in parts for you, thereby
reducing the amount of script you need to write.

The Ruby API Organization

When creating Amazon S3 applications using the AWS SDK for Ruby, you must install the SDK for
Ruby gem. For more information, see the AWS SDK for Ruby Version 2.
Once installed, you can access the API, including the following key classes:

• Aws::S3::Resource—Represents the interface to Amazon S3 for the Ruby SDK and provides
  methods for creating and enumerating buckets.
  The S3 class provides the #buckets instance method for accessing existing buckets or creating
  new ones.
• Aws::S3::Bucket—Represents an Amazon S3 bucket.
  The Bucket class provides the #object(key) and #objects methods for accessing the objects
  in a bucket, as well as methods to delete a bucket and return information about a bucket, like the
  bucket policy.
• Aws::S3::Object—Represents an Amazon S3 object identified by its key.
  The Object class provides methods for getting and setting properties of an object, specifying the
  storage class for storing objects, and setting object permissions using access control lists. The
  Object class also has methods for deleting, uploading, and copying objects. When uploading objects
  in parts, this class provides options for you to specify the order of parts uploaded and the part size.

For more information about the AWS SDK for Ruby API, go to AWS SDK for Ruby Version 2.

Testing the Ruby Script Examples

The easiest way to get started with the Ruby script examples is to install the latest AWS SDK for
Ruby gem. For information about installing or updating to the latest gem, go to AWS SDK for Ruby
Version 2. The following tasks guide you through the creation and testing of the Ruby script examples,
assuming that you have installed the AWS SDK for Ruby.

General Process of Creating and Testing Ruby Script Examples

1. To access AWS, you must provide a set of credentials for your SDK for Ruby application.
   For more information, see Setting up AWS Credentials for Use with the SDK for Ruby.
2. Create a new SDK for Ruby script and add the following lines to the top of the script.

   #!/usr/bin/env ruby
   require 'rubygems'
   require 'aws-sdk'

   The first line is the interpreter directive, and the two require statements import two
   required gems into your script.
3. Copy the code from the section you are reading to your script.
4. Update the code by providing any required data. For example, if uploading a file, provide
   the file path and the bucket name.
5. Run the script. Verify changes to buckets and objects by using the AWS Management
   Console. For more information about the AWS Management Console, go to http://aws.amazon.com/console.

Ruby Samples

The following links contain samples to help get you started with the SDK for Ruby Version 2:

• Using the AWS SDK for Ruby Version 2 (p. 67)
• Upload an Object Using the AWS SDK for Ruby (p. 163)

Using the AWS SDK for Python (Boto)

Boto is a Python package that provides interfaces to AWS, including Amazon S3. For more information
about Boto, go to the AWS SDK for Python (Boto). The getting started link on this page provides step-
by-step instructions to get started.

Appendices

This Amazon Simple Storage Service Developer Guide appendix includes the following sections.

Topics
• Appendix A: Using the SOAP API (p. 570)
• Appendix B: Authenticating Requests (AWS Signature Version 2) (p. 573)

Appendix A: Using the SOAP API
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

This section contains information specific to the Amazon S3 SOAP API.

Note
SOAP requests, both authenticated and anonymous, must be sent to Amazon S3 using SSL.
Amazon S3 returns an error when you send a SOAP request over HTTP.

Common SOAP API Elements

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

You can interact with Amazon S3 using SOAP 1.1 over HTTP. The Amazon S3 WSDL, which describes
the Amazon S3 API in a machine-readable way, is available at
http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl. The Amazon S3 schema is available at
http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.xsd.

Most users will interact with Amazon S3 using a SOAP toolkit tailored for their language and
development environment. Different toolkits will expose the Amazon S3 API in different ways. Please
refer to your specific toolkit documentation to understand how to use it. This section illustrates the
Amazon S3 SOAP operations in a toolkit-independent way by exhibiting the XML requests and
responses as they appear "on the wire."

Common Elements

You can include the following authorization-related elements with any SOAP request:

• AWSAccessKeyId: The AWS Access Key ID of the requester
• Timestamp: The current time on your system
• Signature: The signature for the request

Authenticating SOAP Requests

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

Every non-anonymous request must contain authentication information to establish the identity of the
principal making the request. In SOAP, the authentication information is put into the following elements
of the SOAP request:

• Your AWS Access Key ID
  Note
  When making authenticated SOAP requests, temporary security credentials are not
  supported. For more information about types of credentials, see Making Requests (p. 11).
• Timestamp: This must be a dateTime (go to http://www.w3.org/TR/xmlschema-2/#dateTime)
  in the Coordinated Universal Time (Greenwich Mean Time) time zone, such as
  2009-01-01T12:00:00.000Z. Authorization will fail if this timestamp is more than 15 minutes
  away from the clock on Amazon S3 servers.
• Signature: The RFC 2104 HMAC-SHA1 digest (go to http://www.ietf.org/rfc/rfc2104.txt) of the
  concatenation of "AmazonS3" + OPERATION + Timestamp, using your AWS Secret Access Key as
  the key. For example, in the following CreateBucket sample request, the signature element would
  contain the HMAC-SHA1 digest of the value "AmazonS3CreateBucket2009-01-01T12:00:00.000Z".

Example

<CreateBucket xmlns="http://doc.s3.amazonaws.com/2006-03-01">
  <Bucket>quotes</Bucket>
  <Acl>private</Acl>
  <AWSAccessKeyId>AKIAIOSFODNN7EXAMPLE</AWSAccessKeyId>
  <Timestamp>2009-01-01T12:00:00.000Z</Timestamp>
  <Signature>Iuyz3d3P0aTou39dzbqaEXAMPLE=</Signature>
</CreateBucket>

Note
SOAP requests, both authenticated and anonymous, must be sent to Amazon S3 using SSL.
Amazon S3 returns an error when you send a SOAP request over HTTP.

Important
Due to different interpretations regarding how extra time precision should be dropped, .NET
users should take care not to send Amazon S3 overly specific time stamps. This can be
accomplished by manually constructing DateTime objects with only millisecond precision.

Setting Access Policy with SOAP

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

Access control can be set at the time a bucket or object is written by including the AccessControlList
element with the request to CreateBucket, PutObjectInline, or PutObject. The
AccessControlList element is described in Managing Access Permissions to Your Amazon S3
Resources (p. 266). If no access control list is specified with these operations, the resource is created
with a default access policy that gives the requester FULL_CONTROL access (this is the case even if
the request is a PutObjectInline or PutObject request for an object that already exists).

Following is a request that writes data to an object, makes the object readable by anonymous
principals, and gives the specified user FULL_CONTROL rights to the bucket. (Most developers will
want to give themselves FULL_CONTROL access to their own bucket.)

Example

Following is a request that writes data to an object and makes the object readable by anonymous
principals.

Sample Request

<PutObjectInline xmlns="http://doc.s3.amazonaws.com/2006-03-01">
  <Bucket>quotes</Bucket>
  <Key>Nelson</Key>
  <Metadata>
    <Name>Content-Type</Name>
    <Value>text/plain</Value>
  </Metadata>
  <Data>aGEtaGE=</Data>
  <ContentLength>5</ContentLength>
  <AccessControlList>
    <Grant>
      <Grantee xsi:type="CanonicalUser">
        <ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
        <DisplayName>chriscustomer</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
    <Grant>
      <Grantee xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
  </AccessControlList>
  <AWSAccessKeyId>AKIAIOSFODNN7EXAMPLE</AWSAccessKeyId>
  <Timestamp>2009-03-01T12:00:00.183Z</Timestamp>
  <Signature>Iuyz3d3P0aTou39dzbqaEXAMPLE=</Signature>
</PutObjectInline>

Sample Response

<PutObjectInlineResponse xmlns="http://s3.amazonaws.com/doc/2006-03-01">
  <PutObjectInlineResponse>
    <ETag>"828ef3fdfa96f00ad9f27c383fc9ac7f"</ETag>
    <LastModified>2009-01-01T12:00:00.000Z</LastModified>
  </PutObjectInlineResponse>
</PutObjectInlineResponse>

The access control policy can be read or set for an existing bucket or object using the
GetBucketAccessControlPolicy, GetObjectAccessControlPolicy,
SetBucketAccessControlPolicy, and SetObjectAccessControlPolicy methods. For more
information, see the detailed explanation of these methods.
Appendix B: Authenticating Requests (AWS Signature Version 2)

Topics
• Authenticating Requests Using the REST API (p. 574)
• Signing and Authenticating REST Requests (p. 575)
• Browser-Based Uploads Using POST (AWS Signature Version 2) (p. 586)

Note
This topic explains authenticating requests using Signature Version 2. Amazon S3 now
supports the latest Signature Version 4, which is supported in all regions; it is the only version
supported for new AWS regions. For more information, go to Authenticating Requests (AWS
Signature Version 4) in the Amazon Simple Storage Service API Reference.

Authenticating Requests Using the REST API

When accessing Amazon S3 using REST, you must provide the following items in your request so the
request can be authenticated:

Request Elements

• AWS Access Key Id – Each request must contain the access key ID of the identity you are using to
  send your request.
• Signature – Each request must contain a valid request signature, or the request is rejected.
  A request signature is calculated using your secret access key, which is a shared secret known only
  to you and AWS.
• Time stamp – Each request must contain the date and time the request was created, represented
  as a string in UTC.
• Date – Each request must contain the time stamp of the request.
  Depending on the API action you're using, you can provide an expiration date and time for the
  request instead of, or in addition to, the time stamp. See the authentication topic for the particular
  action to determine what it requires.

Following are the general steps for authenticating requests to Amazon S3. It is assumed you have the
necessary security credentials, access key ID, and secret access key.

1. Construct a request to AWS.
2. Calculate the signature using your secret access key.
3. Send the request to Amazon S3. Include your access key ID and the signature in your
   request. Amazon S3 performs the next three steps.
4. Amazon S3 uses the access key ID to look up your secret access key.
5. Amazon S3 calculates a signature from the request data and the secret access key, using
   the same algorithm that you used to calculate the signature you sent in the request.
6. If the signature generated by Amazon S3 matches the one you sent in the request, the
   request is considered authentic. If the comparison fails, the request is discarded, and
   Amazon S3 returns an error response.

Detailed Authentication Information

For detailed information about REST authentication, see Signing and Authenticating REST
Requests (p. 575).

Signing and Authenticating REST Requests

Topics
• Using Temporary Security Credentials (p. 576)
• The Authentication Header (p. 577)
• Request Canonicalization for Signing (p. 578)
• Constructing the CanonicalizedResource Element (p. 578)
• Constructing the CanonicalizedAmzHeaders Element (p. 579)
• Positional versus Named HTTP Header StringToSign Elements (p. 579)
• Time Stamp Requirement (p. 579)
• Authentication Examples (p. 580)
• REST Request Signing Problems (p. 584)
• Query String Request Authentication Alternative (p. 584)

Note
This topic explains authenticating requests using Signature Version 2. Amazon S3 now
supports the latest Signature Version 4. This latest signature version is supported in all
regions, and any new regions after January 30, 2014 will support only Signature Version 4. For
more information, go to Authenticating Requests (AWS Signature Version 4) in the Amazon
Simple Storage Service API Reference.

Authentication is the process of proving your identity to the system. Identity is an important factor in
Amazon S3 access control decisions. Requests are allowed or denied in part based on the identity of
the requester. For example, the right to create buckets is reserved for registered developers and (by
default) the right to create objects in a bucket is reserved for the owner of the bucket in question. As a
developer, you'll be making requests that invoke these privileges, so you'll need to prove your identity
to the system by authenticating your requests. This section shows you how.

Note
The content in this section does not apply to HTTP POST. For more information, see Browser-
Based Uploads Using POST (AWS Signature Version 2) (p. 586).

The Amazon S3 REST API uses a custom HTTP scheme based on a keyed-HMAC (Hash Message
Authentication Code) for authentication. To authenticate a request, you first concatenate selected
elements of the request to form a string. You then use your AWS secret access key to calculate the
HMAC of that string. Informally, we call this process "signing the request," and we call the output of the
HMAC algorithm the signature, because it simulates the security properties of a real signature. Finally,
you add this signature as a parameter of the request by using the syntax described in this section.

When the system receives an authenticated request, it fetches the AWS secret access key that you
claim to have and uses it in the same way to compute a signature for the message it received. It
then compares the signature it calculated against the signature presented by the requester. If the
two signatures match, the system concludes that the requester must have access to the AWS secret
access key and therefore acts with the authority of the principal to whom the key was issued. If the two
signatures do not match, the request is dropped and the system responds with an error message.

Example Authenticated Amazon S3 REST Request

GET /photos/puppy.jpg HTTP/1.1
Host: johnsmith.s3.amazonaws.com
Date: Mon, 26 Mar 2007 19:37:58 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg=

Using Temporary Security Credentials

If you are signing your request using temporary security credentials (see Making Requests (p. 11)), you
must include the corresponding security token in your request by adding the x-amz-security-token
header.

When you obtain temporary security credentials using the AWS Security Token Service API, the
response includes temporary security credentials and a session token. You provide the session token
value in the x-amz-security-token header when you send requests to Amazon S3. For information
about the AWS Security Token Service API provided by IAM, go to Action in the AWS Security Token
Service API Reference Guide.
The Authentication Header

The Amazon S3 REST API uses the standard HTTP Authorization header to pass authentication
information. (The name of the standard header is unfortunate because it carries authentication
information, not authorization.) Under the Amazon S3 authentication scheme, the Authorization header
has the following form:

Authorization: AWS AWSAccessKeyId:Signature

Developers are issued an AWS access key ID and AWS secret access key when they register. For
request authentication, the AWSAccessKeyId element identifies the access key ID that was used to
compute the signature and, indirectly, the developer making the request.

The Signature element is the RFC 2104 HMAC-SHA1 of selected elements from the request, and
so the Signature part of the Authorization header will vary from request to request. If the request
signature calculated by the system matches the Signature included with the request, the requester
will have demonstrated possession of the AWS secret access key. The request will then be processed
under the identity, and with the authority, of the developer to whom the key was issued.

Following is pseudogrammar that illustrates the construction of the Authorization request header.
(In the example, \n means the Unicode code point U+000A, commonly called newline.)

Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;

Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );

StringToSign = HTTP-Verb + "\n" +
    Content-MD5 + "\n" +
    Content-Type + "\n" +
    Date + "\n" +
    CanonicalizedAmzHeaders +
    CanonicalizedResource;

CanonicalizedResource = [ "/" + Bucket ] +
    <HTTP-Request-URI, from the protocol name up to the query string> +
    [ subresource, if present. For example "?acl", "?location", "?logging", or "?torrent" ];

CanonicalizedAmzHeaders = <described below>

HMAC-SHA1 is an algorithm defined by RFC 2104, Keyed-Hashing for Message Authentication.
The algorithm takes as input two byte-strings, a key and a message. For Amazon S3 request
authentication, use your AWS secret access key (YourSecretAccessKeyID) as the key, and the
UTF-8 encoding of the StringToSign as the message. The output of HMAC-SHA1 is also a byte
string, called the digest. The Signature request parameter is constructed by Base64 encoding this
digest.

Request Canonicalization for Signing

Recall that when the system receives an authenticated request, it compares the computed request
signature with the signature provided in the request in StringToSign. For that reason, you must
compute the signature by using the same method used by Amazon S3. We call the process of putting a
request in an agreed-upon form for signing "canonicalization."

Constructing the CanonicalizedResource Element

CanonicalizedResource represents the Amazon S3 resource targeted by the request. Construct it
for a REST request as follows.

Launch Process

1. Start with an empty string ("").
2. If the request specifies a bucket using the HTTP Host header (virtual hosted-style), append the
   bucket name preceded by a "/" (e.g., "/bucketname"). For path-style requests and requests that
   don't address a bucket, do nothing. For more information about virtual hosted-style requests, see
   Virtual Hosting of Buckets (p. 50).
   For a virtual hosted-style request "https://johnsmith.s3.amazonaws.com/photos/puppy.jpg", the
   CanonicalizedResource is "/johnsmith".
   For the path-style request "https://s3.amazonaws.com/johnsmith/photos/puppy.jpg", the
   CanonicalizedResource is "".
3. Append the path part of the un-decoded HTTP Request-URI, up to but not including the query
   string.
   For a virtual hosted-style request "https://johnsmith.s3.amazonaws.com/photos/puppy.jpg", the
   CanonicalizedResource is "/johnsmith/photos/puppy.jpg".
   For a path-style request "https://s3.amazonaws.com/johnsmith/photos/puppy.jpg",
   the CanonicalizedResource is "/johnsmith/photos/puppy.jpg". At this point, the
   CanonicalizedResource is the same for both the virtual hosted-style and path-style request.
   For a request that does not address a bucket, such as GET Service, append "/".
4. If the request addresses a subresource, such as ?versioning, ?location, ?acl, ?torrent,
   ?lifecycle, or ?versionid, append the subresource, its value if it has one, and the question
   mark. Note that in case of multiple subresources, subresources must be lexicographically sorted
   by subresource name and separated by '&', e.g., ?acl&versionId=value.

   The subresources that must be included when constructing the CanonicalizedResource element
   are acl, lifecycle, location, logging, notification, partNumber, policy, requestPayment, torrent,
   uploadId, uploads, versionId, versioning, versions, and website.

   If the request specifies query string parameters overriding the response header values (see Get
   Object), append the query string parameters and their values. When signing, you do not encode
   these values; however, when making the request, you must encode these parameter values.
   The query string parameters in a GET request include response-content-type,
   response-content-language, response-expires, response-cache-control,
   response-content-disposition, and response-content-encoding.

   The delete query string parameter must be included when you create the
   CanonicalizedResource for a multi-object Delete request.

Elements of the CanonicalizedResource that come from the HTTP Request-URI should be signed
literally as they appear in the HTTP request, including URL-Encoding meta characters.

The CanonicalizedResource might be different than the HTTP Request-URI. In particular, if your
request uses the HTTP Host header to specify a bucket, the bucket does not appear in the HTTP
Request-URI. However, the CanonicalizedResource continues to include the bucket. Query string
parameters might also appear in the Request-URI, but are not included in CanonicalizedResource.
For more information, see Virtual Hosting of Buckets (p. 50).

Constructing the CanonicalizedAmzHeaders Element

To construct the CanonicalizedAmzHeaders part of StringToSign, select all HTTP request headers
that start with 'x-amz-' (using a case-insensitive comparison), and use the following process.

CanonicalizedAmzHeaders Process

1. Convert each HTTP header name to lowercase. For example, 'X-Amz-Date' becomes
   'x-amz-date'.
2. Sort the collection of headers lexicographically by header name.
3. Combine header fields with the same name into one "header-name:comma-separated-value-list"
   pair, as prescribed by RFC 2616, section 4.2, without any whitespace between values. For
   example, the two metadata headers 'x-amz-meta-username: fred' and 'x-amz-meta-username: barney'
   would be combined into the single header 'x-amz-meta-username: fred,barney'.
4. "Unfold" long headers that span multiple lines (as allowed by RFC 2616, section 4.2) by
   replacing the folding whitespace (including newline) with a single space.
5. Trim any whitespace around the colon in the header. For example, the header
   'x-amz-meta-username: fred,barney' would become 'x-amz-meta-username:fred,barney'.
6. Finally, append a newline character (U+000A) to each canonicalized header in the resulting
   list. Construct the CanonicalizedAmzHeaders element by concatenating all headers in this list
   into a single string.

Positional versus Named HTTP Header StringToSign Elements

The first few header elements of StringToSign (Content-Type, Date, and Content-MD5) are
positional in nature. StringToSign does not include the names of these headers, only their values
from the request. In contrast, the 'x-amz-' elements are named. Both the header names and the
header values appear in StringToSign.

If a positional header called for in the definition of StringToSign is not present in your request (for
example, Content-Type or Content-MD5 are optional for PUT requests and meaningless for GET
requests), substitute the empty string ("") for that position.

Time Stamp Requirement

A valid time stamp (using either the HTTP Date header or an x-amz-date alternative) is mandatory
for authenticated requests. Furthermore, the client timestamp included with an authenticated request
must be within 15 minutes of the Amazon S3 system time when the request is received. If not, the
request will fail with the RequestTimeTooSkewed error code. The intention of these restrictions is to
limit the possibility that intercepted requests could be replayed by an adversary. For stronger protection
against eavesdropping, use the HTTPS transport for authenticated requests.

Note
The validation constraint on request date applies only to authenticated requests that do
not use query string authentication. For more information, see Query String Request
Authentication Alternative (p. 584).

Some HTTP client libraries do not expose the ability to set the Date header for a request. If you
have trouble including the value of the 'Date' header in the canonicalized headers, you can set the
timestamp for the request by using an 'x-amz-date' header instead. The value of the x-amz-date
header must be in one of the RFC 2616 formats (http://www.ietf.org/rfc/rfc2616.txt). When an x-amz-date
header is present in a request, the system will ignore any Date header when computing the
request signature. Therefore, if you include the x-amz-date header, use the empty string for the Date
when constructing the StringToSign. See the next section for an example.

Authentication Examples

The examples in this section use the (non-working) credentials in the following table.

Parameter: Value
AWSAccessKeyId: AKIAIOSFODNN7EXAMPLE
AWSSecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

In the example StringToSigns, formatting is not significant, and \n means the Unicode code point
U+000A, commonly called newline. Also, the examples use "+0000" to designate the time zone. You
can use "GMT" to designate the time zone instead, but the signatures shown in the examples will be
different.
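Before walking through the examples, it can help to see the HMAC-SHA1 and Base64 steps in code. The following Java sketch computes a Signature Version 2 signature for an already-constructed StringToSign; it is an illustration of the algorithm described above rather than an official SDK utility, and the class and method names are our own. If everything lines up, its output should match the signature shown in the Object GET example that follows.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SigV2Signer {

    // Computes Base64( HMAC-SHA1( secretKey, UTF-8(stringToSign) ) ).
    public static String sign(String stringToSign, String secretAccessKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretAccessKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] digest = hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // StringToSign for the Object GET example below; note the empty Content-MD5 and Content-Type lines.
        String stringToSign = "GET\n\n\nTue, 27 Mar 2007 19:36:42 +0000\n/johnsmith/photos/puppy.jpg";
        String secretKey = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"; // example (non-working) credentials
        String signature = sign(stringToSign, secretKey);
        System.out.println("Authorization: AWS AKIAIOSFODNN7EXAMPLE:" + signature);
    }
}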
Example Object GET

This example gets an object from the johnsmith bucket.

Request:
GET /photos/puppy.jpg HTTP/1.1
Host: johnsmith.s3.amazonaws.com
Date: Tue, 27 Mar 2007 19:36:42 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:bWq2s1WEIj+Ydj0vQ697zp+IXMU=

StringToSign:
GET\n
\n
\n
Tue, 27 Mar 2007 19:36:42 +0000\n
/johnsmith/photos/puppy.jpg

Note that the CanonicalizedResource includes the bucket name, but the HTTP Request-URI does not.
(The bucket is specified by the Host header.)

Example Object PUT

This example puts an object into the johnsmith bucket.

Request:
PUT /photos/puppy.jpg HTTP/1.1
Content-Type: image/jpeg
Content-Length: 94328
Host: johnsmith.s3.amazonaws.com
Date: Tue, 27 Mar 2007 21:15:45 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:MyyxeRY7whkBe+bq8fHCL/2kKUg=

StringToSign:
PUT\n
\n
image/jpeg\n
Tue, 27 Mar 2007 21:15:45 +0000\n
/johnsmith/photos/puppy.jpg

Note the Content-Type header in the request and in the StringToSign. Also note that the Content-MD5
is left blank in the StringToSign, because it is not present in the request.

Example List

This example lists the content of the johnsmith bucket.

Request:
GET /?prefix=photos&max-keys=50&marker=puppy HTTP/1.1
User-Agent: Mozilla/5.0
Host: johnsmith.s3.amazonaws.com
Date: Tue, 27 Mar 2007 19:42:41 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:htDYFYduRNen8P9ZfE/s9SuKy0U=

StringToSign:
GET\n
\n
\n
Tue, 27 Mar 2007 19:42:41 +0000\n
/johnsmith/

Note the trailing slash on the CanonicalizedResource and the absence of query string parameters.

Example Fetch

This example fetches the access control policy subresource for the 'johnsmith' bucket.

Request:
GET /?acl HTTP/1.1
Host: johnsmith.s3.amazonaws.com
Date: Tue, 27 Mar 2007 19:44:46 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:c2WLPFtWHVgbEmeEG93a4cG37dM=

StringToSign:
GET\n
\n
\n
Tue, 27 Mar 2007 19:44:46 +0000\n
/johnsmith/?acl

Notice how the subresource query string parameter is included in the CanonicalizedResource.

Example Delete

This example deletes an object from the 'johnsmith' bucket using the path-style and Date alternative.

Request:
DELETE /johnsmith/photos/puppy.jpg HTTP/1.1
User-Agent: dotnet
Host: s3.amazonaws.com
Date: Tue, 27 Mar 2007 21:20:27 +0000
x-amz-date: Tue, 27 Mar 2007 21:20:26 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:lx3byBScXR6KzyMaifNkardMwNk=

StringToSign:
DELETE\n
\n
\n
Tue, 27 Mar 2007 21:20:26 +0000\n
/johnsmith/photos/puppy.jpg

Note how we used the alternate 'x-amz-date' method of specifying the date (because our client library
prevented us from setting the date, say). In this case, the x-amz-date takes precedence over the Date
header. Therefore, the date entry in the signature must contain the value of the x-amz-date header.

Example Upload

This example uploads an object to a CNAME-style virtual hosted bucket with metadata.

Request:
PUT /db-backup.dat.gz HTTP/1.1
User-Agent: curl/7.15.5
Host: static.johnsmith.net:8080
Date: Tue, 27 Mar 2007 21:06:08 +0000
Example Upload

This example uploads an object to a CNAME style virtual hosted bucket with metadata.

Request:

PUT /db-backup.dat.gz HTTP/1.1
User-Agent: curl/7.15.5
Host: static.johnsmith.net:8080
Date: Tue, 27 Mar 2007 21:06:08 +0000
x-amz-acl: public-read
content-type: application/x-download
Content-MD5: 4gJE4saaMU4BqNR0kLY+lw==
X-Amz-Meta-ReviewedBy: joe@johnsmith.net
X-Amz-Meta-ReviewedBy: jane@johnsmith.net
X-Amz-Meta-FileChecksum: 0x02661779
X-Amz-Meta-ChecksumAlgorithm: crc32
Content-Disposition: attachment; filename=database.dat
Content-Encoding: gzip
Content-Length: 5913339
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ilyl83RwaSoYIEdixDQcA4OnAnc=

StringToSign:

PUT\n
4gJE4saaMU4BqNR0kLY+lw==\n
application/x-download\n
Tue, 27 Mar 2007 21:06:08 +0000\n
x-amz-acl:public-read\n
x-amz-meta-checksumalgorithm:crc32\n
x-amz-meta-filechecksum:0x02661779\n
x-amz-meta-reviewedby:joe@johnsmith.net,jane@johnsmith.net\n
/static.johnsmith.net/db-backup.dat.gz

Notice how the 'x-amz-' headers are sorted, trimmed of whitespace, and converted to lowercase. Note also that multiple headers with the same name have been joined using commas to separate values.

Note how only the Content-Type and Content-MD5 HTTP entity headers appear in the StringToSign. The other Content-* entity headers do not.

Again, note that the CanonicalizedResource includes the bucket name, but the HTTP Request-URI does not. (The bucket is specified by the Host header.)

Example List All My Buckets

Request:

GET / HTTP/1.1
Host: s3.amazonaws.com
Date: Wed, 28 Mar 2007 01:29:59 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:qGdzdERIC03wnaRNKh6OqZehG9s=

StringToSign:

GET\n
\n
\n
Wed, 28 Mar 2007 01:29:59 +0000\n
/

Example Unicode Keys

Request:

GET /dictionary/fran%C3%A7ais/pr%c3%a9f%c3%a8re HTTP/1.1
Host: s3.amazonaws.com
Date: Wed, 28 Mar 2007 01:49:49 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:DNEZGsoieTZ92F3bUfSPQcbGmlM=

StringToSign:

GET\n
\n
\n
Wed, 28 Mar 2007 01:49:49 +0000\n
/dictionary/fran%C3%A7ais/pr%c3%a9f%c3%a8re

Note
The elements in StringToSign that were derived from the Request-URI are taken literally, including URL-Encoding and capitalization.

REST Request Signing Problems

When REST request authentication fails, the system responds to the request with an XML error document. The information contained in this error document is meant to help developers diagnose the problem. In particular, the StringToSign element of the SignatureDoesNotMatch error document tells you exactly what request canonicalization the system is using.

Some toolkits silently insert headers that you do not know about beforehand, such as adding the header Content-Type during a PUT. In most of these cases, the value of the inserted header remains constant, allowing you to discover the missing headers by using tools such as Ethereal or tcpmon.

Query String Request Authentication Alternative

You can authenticate certain types of requests by passing the required information as query-string parameters instead of using the Authorization HTTP header. This is useful for enabling direct third-party browser access to your private Amazon S3 data without proxying the request. The idea is to construct a pre-signed request and encode it as a URL that an end-user's browser can retrieve. Additionally, you can limit a pre-signed request by specifying an expiration time.

Note
For examples of using the AWS SDKs to generate pre-signed URLs, see Share an Object with Others (p. 152).
Creating a Signature

Following is an example query string authenticated Amazon S3 REST request.

GET /photos/puppy.jpg?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1141889120&Signature=vjbyPxybdZaNmGa%2ByT272YEAiv4%3D HTTP/1.1
Host: johnsmith.s3.amazonaws.com
Date: Mon, 26 Mar 2007 19:37:58 +0000

The query string request authentication method doesn't require any special HTTP headers. Instead, the required authentication elements are specified as query string parameters:

Query String Parameter Name: AWSAccessKeyId
Example Value: AKIAIOSFODNN7EXAMPLE
Description: Your AWS access key ID. Specifies the AWS secret access key used to sign the request and, indirectly, the identity of the developer making the request.

Query String Parameter Name: Expires
Example Value: 1141889120
Description: The time when the signature expires, specified as the number of seconds since the epoch (00:00:00 UTC on January 1, 1970). A request received after this time (according to the server) will be rejected.

Query String Parameter Name: Signature
Example Value: vjbyPxybdZaNmGa%2ByT272YEAiv4%3D
Description: The URL encoding of the Base64 encoding of the HMAC-SHA1 of StringToSign.

The query string request authentication method differs slightly from the ordinary method but only in the format of the Signature request parameter and the StringToSign element. Following is pseudo-grammar that illustrates the query string request authentication method.

Signature = URL-Encode( Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) ) );

StringToSign = HTTP-VERB + "\n" +
    Content-MD5 + "\n" +
    Content-Type + "\n" +
    Expires + "\n" +
    CanonicalizedAmzHeaders +
    CanonicalizedResource;

YourSecretAccessKeyID is the AWS secret access key ID that Amazon assigns to you when you sign up to be an Amazon Web Service developer. Notice how the Signature is URL-Encoded to make it suitable for placement in the query string. Note also that in StringToSign, the HTTP Date positional element has been replaced with Expires. The CanonicalizedAmzHeaders and CanonicalizedResource are the same.

Note
In the query string authentication method, you do not use the Date or the x-amz-date request header when calculating the string to sign.

Example Query String Request Authentication

Request:

GET /photos/puppy.jpg?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Signature=NpgCjnDzrM%2BWFzoENXmpNDUsSn8%3D&Expires=1175139620 HTTP/1.1
Host: johnsmith.s3.amazonaws.com

StringToSign:

GET\n
\n
\n
1175139620\n
/johnsmith/photos/puppy.jpg

We assume that when a browser makes the GET request, it won't provide a Content-MD5 or a Content-Type header, nor will it set any x-amz- headers, so those parts of the StringToSign are left blank.
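As an illustration of the query string method, the following Java sketch (not AWS SDK code; the bucket name, object key, and expiration time are the example values used above, and virtual-hosted-style addressing over HTTPS is assumed) assembles the Expires-based StringToSign, signs it, URL-encodes the signature, and prints a complete pre-signed URL.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch: builds a query-string-authenticated (pre-signed) GET URL.
public class QueryStringAuthExample {

    public static void main(String[] args) throws Exception {
        String accessKeyId = "AKIAIOSFODNN7EXAMPLE";                        // example credentials
        String secretKey   = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
        String bucket      = "johnsmith";
        String key         = "photos/puppy.jpg";
        long   expires     = 1175139620L;                                   // seconds since the epoch

        // Expires replaces the Date positional element in the StringToSign.
        String stringToSign = "GET\n\n\n" + expires + "\n/" + bucket + "/" + key;

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));

        // The Base64 signature must be URL-encoded before it is placed in the query string.
        String url = "https://" + bucket + ".s3.amazonaws.com/" + key
                + "?AWSAccessKeyId=" + accessKeyId
                + "&Expires=" + expires
                + "&Signature=" + URLEncoder.encode(signature, StandardCharsets.UTF_8);
        System.out.println(url);
    }
}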
Using Base64 Encoding

HMAC request signatures must be Base64 encoded. Base64 encoding converts the signature into a simple ASCII string that can be attached to the request. Characters that could appear in the signature string, like plus (+), forward slash (/), and equals (=), must be encoded if used in a URI. For example, if the authentication code includes a plus (+) sign, encode it as %2B in the request. Encode a forward slash as %2F and equals as %3D.

For examples of Base64 encoding, refer to the Amazon S3 Authentication Examples (p. 580).

Browser-Based Uploads Using POST (AWS Signature Version 2)

Amazon S3 supports POST, which allows your users to upload content directly to Amazon S3. POST is designed to simplify uploads, reduce upload latency, and save you money on applications where users upload data to store in Amazon S3.

Note
The request authentication discussed in this section is based on AWS Signature Version 2, a protocol for authenticating inbound API requests to AWS services.
Amazon S3 now supports Signature Version 4, a protocol for authenticating inbound API requests to AWS services, in all AWS regions. At this time, AWS regions created before January 30, 2014 will continue to support the previous protocol, Signature Version 2. Any new regions after January 30, 2014 will support only Signature Version 4 and therefore all requests to those regions must be made with Signature Version 4. For more information, see Authenticating Requests in Browser-Based Uploads Using POST (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.

The following figure shows an upload using Amazon S3 POST.

Uploading Using POST

1. The user opens a web browser and accesses your web page.
2. Your web page contains an HTTP form that contains all the information necessary for the user to upload content to Amazon S3.
3. The user uploads content directly to Amazon S3.

Note
Query string authentication is not supported for POST.

HTML Forms (AWS Signature Version 2)

Topics
• HTML Form Encoding (p. 588)
• HTML Form Declaration (p. 588)
• HTML Form Fields (p. 589)
• Policy Construction (p. 591)
• Constructing a Signature (p. 594)
• Redirection (p. 594)

When you communicate with Amazon S3, you normally use the REST or SOAP API to perform put, get, delete, and other operations. With POST, users upload data directly to Amazon S3 through their browsers, which cannot process the SOAP API or create a REST PUT request.

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.

To allow users to upload content to Amazon S3 by using their browsers, you use HTML forms. HTML forms consist of a form declaration and form fields. The form declaration contains high-level information about the request. The form fields contain detailed information about the request, as well as the policy that is used to authenticate it and ensure that it meets the conditions that you specify.

Note
The form data and boundaries (excluding the contents of the file) cannot exceed 20 KB.

This section explains how to use HTML forms.

HTML Form Encoding

The form and policy must be UTF-8 encoded. You can apply UTF-8 encoding to the form by specifying it in the HTML heading or as a request header.

Note
The HTML form declaration does not accept query string authentication parameters.

The following is an example of UTF-8 encoding in the HTML heading:
<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>

The following is an example of UTF-8 encoding in a request header:

Content-Type: text/html; charset=UTF-8

HTML Form Declaration

The form declaration has three components: the action, the method, and the enclosure type. If any of these values is improperly set, the request fails.

The action specifies the URL that processes the request, which must be set to the URL of the bucket. For example, if the name of your bucket is "johnsmith", the URL is http://johnsmith.s3.amazonaws.com/.

Note
The key name is specified in a form field.

The method must be POST.

The enclosure type (enctype) must be specified and must be set to multipart/form-data for both file uploads and text area uploads. For more information, go to RFC 1867.

Example
The following example is a form declaration for the bucket "johnsmith".

<form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">

HTML Form Fields

The following table describes fields that can be used within an HTML form.

Note
The variable ${filename} is automatically replaced with the name of the file provided by the user and is recognized by all form fields. If the browser or client provides a full or partial path to the file, only the text following the last slash (/) or backslash (\) will be used. For example, "C:\Program Files\directory1\file.txt" will be interpreted as "file.txt". If no file or file name is provided, the variable is replaced with an empty string.

Field Name: AWSAccessKeyId
Description: The AWS Access Key ID of the owner of the bucket who grants an anonymous user access for a request that satisfies the set of constraints in the policy. This field is required if the request includes a policy document.
Required: Conditional

Field Name: acl
Description: An Amazon S3 access control list (ACL). If an invalid access control list is specified, an error is generated. For more information on ACLs, see Access Control Lists (p. 8).
Type: String. Default: private. Valid values: private | public-read | public-read-write | aws-exec-read | authenticated-read | bucket-owner-read | bucket-owner-full-control
Required: No

Field Name: Cache-Control, Content-Type, Content-Disposition, Content-Encoding, Expires
Description: REST-specific headers. For more information, see PUT Object.
Required: No

Field Name: key
Description: The name of the uploaded key.
To use the filename provided by the user, use the ${filename} variable. For example, if user Betty uploads the file lolcatz.jpg and you specify /user/betty/${filename}, the file is stored as /user/betty/lolcatz.jpg.
For more information, see Object Key and Metadata (p. 99).
Required: Yes

Field Name: policy
Description: Security policy describing what is permitted in the request. Requests without a security policy are considered anonymous and will succeed only on publicly writable buckets.
Required: No
Field Name: success_action_redirect, redirect
Description: The URL to which the client is redirected upon successful upload. Amazon S3 appends the bucket, key, and etag values as query string parameters to the URL.
If success_action_redirect is not specified, Amazon S3 returns the empty document type specified in the success_action_status field.
If Amazon S3 cannot interpret the URL, it ignores the field.
If the upload fails, Amazon S3 displays an error and does not redirect the user to a URL.
For more information, see Redirection (p. 594).
Note: The redirect field name is deprecated, and support for the redirect field name will be removed in the future.
Required: No

Field Name: success_action_status
Description: The status code returned to the client upon successful upload if success_action_redirect is not specified.
Valid values are 200, 201, or 204 (default).
If the value is set to 200 or 204, Amazon S3 returns an empty document with a 200 or 204 status code.
If the value is set to 201, Amazon S3 returns an XML document with a 201 status code. For information about the content of the XML document, see POST Object.
If the value is not set or if it is set to an invalid value, Amazon S3 returns an empty document with a 204 status code.
Note: Some versions of the Adobe Flash player do not properly handle HTTP responses with an empty body. To support uploads through Adobe Flash, we recommend setting success_action_status to 201.
Required: No

Field Name: signature
Description: The HMAC signature constructed by using the secret access key that corresponds to the provided AWSAccessKeyId. This field is required if a policy document is included with the request.
For more information, see Using Auth Access.
Required: Conditional

Field Name: x-amz-security-token
Description: A security token used by Amazon DevPay and session credentials.
If the request is using Amazon DevPay, then it requires two x-amz-security-token form fields: one for the product token and one for the user token. For more information, go to Using DevPay.
If the request is using session credentials, then it requires one x-amz-security-token form field. For more information, see Temporary Security Credentials in the IAM User Guide.
Required: No

Field Name: Other field names prefixed with x-amz-meta-
Description: User-specified metadata.
Amazon S3 does not validate or use this data.
For more information, see PUT Object.
Required: No

Field Name: file
Description: File or text content.
The file or content must be the last field in the form. Any fields below it are ignored.
You cannot upload more than one file at a time.
Required: Yes

Policy Construction

Topics
• Expiration (p. 592)
• Conditions (p. 592)
• Condition Matching (p. 593)
• Character Escaping (p. 594)

The policy is a UTF-8 and Base64-encoded JSON document that specifies conditions that the request must meet and is used to authenticate the content. Depending on how you design your policy documents, you can use them per upload, per user, for all uploads, or according to other designs that meet your needs.

Note
Although the policy document is optional, we highly recommend it over making a bucket publicly writable.

The following is an example of a policy document:

{ "expiration": "2007-12-01T12:00:00.000Z",
  "conditions": [
    {"acl": "public-read" },
    {"bucket": "johnsmith" },
    ["starts-with", "$key", "user/eric/"]
  ]
}
The policy document contains the expiration and conditions.

Expiration

The expiration element specifies the expiration date of the policy in ISO 8601 UTC date format. For example, "2007-12-01T12:00:00.000Z" specifies that the policy is not valid after midnight UTC on 2007-12-01. Expiration is required in a policy.

Conditions

The conditions in the policy document validate the contents of the uploaded object. Each form field that you specify in the form (except AWSAccessKeyId, signature, file, policy, and field names that have an x-ignore- prefix) must be included in the list of conditions.

Note
If you have multiple fields with the same name, the values must be separated by commas. For example, if you have two fields named "x-amz-meta-tag" and the first one has a value of "Ninja" and second has a value of "Stallman", you would set the policy document to Ninja,Stallman.
All variables within the form are expanded before the policy is validated. Therefore, all condition matching should be performed against the expanded fields. For example, if you set the key field to user/betty/${filename}, your policy might be [ "starts-with", "$key", "user/betty/" ]. Do not enter [ "starts-with", "$key", "user/betty/${filename}" ]. For more information, see Condition Matching (p. 593).

The following table describes policy document conditions.

Element Name: acl
Description: Specifies conditions that the ACL must meet. Supports exact matching and starts-with.

Element Name: content-length-range
Description: Specifies the minimum and maximum allowable size for the uploaded content. Supports range matching.

Element Name: Cache-Control, Content-Type, Content-Disposition, Content-Encoding, Expires
Description: REST-specific headers. Supports exact matching and starts-with.

Element Name: key
Description: The name of the uploaded key. Supports exact matching and starts-with.

Element Name: success_action_redirect, redirect
Description: The URL to which the client is redirected upon successful upload. Supports exact matching and starts-with.

Element Name: success_action_status
Description: The status code returned to the client upon successful upload if success_action_redirect is not specified. Supports exact matching.

Element Name: x-amz-security-token
Description: Amazon DevPay security token.
Each request that uses Amazon DevPay requires two x-amz-security-token form fields: one for the product token and one for the user token. As a result, the values must be separated by commas. For example, if the user token is eW91dHViZQ== and the product token is b0hnNVNKWVJIQTA=, you set the policy entry to: { "x-amz-security-token": "eW91dHViZQ==,b0hnNVNKWVJIQTA=" }.
For more information about Amazon DevPay, see Using DevPay.

Element Name: Other field names prefixed with x-amz-meta-
Description: User-specified metadata. Supports exact matching and starts-with.

Note
If your toolkit adds additional fields (e.g., Flash adds filename), you must add them to the policy document. If you can control this functionality, prefix x-ignore- to the field so Amazon S3 ignores the feature and it won't affect future versions of this feature.

Condition Matching

The following table describes condition matching types. Although you must specify one condition for each form field that you specify in the form, you can create more complex matching criteria by specifying multiple conditions for a form field.
Condition: Exact Matches
Description: Exact matches verify that fields match specific values. This example indicates that the ACL must be set to public-read:
{"acl": "public-read" }
This example is an alternate way to indicate that the ACL must be set to public-read:
[ "eq", "$acl", "public-read" ]

Condition: Starts With
Description: If the value must start with a certain value, use starts-with. This example indicates that the key must start with user/betty:
["starts-with", "$key", "user/betty/"]

Condition: Matching Any Content
Description: To configure the policy to allow any content within a field, use starts-with with an empty value. This example allows any success_action_redirect:
["starts-with", "$success_action_redirect", ""]

Condition: Specifying Ranges
Description: For fields that accept ranges, separate the upper and lower ranges with a comma. This example allows a file size from 1 to 10 megabytes:
["content-length-range", 1048579, 10485760]

Character Escaping

The following table describes characters that must be escaped within a policy document.

Escape Sequence: \\      Description: Backslash
Escape Sequence: \$      Description: Dollar sign
Escape Sequence: \b      Description: Backspace
Escape Sequence: \f      Description: Form feed
Escape Sequence: \n      Description: New line
Escape Sequence: \r      Description: Carriage return
Escape Sequence: \t      Description: Horizontal tab
Escape Sequence: \v      Description: Vertical tab
Escape Sequence: \uxxxx  Description: All Unicode characters

Constructing a Signature

Step 1: Encode the policy by using UTF-8.
Step 2: Encode those UTF-8 bytes by using Base64.
Step 3: Sign the policy with your secret access key by using HMAC SHA-1.
Step 4: Encode the SHA-1 signature by using Base64.

For general information about authentication, see Using Auth Access.

Redirection

This section describes how to handle redirects.

General Redirection

On completion of the POST request, the user is redirected to the location that you specified in the success_action_redirect field. If Amazon S3 cannot interpret the URL, it ignores the success_action_redirect field.

If success_action_redirect is not specified, Amazon S3 returns the empty document type specified in the success_action_status field.

If the POST request fails, Amazon S3 displays an error and does not provide a redirect.

Pre-Upload Redirection

If your bucket was created using <CreateBucketConfiguration>, your end users might require a redirect. If this occurs, some browsers might handle the redirect incorrectly. This is relatively rare, but is most likely to occur right after a bucket is created.

Upload Examples (AWS Signature Version 2)

Topics
• File Upload (p. 595)
• Text Area Upload (p. 598)

Note
The request authentication discussed in this section is based on AWS Signature Version 2, a protocol for authenticating inbound API requests to AWS services.
Amazon S3 now supports Signature Version 4, a protocol for authenticating inbound API requests to AWS services, in all AWS regions. At this time, AWS regions created before January 30, 2014 will continue to support the previous protocol, Signature Version 2. Any new regions after January 30, 2014 will support only Signature Version 4 and therefore all requests to those regions must be made with Signature Version 4. For more information, see Examples: Browser-Based Upload using HTTP POST (Using AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.
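Before walking through the complete upload examples that follow, here is a minimal Java sketch of the signature-construction steps listed above. It is illustrative only (not AWS SDK code); the abbreviated policy document and the secret access key are the non-working example values used elsewhere in this section. The two printed values are what you would place in the policy and signature form fields.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of the four steps: UTF-8 encode the policy, Base64-encode
// those bytes, sign the Base64 policy with HMAC-SHA1, and Base64-encode the digest.
public class PostPolicySigner {

    public static void main(String[] args) throws Exception {
        String secretAccessKey = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"; // example credentials
        String policyDocument =
              "{ \"expiration\": \"2007-12-01T12:00:00.000Z\",\n"
            + "  \"conditions\": [\n"
            + "    {\"bucket\": \"johnsmith\"},\n"
            + "    [\"starts-with\", \"$key\", \"user/eric/\"],\n"
            + "    {\"acl\": \"public-read\"}\n"
            + "  ]\n"
            + "}";

        // Steps 1 and 2: UTF-8 encode the policy, then Base64-encode it.
        String encodedPolicy = Base64.getEncoder()
                .encodeToString(policyDocument.getBytes(StandardCharsets.UTF_8));

        // Steps 3 and 4: HMAC-SHA1 over the Base64-encoded policy, then Base64-encode the digest.
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretAccessKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(encodedPolicy.getBytes(StandardCharsets.UTF_8)));

        System.out.println("policy form field:    " + encodedPolicy);
        System.out.println("signature form field: " + signature);
    }
}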
File Upload

This example shows the complete process for constructing a policy and form that can be used to upload a file attachment.

Policy and Form Construction

The following policy supports uploads to Amazon S3 for the johnsmith bucket:

{ "expiration": "2007-12-01T12:00:00.000Z",
  "conditions": [
    {"bucket": "johnsmith"},
    ["starts-with", "$key", "user/eric/"],
    {"acl": "public-read"},
    {"success_action_redirect": "http://johnsmith.s3.amazonaws.com/successful_upload.html"},
    ["starts-with", "$Content-Type", "image/"],
    {"x-amz-meta-uuid": "14365123651274"},
    ["starts-with", "$x-amz-meta-tag", ""]
  ]
}

This policy requires the following:

• The upload must occur before 12:00 UTC on December 1, 2007.
• The content must be uploaded to the johnsmith bucket.
• The key must start with user/eric/.
• The ACL is set to public-read.
• The success_action_redirect is set to http://johnsmith.s3.amazonaws.com/successful_upload.html.
• The object is an image file.
• The x-amz-meta-uuid tag must be set to 14365123651274.
• The x-amz-meta-tag can contain any value.

The following is a Base64-encoded version of this policy:

eyAiZXhwaXJhdGlvbiI6ICIyMDA3LTEyLTAxVDEyOjAwOjAwLjAwMFoiLAogICJjb25kaXRpb25zIjogWwogICAgeyJidWNrZXQiOiAiam9obnNtaXRoIn0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiRrZXkiLCAidXNlci9lcmljLyJdLAogICAgeyJhY2wiOiAicHVibGljLXJlYWQifSwKICAgIHsic3VjY2Vzc19hY3Rpb25fcmVkaXJlY3QiOiAiaHR0cDovL2pvaG5zbWl0aC5zMy5hbWF6b25hd3MuY29tL3N1Y2Nlc3NmdWxfdXBsb2FkLmh0bWwifSwKICAgIFsic3RhcnRzLXdpdGgiLCAiJENvbnRlbnQtVHlwZSIsICJpbWFnZS8iXSwKICAgIHsieC1hbXotbWV0YS11dWlkIjogIjE0MzY1MTIzNjUxMjc0In0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiR4LWFtei1tZXRhLXRhZyIsICIiXQogIF0KfQo

Using your credentials, create a signature. For example, 0RavWzkygo6QX9caELEqKi9kDbU= is the signature for the preceding policy document.

The following form supports a POST request to the johnsmith.net bucket that uses this policy:

<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>
  ...
  <form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: <input type="input" name="key" value="user/eric/" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://johnsmith.s3.amazonaws.com/successful_upload.html" />
    Content-Type: <input type="input" name="Content-Type" value="image/jpeg" /><br />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    Tags for File: <input type="input" name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="AWSAccessKeyId" value="AKIAIOSFODNN7EXAMPLE" />
    <input type="hidden" name="Policy" value="POLICY" />
    <input type="hidden" name="Signature" value="SIGNATURE" />
    File: <input type="file" name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
  ...
</html>

Sample Request

This request assumes that the image uploaded is 117,108 bytes; the image data is not included.

POST / HTTP/1.1
Host: johnsmith.s3.amazonaws.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.10) Gecko/20071115 Firefox/2.0.0.10
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=9431149156168
Content-Length: 118698

--9431149156168
Content-Disposition: form-data; name="key"

user/eric/MyPicture.jpg
--9431149156168
Content-Disposition: form-data; name="acl"

public-read
--9431149156168
Content-Disposition: form-data; name="success_action_redirect"

http://johnsmith.s3.amazonaws.com/successful_upload.html
--9431149156168
Content-Disposition: form-data; name="Content-Type"

image/jpeg
--9431149156168
Content-Disposition: form-data; name="x-amz-meta-uuid"

14365123651274
--9431149156168
Content-Disposition: form-data; name="x-amz-meta-tag"

Some,Tag,For,Picture
--9431149156168
Content-Disposition: form-data; name="AWSAccessKeyId"

AKIAIOSFODNN7EXAMPLE
--9431149156168
Content-Disposition: form-data; name="Policy"

eyAiZXhwaXJhdGlvbiI6ICIyMDA3LTEyLTAxVDEyOjAwOjAwLjAwMFoiLAogICJjb25kaXRpb25zIjogWwogICAgeyJidWNrZXQiOiAiam9obnNtaXRoIn0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiRrZXkiLCAidXNlci9lcmljLyJdLAogICAgeyJhY2wiOiAicHVibGljLXJlYWQifSwKICAgIHsic3VjY2Vzc19hY3Rpb25fcmVkaXJlY3QiOiAiaHR0cDovL2pvaG5zbWl0aC5zMy5hbWF6b25hd3MuY29tL3N1Y2Nlc3NmdWxfdXBsb2FkLmh0bWwifSwKICAgIFsic3RhcnRzLXdpdGgiLCAiJENvbnRlbnQtVHlwZSIsICJpbWFnZS8iXSwKICAgIHsieC1hbXotbWV0YS11dWlkIjogIjE0MzY1MTIzNjUxMjc0In0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiR4LWFtei1tZXRhLXRhZyIsICIiXQogIF0KfQo
--9431149156168
Content-Disposition: form-data; name="Signature"

0RavWzkygo6QX9caELEqKi9kDbU=
--9431149156168
Content-Disposition: form-data; name="file"; filename="MyFilename.jpg"
Content-Type: image/jpeg

...file content...
--9431149156168
Content-Disposition: form-data; name="submit"

Upload to Amazon S3
--9431149156168--

Sample Response

HTTP/1.1 303 Redirect
x-amz-request-id: 1AEE782442F35865
x-amz-id-2: cxzFLJRatFHy+NGtaDFRR8YvI9BHmgLxjvJzNiGGICARZmVXHj7T+qQKhdpzHFh
Content-Type: application/xml
Date: Wed, 14 Nov 2007 21:21:33 GMT
Connection: close
Location: http://johnsmith.s3.amazonaws.com/successful_upload.html?bucket=johnsmith&key=user/eric/MyPicture.jpg&etag="39d459dfbc0faabbb5e179358dfb94c3"
Server: AmazonS3

Text Area Upload

Topics
• Policy and Form Construction (p. 598)
• Sample Request (p. 599)
• Sample Response (p. 600)

The following example shows the complete process for constructing a policy and form to upload a text area. Uploading a text area is useful for submitting user-created content, such as blog postings.

Policy and Form Construction

The following policy supports text area uploads to Amazon S3 for the johnsmith bucket:

{ "expiration": "2007-12-01T12:00:00.000Z",
  "conditions": [
    {"bucket": "johnsmith"},
    ["starts-with", "$key", "user/eric/"],
    {"acl": "public-read"},
    {"success_action_redirect": "http://johnsmith.s3.amazonaws.com/new_post.html"},
    ["eq", "$Content-Type", "text/html"],
    {"x-amz-meta-uuid": "14365123651274"},
    ["starts-with", "$x-amz-meta-tag", ""]
  ]
}

This policy requires the following:

• The upload must occur before 12:00 GMT on 2007-12-01.
• The content must be uploaded to the johnsmith bucket.
• The key must start with user/eric/.
• The ACL is set to public-read.
• The success_action_redirect is set to http://johnsmith.s3.amazonaws.com/new_post.html.
• The object is HTML text.
• The x-amz-meta-uuid tag must be set to 14365123651274.
• The x-amz-meta-tag can contain any value.

Following is a Base64-encoded version of this policy:

eyAiZXhwaXJhdGlvbiI6ICIyMDA3LTEyLTAxVDEyOjAwOjAwLjAwMFoiLAogICJjb25kaXR
pb25zIjogWwogICAgeyJidWNrZXQiOiAiam9obnNtaXRoIn0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiRrZXkiLCAidXNlci9lcmljLyJd
LAogICAgeyJhY2wiOiAicHVibGljLXJlYWQifSwKICAgIHsic3VjY2Vzc19hY3Rpb25fcmVkaXJlY3QiOiAiaHR0cDovL2pvaG5zbWl0a
C5zMy5hbWF6b25hd3MuY29tL25ld19wb3N0Lmh0bWwifSwKICAgIFsiZXEiLCAiJENvbnRlbnQtVHlwZSIsICJ0ZXh0L2h0bWwiXSwKI
CAgIHsieC1hbXotbWV0YS11dWlkIjogIjE0MzY1MTIzNjUxMjc0In0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiR4LWFtei1tZXRhLXRhZy
IsICIiXQogIF0KfQo

Using your credentials, create a signature. For example, qA7FWXKq6VvU68lI9KdveT1cWgF= is the signature for the preceding policy document.

The following form supports a POST request to the johnsmith.net bucket that uses this policy:

<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>
  ...
  <form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: <input type="input" name="key" value="user/eric/" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://johnsmith.s3.amazonaws.com/new_post.html" />
    <input type="hidden" name="Content-Type" value="text/html" />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    Tags for File: <input type="input" name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="AWSAccessKeyId" value="AKIAIOSFODNN7EXAMPLE" />
    <input type="hidden" name="Policy" value="POLICY" />
    <input type="hidden" name="Signature" value="SIGNATURE" />
    Entry: <textarea name="file" cols="60" rows="10">
Your blog post goes here.
    </textarea><br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
  ...
</html>

Sample Request

This request assumes that the image uploaded is 117,108 bytes; the image data is not included.

POST / HTTP/1.1
Host: johnsmith.s3.amazonaws.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.10) Gecko/20071115 Firefox/2.0.0.10
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=178521717625888
Content-Length: 118635

--178521717625888
Content-Disposition: form-data; name="key"

ser/eric/NewEntry.html
--178521717625888
Content-Disposition: form-data; name="acl"

public-read
--178521717625888
Content-Disposition: form-data; name="success_action_redirect"

http://johnsmith.s3.amazonaws.com/new_post.html
--178521717625888
Content-Disposition: form-data; name="Content-Type"

text/html
--178521717625888
Content-Disposition: form-data; name="x-amz-meta-uuid"

14365123651274
--178521717625888
Content-Disposition: form-data; name="x-amz-meta-tag"

Interesting Post
--178521717625888
Content-Disposition: form-data; name="AWSAccessKeyId"

AKIAIOSFODNN7EXAMPLE
--178521717625888
Content-Disposition: form-data; name="Policy"
eyAiZXhwaXJhdGlvbiI6ICIyMDA3LTEyLTAxVDEyOjAwOjAwLjAwMFoiLAogICJjb25kaXRpb25zIjogWwogICAgeyJidWNrZXQiOiAiam9obnNtaXRoIn0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiRrZXkiLCAidXNlci9lcmljLyJdLAogICAgeyJhY2wiOiAicHVibGljLXJlYWQifSwKICAgIHsic3VjY2Vzc19hY3Rpb25fcmVkaXJlY3QiOiAiaHR0cDovL2pvaG5zbWl0aC5zMy5hbWF6b25hd3MuY29tL25ld19wb3N0Lmh0bWwifSwKICAgIFsiZXEiLCAiJENvbnRlbnQtVHlwZSIsICJ0ZXh0L2h0bWwiXSwKICAgIHsieC1hbXotbWV0YS11dWlkIjogIjE0MzY1MTIzNjUxMjc0In0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiR4LWFtei1tZXRhLXRhZyIsICIiXQogIF0KfQo
--178521717625888
Content-Disposition: form-data; name="Signature"

qA7FWXKq6VvU68lI9KdveT1cWgF=
--178521717625888
Content-Disposition: form-data; name="file"

...content goes here...
--178521717625888
Content-Disposition: form-data; name="submit"

Upload to Amazon S3
--178521717625888--

Sample Response

HTTP/1.1 303 Redirect
x-amz-request-id: 1AEE782442F35865
x-amz-id-2: cxzFLJRatFHy+NGtaDFRR8YvI9BHmgLxjvJzNiGGICARZmVXHj7T+qQKhdpzHFh
Content-Type: application/xml
Date: Wed, 14 Nov 2007 21:21:33 GMT
Connection: close
Location: http://johnsmith.s3.amazonaws.com/new_post.html?bucket=johnsmith&key=user/eric/NewEntry.html&etag=40c3271af26b7f1672e41b8a274d28d4
Server: AmazonS3

POST with Adobe Flash

This section describes how to use POST with Adobe Flash.

Adobe Flash Player Security

By default, the Adobe Flash Player security model prohibits Adobe Flash Players from making network connections to servers outside the domain that serves the SWF file.

To override the default, you must upload a publicly readable crossdomain.xml file to the bucket that will accept POST uploads. The following is a sample crossdomain.xml file:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*" secure="false" />
</cross-domain-policy>

Note
For more information about the Adobe Flash security model, go to the Adobe website.
Adding the crossdomain.xml file to your bucket allows any Adobe Flash Player to connect to the crossdomain.xml file within your bucket; however, it does not grant access to the actual Amazon S3 bucket.

Adobe Flash Considerations

The FileReference API in Adobe Flash adds the Filename form field to the POST request. When you build Adobe Flash applications that upload to Amazon S3 by using the FileReference API action, include the following condition in your policy:

['starts-with', '$Filename', '']

Some versions of the Adobe Flash Player do not properly handle HTTP responses that have an empty body. To configure POST to return a response that does not have an empty body, set success_action_status to 201. Amazon S3 will then return an XML document with a 201 status code. For information about the content of the XML document, see POST Object. For information about form fields, see HTML Form Fields (p. 589).

Amazon S3 Resources

Following is a table that lists related resources that you'll find useful as you work with this service.

Resource: Amazon Simple Storage Service Getting Started Guide
Description: The Getting Started Guide provides a quick tutorial of the service based on a simple use case.
Resource: Amazon Simple Storage Service API Reference
Description: The API Reference describes Amazon S3 operations in detail.

Resource: Amazon S3 Technical FAQ
Description: The FAQ covers the top questions developers have asked about this product.

Resource: Amazon S3 Release Notes
Description: The Release Notes give a high-level overview of the current release. They specifically note any new features, corrections, and known issues.

Resource: AWS Developer Resource Center
Description: A central starting point to find documentation, code samples, release notes, and other information to help you build innovative applications with AWS.

Resource: AWS Management Console
Description: The console allows you to perform most of the functions of Amazon S3 without programming.

Resource: https://forums.aws.amazon.com/
Description: A community-based forum for developers to discuss technical questions related to AWS.

Resource: AWS Support Center
Description: The home page for AWS Technical Support, including access to our Developer Forums, Technical FAQs, Service Status page, and Premium Support.

Resource: AWS Premium Support
Description: The primary web page for information about AWS Premium Support, a one-on-one, fast-response support channel to help you build and run applications on AWS Infrastructure Services.

Resource: Amazon S3 product information
Description: The primary web page for information about Amazon S3.

Resource: Contact Us
Description: A central contact point for inquiries concerning AWS billing, account, events, abuse, etc.

Resource: Conditions of Use
Description: Detailed information about the copyright and trademark usage at Amazon.com and other topics.

Document History

The following table describes the important changes since the last release of the Amazon Simple Storage Service Developer Guide.

Relevant Dates to this History:
• Current product version: 2006-03-01
• Last documentation update: August 11, 2016

Change: IPv6 support
Description: Amazon S3 now supports Internet Protocol version 6 (IPv6). You can access Amazon S3 over IPv6 by using dual-stack endpoints. For more information, see Making Requests to Amazon S3 over IPv6 (p. 13).
Date: In this release

Change: Asia Pacific (Mumbai) region
Description: Amazon S3 is now available in the Asia Pacific (Mumbai) region. For more information about Amazon S3 regions and endpoints, see Regions and Endpoints in the AWS General Reference.
Date: June 27, 2016

Change: Amazon S3 Transfer Acceleration
Description: Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. For more information, see Amazon S3 Transfer Acceleration (p. 81).
Date: April 19, 2016

Change: Lifecycle support to remove expired object delete markers
Description: Lifecycle configuration Expiration action now allows you to direct Amazon S3 to remove expired object delete markers in a versioned bucket. For more information, see Elements to Describe Lifecycle Actions (p. 115).
Date: March 16, 2016

Change: Bucket lifecycle configuration now supports action to abort incomplete multipart uploads
Description: Bucket lifecycle configuration now supports the AbortIncompleteMultipartUpload action that you can use to direct Amazon S3 to abort multipart uploads that don't complete within a specified number of days after being
initiated. When a multipart upload becomes eligible for an abort operation, Amazon S3 deletes any uploaded parts and aborts the multipart upload.
For conceptual information, see the following topics in the Amazon Simple Storage Service Developer Guide:
• Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p. 167)
• Elements to Describe Lifecycle Actions (p. 115)
The following APIs have been updated to support the new action:
• PUT Bucket lifecycle – The XML configuration now allows you to specify the AbortIncompleteMultipartUpload action in a lifecycle configuration rule.
• List Parts and Initiate Multipart Upload – Both of these APIs now return two additional response headers (x-amz-abort-date and x-amz-abort-rule-id) if the bucket has a lifecycle rule that specifies the AbortIncompleteMultipartUpload action. These headers in the response indicate when the initiated multipart upload will become eligible for abort operation and which lifecycle rule is applicable.
Date: March 16, 2016

Change: Asia Pacific (Seoul) region
Description: Amazon S3 is now available in the Asia Pacific (Seoul) region. For more information about Amazon S3 regions and endpoints, see Regions and Endpoints in the AWS General Reference.
Date: January 6, 2016

Change: New condition key and a Multipart Upload change
Description: IAM policies now support an Amazon S3 s3:x-amz-storage-class condition key. For more information, see Specifying Conditions in a Policy (p. 315).
You no longer need to be the initiator of a multipart upload to upload parts and complete the upload. For more information, see Multipart Upload API and Permissions (p. 169).
Date: December 14, 2015

Change: Renamed the US Standard region
Description: Changed the region name string from "US Standard" to "US East (N. Virginia)". This is only a region name update; there is no change in the functionality.
Date: December 11, 2015

Change: New storage class
Description: Amazon S3 now offers a new storage class, STANDARD_IA (IA, for infrequent access), for storing objects. This storage class is optimized for long-lived and less frequently accessed data. For more information, see Storage Classes (p. 103).
Lifecycle configuration feature updates now allow you to transition objects to the STANDARD_IA storage class. For more information, see Object Lifecycle Management (p. 109).
Previously, the cross-region replication feature used the storage class of the source object for object replicas. Now, when you configure cross-region replication, you can specify a storage class for the object replica created in the destination bucket. For more information, see Cross-Region Replication (p. 492).
Date: September 16, 2015

Change: AWS CloudTrail integration
Description: New AWS CloudTrail integration allows you to record Amazon S3 API activity in your S3 bucket. You can use CloudTrail to track S3 bucket creations or deletions, access control modifications, or lifecycle policy changes. For more information, see Logging Amazon S3 API Calls By Using AWS CloudTrail (p. 526).
Date: September 1, 2015

Change: Bucket limit increase
Description: Amazon S3 now supports bucket limit increases. By default,
customers can create up to 100 buckets in their AWS account. Customers who need additional buckets can increase that limit by submitting a service limit increase. For information about how to increase your bucket limit, go to AWS Service Limits in the AWS General Reference. For more information, see Creating a Bucket (p. 59) and Bucket Restrictions and Limitations (p. 62).
Date: August 4, 2015

Change: Consistency model update
Description: Amazon S3 now supports read-after-write consistency for new objects added to Amazon S3 in the US East (N. Virginia) region. Prior to this update, all regions except the US East (N. Virginia) region supported read-after-write consistency for new objects uploaded to Amazon S3. With this enhancement, Amazon S3 now supports read-after-write consistency in all regions for new objects added to Amazon S3. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. For more information, see Regions (p. 4).
Date: August 4, 2015

Change: Event notifications
Description: Amazon S3 event notifications have been updated to add notifications when objects are deleted and to add filtering on object names with prefix and suffix matching. For more information, see Configuring Amazon S3 Event Notifications (p. 472).
Date: July 28, 2015

Change: Amazon CloudWatch integration
Description: New Amazon CloudWatch integration allows you to monitor and set alarms on your Amazon S3 usage through CloudWatch metrics for Amazon S3. Supported metrics include total bytes for standard storage, total bytes for Reduced-Redundancy storage, and total number of objects for a given S3 bucket. For more information, see Monitoring Amazon S3 with Amazon CloudWatch (p. 523).
Date: July 28, 2015

Change: Support for deleting and emptying non-empty buckets
Description: Amazon S3 now supports deleting and emptying non-empty buckets. For more information, see Deleting or Emptying a Bucket (p. 67).
Date: July 16, 2015

Change: Bucket policies for Amazon VPC endpoints
Description: Amazon S3 has added support for bucket policies for Amazon Virtual Private Cloud (Amazon VPC) endpoints. You can use S3 bucket policies to control access to buckets from specific Amazon VPC endpoints or specific VPCs. VPC endpoints are easy to configure, are highly reliable, and provide a secure connection to Amazon S3 without requiring a gateway or a NAT instance. For more information, see Example Bucket Policies for VPC Endpoints for Amazon S3 (p. 341).
Date: April 29, 2015

Change: Event notifications
Description: Amazon S3 event notifications have been updated to support the switch to resource-based permissions for AWS Lambda functions. For more information, see Configuring Amazon S3 Event Notifications (p. 472).
Date: April 9, 2015

Change: Cross-region replication
Description: Amazon S3 now supports cross-region replication. Cross-region replication is the automatic, asynchronous copying of objects across buckets in different AWS regions. For more information, see Cross-Region Replication (p. 492).
Date: March 24, 2015

Change: Event notifications
Description: Amazon S3 now supports new event types and destinations in a bucket notification configuration. Prior to this release, Amazon S3 supported only the s3:ReducedRedundancyLostObject event type and an Amazon SNS topic as
the destination. For more information about the new event types, see Configuring Amazon S3 Event Notifications (p. 472).
Date: November 13, 2014

Change: Server-side encryption with AWS Key Management Service (KMS)
Description: Amazon S3 now supports server-side encryption using AWS Key Management Service. This feature allows you to manage the envelope key through KMS, and Amazon S3 calls KMS to access the envelope key within the permissions you set.
For more information about server-side encryption with KMS, see Protecting Data Using Server-Side Encryption with AWS Key Management Service.
Date: November 12, 2014

Change: EU (Frankfurt) region
Description: Amazon S3 is now available in the EU (Frankfurt) region.
Date: October 23, 2014

Change: Server-side encryption with customer-provided encryption keys
Description: Amazon S3 now supports server-side encryption using customer-provided encryption keys (SSE-C). Server-side encryption enables you to request Amazon S3 to encrypt your data at rest. When using SSE-C, Amazon S3 encrypts your objects with the custom encryption keys that you provide. Since Amazon S3 performs the encryption for you, you get the benefits of using your own encryption keys without the cost of writing or executing your own encryption code.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Date: June 12, 2014

Change: Lifecycle support for versioning
Description: Prior to this release, lifecycle configuration was supported only on nonversioned buckets. Now you can configure lifecycle on both nonversioned and versioning-enabled buckets. For more information, see Object Lifecycle Management (p. 109).
Date: May 20, 2014

Change: Access control topics revised
Description: Revised Amazon S3 access control documentation. For more information, see Managing Access Permissions to Your Amazon S3 Resources (p. 266).
Date: April 15, 2014

Change: Server access logging topic revised
Description: Revised server access logging documentation. For more information, see Server Access Logging (p. 546).
Date: November 26, 2013

Change: .NET SDK samples updated to version 2.0
Description: .NET SDK samples in this guide are now compliant to version 2.0.
Date: November 26, 2013

Change: SOAP Support Over HTTP Deprecated
Description: SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
Date: September 20, 2013

Change: IAM policy variable support
Description: The IAM access policy language now supports variables. When a policy is evaluated, any policy variables are replaced with values that are supplied by context-based information from the authenticated user's session. You can use policy variables to define general purpose policies without explicitly listing all the components of the policy. For more information about policy variables, see IAM Policy Variables Overview in the IAM User Guide.
For examples of policy variables in Amazon S3, see User Policy Examples (p. 343).
Date: April 3, 2013

Change: Console support for Requester Pays
Description: You can now configure your bucket for Requester Pays by using the Amazon S3 console. For more
information, see Configure Requester Pays by Using the Amazon S3 Console (p. 93).
Date: December 31, 2012

Change: Root domain support for website hosting
Description: Amazon S3 now supports hosting static websites at the root domain. Visitors to your website can access your site from their browser without specifying "www" in the web address (e.g., "example.com"). Many customers already host static websites on Amazon S3 that are accessible from a "www" subdomain (e.g., "www.example.com"). Previously, to support root domain access, you needed to run your own web server to proxy root domain requests from browsers to your website on Amazon S3. Running a web server to proxy requests introduces additional costs, operational burden, and another potential point of failure. Now, you can take advantage of the high availability and durability of Amazon S3 for both "www" and root domain addresses. For more information, see Hosting a Static Website on Amazon S3 (p. 449).
Date: December 27, 2012

Change: Console revision
Description: The Amazon S3 console has been updated. The documentation topics that refer to the console have been revised accordingly.
Date: December 14, 2012

Change: Support for Archiving Data to Amazon Glacier
Description: Amazon S3 now supports a storage option that enables you to utilize Amazon Glacier's low-cost storage service for data archival. To archive objects, you define archival rules identifying objects and a timeline when you want Amazon S3 to archive these objects to Amazon Glacier. You can easily set the rules on a bucket using the Amazon S3 console or programmatically using the Amazon S3 API or AWS SDKs.
For more information, see Object Lifecycle Management (p. 109).
Date: November 13, 2012

Change: Support for Website Page Redirects
Description: For a bucket that is configured as a website, Amazon S3 now supports redirecting a request for an object to another object in the same bucket or to an external URL. For more information, see Configuring a Web Page Redirect (p. 460). For information about hosting websites, see Hosting a Static Website on Amazon S3 (p. 449).
Date: October 4, 2012

Change: Support for Cross-Origin Resource Sharing (CORS)
Description: Amazon S3 now supports Cross-Origin Resource Sharing (CORS). CORS defines a way in which client web applications that are loaded in one domain can interact with or access resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications on top of Amazon S3 and selectively allow cross-domain access to your Amazon S3 resources. For more information, see Cross-Origin Resource Sharing (CORS) (p. 131).
Date: August 31, 2012

Change: Support for Cost Allocation Tags
Description: Amazon S3 now supports cost allocation tagging, which allows you to label S3 buckets so you can more easily track their cost against projects or other criteria. For more information about using tagging for buckets, see Cost Allocation Tagging (p. 96).
Date: August 21, 2012

Change: Support for MFA-protected API access in bucket policies
Description: Amazon S3 now supports MFA-protected API access, a feature that can enforce AWS Multi-Factor Authentication for an
extra level of security when accessing your Amazon S3 resources. It is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code. For more information, go to AWS Multi-Factor Authentication. You can now require MFA authentication for any requests to access your Amazon S3 resources.
To enforce MFA authentication, Amazon S3 now supports the aws:MultiFactorAuthAge key in a bucket policy. For an example bucket policy, see Adding a Bucket Policy to Require MFA Authentication (p. 339).
Date: July 10, 2012

Change: Object Expiration support
Description: You can use Object Expiration to schedule automatic removal of data after a configured time period. You set object expiration by adding lifecycle configuration to a bucket.
Date: December 27, 2011

Change: New region supported
Description: Amazon S3 now supports the South America (São Paulo) region. For more information, see Accessing a Bucket (p. 60).
Date: December 14, 2011

Change: Multi-Object Delete
Description: Amazon S3 now supports the Multi-Object Delete API that enables you to delete multiple objects in a single request. With this feature, you can remove large numbers of objects from Amazon S3 more quickly than using multiple individual DELETE requests. For more information, see Deleting Objects (p. 237).
Date: December 7, 2011

Change: New region supported
Description: Amazon S3 now supports the US West (Oregon) region. For more information, see Buckets and Regions (p. 60).
Date: November 8, 2011

Change: Documentation Update
Description: Documentation bug fixes.
Date: November 8, 2011

Change: Documentation Update
Description: In addition to documentation bug fixes, this release includes the following enhancements:
• New server-side encryption sections using the AWS SDK for PHP (see Specifying Server-Side Encryption Using the AWS SDK for PHP (p. 391)) and the AWS SDK for Ruby (see Specifying Server-Side Encryption Using the AWS SDK for Ruby (p. 393)).
• New section on creating and testing Ruby samples (see Using the AWS SDK for Ruby Version 2 (p. 568)).
Date: October 17, 2011

Change: Server-side encryption support
Description: Amazon S3 now supports server-side encryption. It enables you to request Amazon S3 to encrypt your data at rest, that is, encrypt your object data when Amazon S3 writes your data to disks in its data centers. In addition to REST API updates, the AWS SDK for Java and .NET provide necessary functionality to request server-side encryption. You can also request server-side encryption when uploading objects using the AWS Management Console. To learn more about data encryption, go to Using Data Encryption.
Date: October 4, 2011

Change: Documentation Update
Description: In addition to documentation bug fixes, this release includes the following enhancements:
• Added Ruby and PHP samples to the Making Requests (p. 11) section.
• Added sections describing how to generate and use pre-signed URLs. For more information, see Share an Object with Others (p. 152) and Uploading Objects Using Pre-Signed URLs (p. 206).
• Updated an existing section to introduce AWS Explorers for Eclipse and Visual Studio. For more information, see Using the AWS SDKs, CLI, and Explorers (p. 560).
Date: September 22, 2011

Change: Support for sending requests using temporary security credentials
Description: In addition
Support for sending requests using temporary security credentials
In addition to using your AWS account and IAM user security credentials to send authenticated requests to Amazon S3, you can now send requests using temporary security credentials that you obtain from AWS Identity and Access Management (IAM). You can use the AWS Security Token Service API or the AWS SDK wrapper libraries to request these temporary credentials from IAM. You can request these temporary security credentials for your own use or hand them out to federated users and applications. This feature enables you to manage your users outside AWS and provide them with temporary security credentials to access your AWS resources.
For more information, see Making Requests (p 11).
For more information about IAM support for temporary security credentials, see Temporary Security Credentials in the IAM User Guide.
August 3, 2011

Multipart Upload API extended to enable copying objects up to 5 TB
Prior to this release, the Amazon S3 API supported copying objects of up to 5 GB in size. To enable copying objects larger than 5 GB, Amazon S3 now extends the multipart upload API with a new operation, Upload Part (Copy). You can use this multipart upload operation to copy objects up to 5 TB in size. For more information, see Copying Objects (p 212).
For conceptual information about the multipart upload API, see Uploading Objects Using Multipart Upload API (p 165).
June 21, 2011

SOAP API calls over HTTP disabled
To increase security, SOAP API calls over HTTP are disabled. Authenticated and anonymous SOAP requests must be sent to Amazon S3 using SSL.
June 6, 2011

IAM enables cross-account delegation
Previously, to access an Amazon S3 resource, an IAM user needed permissions from both the parent AWS account and the Amazon S3 resource owner. With cross-account access, the IAM user now needs permission only from the owner account. That is, if a resource owner grants access to an AWS account, the AWS account can now grant its IAM users access to these resources.
For more information, see Creating a Role to Delegate Permissions to an IAM User in the IAM User Guide.
For more information on specifying principals in a bucket policy, see Specifying a Principal in a Policy (p 310).
June 6, 2011

New link
This service's endpoint information is now located in the AWS General Reference. For more information, go to Regions and Endpoints in the AWS General Reference.
March 1, 2011

Support for hosting static websites in Amazon S3
Amazon S3 introduces enhanced support for hosting static websites. This includes support for index documents and custom error documents. When using these features, requests to the root of your bucket or a subfolder (for example, http://mywebsite.com/subfolder) return your index document instead of the list of objects in your bucket. If an error is encountered, Amazon S3 returns your custom error message instead of an Amazon S3 error message. For more information, see Hosting a Static Website on Amazon S3 (p 449).
February 17, 2011
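A minimal sketch of the website configuration described in the entry above, using the AWS SDK for Java (version 1.x); the bucket name and the index and error document names are hypothetical placeholders, and the bucket must already exist and be owned by you.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;

public class WebsiteConfigurationSketch {
    public static void main(String[] args) {
        // Hypothetical bucket name; substitute your own.
        String bucketName = "mywebsite-bucket";

        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        // index.html is served for requests to the bucket root or a subfolder;
        // error.html is returned when a request cannot be fulfilled.
        BucketWebsiteConfiguration configuration =
                new BucketWebsiteConfiguration("index.html", "error.html");
        s3.setBucketWebsiteConfiguration(bucketName, configuration);
    }
}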
Response Header API Support
The GET Object REST API now allows you to change the response headers of the REST GET Object request for each request. That is, you can alter object metadata in the response without altering the object itself. For more information, see Getting Objects (p 143).
January 14, 2011

Large object support
Amazon S3 has increased the maximum size of an object you can store in an S3 bucket from 5 GB to 5 TB. If you are using the REST API, you can upload objects of up to 5 GB in size in a single PUT operation. For larger objects, you must use the multipart upload REST API to upload objects in parts. For more information, see Uploading Objects Using Multipart Upload API (p 165).
December 9, 2010

Multipart upload
Multipart upload enables faster, more flexible uploads into Amazon S3. It allows you to upload a single object as a set of parts. For more information, see Uploading Objects Using Multipart Upload API (p 165).
November 10, 2010

Canonical ID support in bucket policies
You can now specify canonical IDs in bucket policies. For more information, see Access Policy Language Overview (p 308).
September 17, 2010

Amazon S3 works with IAM
This service now integrates with AWS Identity and Access Management (IAM). For more information, go to AWS Services That Work with IAM in the IAM User Guide.
September 2, 2010

Notifications
The Amazon S3 notifications feature enables you to configure a bucket so that Amazon S3 publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic when Amazon S3 detects a key event on a bucket. For more information, see Setting Up Notification of Bucket Events (p 472).
July 14, 2010

Bucket policies
Bucket policies are an access management system that you use to set access permissions across buckets, objects, and sets of objects. This functionality supplements, and in many cases replaces, access control lists. For more information, see Using Bucket Policies and User Policies (p 308).
July 6, 2010

Path-style syntax available in all regions
Amazon S3 now supports the path-style syntax for any bucket in the US Classic Region, or if the bucket is in the same region as the endpoint of the request. For more information, see Virtual Hosting (p 50).
June 9, 2010

New endpoint for EU (Ireland)
Amazon S3 now provides an endpoint for EU (Ireland): https://s3-eu-west-1.amazonaws.com.
June 9, 2010

Console
You can now use Amazon S3 through the AWS Management Console. You can read about all of the Amazon S3 functionality in the console in the Amazon Simple Storage Service Console User Guide.
June 9, 2010

Reduced Redundancy
Amazon S3 now enables you to reduce your storage costs by storing objects in Amazon S3 with reduced redundancy. For more information, see Reduced Redundancy Storage (p 6).
May 12, 2010
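A minimal sketch of storing an object with the Reduced Redundancy storage class described in the entry above, using the AWS SDK for Java (version 1.x); the bucket name, key, and local file are hypothetical placeholders.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

public class ReducedRedundancySketch {
    public static void main(String[] args) {
        // Hypothetical bucket, key, and local file; substitute your own values.
        String bucketName = "examplebucket";
        String key = "thumbnails/photo1-thumb.jpg";
        File file = new File("photo1-thumb.jpg");

        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        // Easily re-creatable data (such as thumbnails) can be stored at lower
        // cost by choosing the Reduced Redundancy storage class at upload time.
        PutObjectRequest request = new PutObjectRequest(bucketName, key, file)
                .withStorageClass(StorageClass.ReducedRedundancy);
        s3.putObject(request);
    }
}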
New region supported
Amazon S3 now supports the Asia Pacific (Singapore) region. For more information, see Buckets and Regions (p 60).
April 28, 2010

Object Versioning
This release introduces object versioning. All objects can now have a key and a version. If you enable versioning for a bucket, Amazon S3 gives all objects added to the bucket a unique version ID. This feature enables you to recover from unintended overwrites and deletions. For more information, see Versioning (p 8) and Using Versioning (p 423).
February 8, 2010

New region supported
Amazon S3 now supports the US West (N. California) region. The new endpoint for requests to this region is s3-us-west-1.amazonaws.com. For more information, see Buckets and Regions (p 60).
December 2, 2009

AWS SDK for .NET
AWS now provides libraries, sample code, tutorials, and other resources for software developers who prefer to build applications using .NET language-specific APIs instead of REST or SOAP. These libraries provide basic functions (not included in the REST or SOAP APIs), such as request authentication, request retries, and error handling, so that it's easier to get started. For more information about language-specific libraries and resources, see Using the AWS SDKs, CLI, and Explorers (p 560).
November 11, 2009

AWS Glossary
For the latest AWS terminology, see the AWS Glossary in the AWS General Reference.