AWS S3 - Private and Public Access

aws security


As I noticed yet another S3 bucket incident where data was not only accessed, but JavaScript code was added, I've decided to write a post describing the steps required to create a bucket with private access, and then (intentionally) make the objects public using a Content Delivery Network (CDN).

I also added a few words regarding VPC Endpoints, and detection of non-compliant resources (i.e. buckets with public access).

Working with default settings

Public access is disabled by default

In recent years, AWS has gone out of its way to make it difficult for users to accidentally configure public access for an S3 bucket.

So I created an S3 bucket, accepting the default settings.

If I were to store sensitive data in this bucket, I would consider using encryption; however, that is not the case, nor is it the object of this experiment.

Testing public access

I’ll upload a photo of a bucket spilling its contents and try to access it from the browser.

Public access is denied, as expected. So unless you want to make the objects public, there's not much else to do.

Using a VPC Endpoint

A VPC endpoint offers private connectivity to services hosted in AWS from within your VPC, without using an internet gateway.

One of the benefits is that access can be restricted to a specific VPC Endpoint rather than using IP address ranges.

This could be used to allow write access from within the VPC.
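As a sketch, a bucket policy that denies any request not coming through a specific Gateway endpoint could look like the following; the bucket name my-private-bucket and the endpoint ID vpce-0123456789abcdef0 are placeholders:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessOnlyViaVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-private-bucket",
                "arn:aws:s3:::my-private-bucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-0123456789abcdef0"
                }
            }
        }
    ]
}
```

Note that such a blanket Deny also applies to you and to the AWS console, so in practice it is usually scoped to specific actions or principals.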

Another benefit is that it can help you reduce costs, e.g. data transfer charges for NAT gateways.

More info in the documentation regarding Gateway VPC endpoints and Endpoints for Amazon S3.

What if I want to make the content public?

I will always use a Content Delivery Network (CDN) instead of exposing the S3 bucket directly because of the following.

Lower cost (at least for some CDNs)

You can find the cost of serving objects by looking at Data Transfer OUT From Amazon S3 To Internet on the S3 pricing page, in the Data Transfer tab.

As there are many CDNs, I cannot claim that all of them are cheaper, so I picked two: AWS CloudFront and Cloudflare.

AWS CloudFront

Data transfer between AWS S3 and AWS CloudFront is free, so I would only pay for CloudFront usage. More details on the CloudFront pricing page.


Cloudflare

Data transfer between AWS S3 and other CDNs, like Cloudflare, is not free; however, I will only pay for objects which are not already cached by the CDN.

As Cloudflare has a free plan for personal use, the only cost I will incur is the transfer from S3 to Cloudflare for objects which are not cached. More about how Cloudflare works.

Increased performance

Even if you follow the S3 performance best practices, you could still run into rate limits.

The objects contained in S3 buckets are stored in a single AWS region, therefore users in distant parts of the globe will experience increased latency. More info in the Performance Guidelines for Amazon S3.

Allowing public access through Cloudflare

I will create a bucket named s3-demo.testbox.blue, which I plan to access using the domain with the same name. More details in the AWS documentation.

Bucket Policy

As I want to allow only Cloudflare to access my S3 bucket directly, I will enforce this via a Bucket Policy.

In the code below I kept only one IP in the list to keep it short; the full list and a more detailed explanation can be found in this Cloudflare article.

The action is s3:GetObject, so the CDN will only be able to retrieve files from the S3 bucket.

I also took an optional step of preventing the removal of this bucket until the statement denying s3:DeleteBucket is removed.

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteBucket",
            "Resource": "arn:aws:s3:::s3-demo.testbox.blue"
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::s3-demo.testbox.blue/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [

If I try to save this policy with the current configuration, I will not be allowed to.

First, public access must be allowed.
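For reference, the bucket's Public Access Block settings end up looking something like this (the field names follow the S3 PublicAccessBlock configuration; which flags you keep enabled is up to you, but BlockPublicPolicy must be off for the policy to save):

```json
{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": false
}
```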

I also had to confirm this action.

Now I could save the policy.

Static website hosting

Next I enabled static hosting for the bucket.
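The equivalent website configuration, as accepted by the S3 PutBucketWebsite API, would be roughly the following; the index.html and error.html names are assumptions:

```json
{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "error.html" }
}
```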

CNAME for S3 bucket

I will add a CNAME record s3-demo for the testbox.blue domain pointing to s3-demo.testbox.blue.s3-website-eu-west-1.amazonaws.com
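In zone-file notation, the record would look roughly like this (the 300-second TTL is an arbitrary choice):

```
s3-demo.testbox.blue.  300  IN  CNAME  s3-demo.testbox.blue.s3-website-eu-west-1.amazonaws.com.
```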

Testing public access (again)

The uploaded photo is now accessible using the custom domain.

The photo is not accessible directly from S3, not even when using the website URL, which was the expected outcome.

Some more comments

Avoiding “Objects can be public” state

Creating a new bucket and unchecking Block all public access will display the bucket's access as Objects can be public.

Now I could upload another photo, and update the object's Access Control List (ACL) in order to make it public.

The object is now publicly accessible.

However, if someone were to look at the list of S3 buckets, it wouldn't be clear whether there are any public objects or not.

The security tools I’ve seen so far do not report this issue, so it’s best to avoid this gray area.

Setting up notifications/remediation actions

AWS Config has the s3-bucket-public-read-prohibited managed rule, and support for remediation actions.

If I didn't want to use AWS Config, one idea would be to use GetBucketPolicyStatus, for example from a Lambda function that runs regularly, and then send a notification, or even implement the remediation action as another Lambda.
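A minimal sketch of the detection logic, with the AWS calls mocked out: the response shape matches the documented GetBucketPolicyStatus API, but the bucket names are made up, and in a real Lambda the statuses would come from boto3's s3.get_bucket_policy_status(Bucket=name).

```python
def bucket_is_public(policy_status: dict) -> bool:
    """Interpret a GetBucketPolicyStatus response.

    The API returns {"PolicyStatus": {"IsPublic": true/false}}; any
    bucket flagged IsPublic is treated as non-compliant here.
    """
    return bool(policy_status.get("PolicyStatus", {}).get("IsPublic", False))


def find_public_buckets(statuses: dict) -> list:
    """Return the names of non-compliant (public) buckets.

    `statuses` maps bucket name -> GetBucketPolicyStatus response.
    In a real Lambda, you would iterate over s3.list_buckets() and call
    s3.get_bucket_policy_status(Bucket=name) for each one.
    """
    return [name for name, status in statuses.items() if bucket_is_public(status)]


# Example with mocked API responses (bucket names are placeholders):
statuses = {
    "private-bucket": {"PolicyStatus": {"IsPublic": False}},
    "leaky-bucket": {"PolicyStatus": {"IsPublic": True}},
}
print(find_public_buckets(statuses))  # ['leaky-bucket']
```

The notification or remediation step would then act on the returned list, e.g. publish to an SNS topic or call PutPublicAccessBlock on the offending bucket.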


In my opinion, the best way to prepare for such an incident is to assume that it will happen sooner or later: make sure you get notified, and, if possible, remediate the problem automatically.