S3 Website

August 28, 2021

🔗 Generate a static website with hugo

Generate new site and select a theme

brew install hugo

hugo new site s3website

cd s3website

git clone https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke

echo 'theme = "ananke"' >> config.toml

Change the title

sed -i.bak 's/My New Hugo Site/S3 Website/g' config.toml

Create a (draft) post

hugo new posts/my-first-post.md

echo 'It Works!' >> content/posts/my-first-post.md
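The generated file will contain front matter similar to the following (the date shown is illustrative), with the echoed "It Works!" line appended below it:

```
---
title: "My First Post"
date: 2021-08-28T12:00:00+02:00
draft: true
---

It Works!
```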

To preview on localhost, first start the hugo server with drafts enabled

hugo server -D

Publish the draft (by changing “draft: true” to “draft: false”)

sed -i.bak 's/draft: true/draft: false/g' content/posts/my-first-post.md

Build static pages into the public directory

hugo
List site map

tree public
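The output will look something like this (the exact contents depend on the theme):

```
public
├── 404.html
├── index.html
├── index.xml
├── posts
│   ├── index.html
│   └── my-first-post
│       └── index.html
└── sitemap.xml
```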

🔗 Configure S3 bucket as a static website

First, install and configure awscli. The commands below assume you’ve created a named profile called your-profile

Create an S3 bucket. The bucket name must be globally unique

aws --profile your-profile s3 mb s3://s3website.mozey.co

Configure the bucket we just created as a static website. Note that the index and error documents must match the paths as per the site map above

aws --profile your-profile s3 website s3://s3website.mozey.co/ --index-document index.html --error-document 404.html

aws --profile your-profile s3api get-bucket-website --bucket s3website.mozey.co
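The second command confirms the configuration was applied, the output should look something like this:

```
{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "404.html"
    }
}
```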

Edit the public access settings using put-public-access-block. By default, “Amazon S3 blocks public access to your account and buckets."

# Note: the `aws s3 website` command only sets the website configuration,
# it does not change the public access block settings
aws --profile your-profile s3api put-public-access-block \
--bucket s3website.mozey.co \
--public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"

Create a bucket policy to make the content public. Note that the “Resource” ARN contains the bucket name s3website.mozey.co; the rest is a standard public-read policy

echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::s3website.mozey.co/*"]
        }
    ]
}' > bucket-policy.json

Apply the policy to your bucket

aws --profile your-profile s3api put-bucket-policy --bucket s3website.mozey.co --policy file://bucket-policy.json

aws --profile your-profile s3api get-bucket-policy --bucket s3website.mozey.co

Your bucket is now publicly accessible!


If you haven’t deployed the site yet, you might see something like the following when visiting the bucket website endpoint

404 Not Found
Message: The specified key does not exist.
Key: index.html

🔗 Deploy to S3

Using aws s3 sync

aws --profile your-profile s3 sync public s3://s3website.mozey.co

Alternative deployment tools (not AWS specific)


Create a CNAME for s3website.mozey.co pointing at the bucket website endpoint URL (the target must end with a dot)
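In zone-file syntax the record might look like this (the TTL and the eu-west-2 region are assumptions, see the Troubleshooting section for the endpoint format):

```
s3website.mozey.co.  300  IN  CNAME  s3website.mozey.co.s3-website.eu-west-2.amazonaws.com.
```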

Note, if you want the root domain URL to redirect to the subdomain, e.g. you’d like your site to be available via both mozey.co and www.mozey.co:

Configure the root domain bucket to redirect to the subdomain. The command to do that will look something like this

echo '{
    "RedirectAllRequestsTo": {
        "HostName": "www.mozey.co",
        "Protocol": "https"
    }
}' > bucket-redirect.json

aws --profile your-profile s3api put-bucket-website --bucket mozey.co --website-configuration file://bucket-redirect.json

Question: Why not use one bucket for both the root and subdomain? The answer is that the bucket name must exactly match the host name for the website endpoint to resolve, so each host name needs its own bucket

Remember to set baseURL in your hugo site config, e.g. in config.toml

baseURL = "https://www.mozey.co/"

🔗 CloudFront

“Amazon S3 website endpoints do not support HTTPS or access points. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3."

See Using a website endpoint as the origin, with access restricted by a Referrer header “When you use the Amazon S3 static website endpoint, connections between CloudFront and Amazon S3 are available only over HTTP. To use HTTPS for connections between CloudFront and Amazon S3, configure an S3 REST API endpoint for your origin”

TLDR This requires a number of additional commands to set up a CloudFront distribution, request a certificate, etc

🔗 Cloudflare

“You can use Cloudflare to proxy sites that rely on Amazon Web Services (AWS) to store static content using Amazon’s Simple Storage Service (S3)"

Using the aws s3api put-bucket-policy command, replace the public policy with the one from the link, which ”…ensures that your site only responds to requests coming from the Cloudflare proxy”, based on the current list of IP address ranges used by the Cloudflare proxy

For existing domains, do it like this

For this blog the Cloudflare settings are

Automatic HTTPS Rewrites: ON
Always use HTTPS: ON
Auto Minify: NONE
Brotli Compression: ON

🔗 Backup Strategy

Assuming the site above is hosted in the primary AWS account, create a backup AWS account with a different email address (and billing?). The backup AWS account pulls data from the primary account, i.e. the primary does not have any permissions in the backup account. Other than that provision, and the globally unique S3 bucket name requirement, the primary and backup accounts should be identical. In case of emergency, change the Cloudflare DNS to use resources in the backup account until the primary has been restored
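As a sketch, a bucket policy on the primary bucket could grant the backup account read-only access, so the backup can pull without the primary holding any permissions in the backup account (the account ID 111111111111 is a placeholder):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BackupAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::s3website.mozey.co",
                "arn:aws:s3:::s3website.mozey.co/*"
            ]
        }
    ]
}
```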

🔗 Troubleshooting

The bucket website endpoint URL depends on the region, some regions use a dash (-), while others use a dot (.), e.g. eu-west-2 uses the dot format and us-west-2 uses the dash
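A quick sketch for assembling the endpoint host name in both formats (bucket name as above):

```shell
# Website endpoint formats, dot (newer regions) vs dash (older regions)
BUCKET="s3website.mozey.co"
echo "http://${BUCKET}.s3-website.eu-west-2.amazonaws.com"
echo "http://${BUCKET}.s3-website-us-west-2.amazonaws.com"
```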

From the link above for the aws s3 website command, “All files in the bucket that appear on the static site must be configured to allow visitors to open them. File permissions are configured separately from the bucket website configuration”. See Setting permissions for website access, however the aws s3api commands listed above should have taken care of this