Nuxt.js & AWS: Zero Downtime Deployment

Jan 27, 2019
Fuxing Loh
⚠️ This is an old post that I wrote in 2019 for a problem I faced in 2017–2018. Today there are many solutions to this problem, and I recommend you check out Vercel or Netlify, which provide a NoOps answer to it. Originally posted on Medium.

The Problem

I have been using Nuxt.js with AWS ECS, ELB & CloudFront for the past few months. Occasionally I come across this error: Loading chunk _ failed.

During an ELB rolling deployment, traffic is round-robined across target groups that are running different builds. Each build emits chunks with fresh content hashes, so once the first request (the HTML page) lands on one target group, every remaining asset request must be routed to that same target group to resolve, a guarantee that round-robin routing cannot make.

Using blue/green deployment won’t solve it either; there is still a short window during which the target groups switch.
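
To see why the switch window breaks chunk loading, here is a minimal sketch (the file names and hashes are made up): each build names its chunks by content hash, so a task from the new build cannot serve a chunk referenced by HTML that the old build rendered.

```shell
# Simulate two deployments: each build emits chunks named by content hash.
mkdir -p old/_nuxt new/_nuxt
echo 'console.log("v1")' > old/_nuxt/pages-index.abc123.js
echo 'console.log("v2")' > new/_nuxt/pages-index.def456.js

# A browser that loaded the old HTML still requests the old chunk name.
# If that request lands on a task running the new build, the file is missing:
test -f new/_nuxt/pages-index.abc123.js || echo "Loading chunk failed"
```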

The Solution

I searched through a variety of solutions. Most of them address the problem one way or another, but none of them utilizes the AWS stack to solve it. I don’t mind incurring a little more cost if it can save much more in DevOps effort.

My Considerations

  • Utilise AWS Stack (ECS, ELB, CloudFront)
  • As little DevOps as possible (Utilise Managed Services)
  • Must use Rolling Deployment (slowly migrate users instead of blue-green)

Existing Architecture

[Diagram: existing architecture]

The existing architecture routes traffic from Route 53 to CloudFront to the ALB, and then to multiple target groups in ECS. The ALB-to-ECS hop is where the error surfaces.

Proposed Architecture

[Diagram: proposed architecture]

The new architecture splits traffic at the CloudFront layer into two paths: the ALB handles all API services and non-cached objects, while S3 serves all compiled assets (JS, images, manifest).
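
The split can be expressed as CloudFront cache behaviors. Below is an illustrative fragment of such a distribution config; the origin IDs (`s3-assets`, `alb-origin`) and path patterns are placeholders, not my actual config:

```shell
# Write an illustrative fragment of a CloudFront distribution config:
# versioned asset paths map to the S3 origin, everything else falls
# through to the default behavior backed by the ALB.
cat > cloudfront-behaviors.json <<'EOF'
"CacheBehaviors": { "Items": [
  { "PathPattern": "/_nuxt/*", "TargetOriginId": "s3-assets" },
  { "PathPattern": "/meta/*",  "TargetOriginId": "s3-assets" }
] },
"DefaultCacheBehavior": { "TargetOriginId": "alb-origin" }
EOF
cat cloudfront-behaviors.json
```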

On deployment, ECS generates the assets and pushes /_nuxt and other versioned content into S3. ECS then proceeds to start the website, and the health check comes up afterwards. (The ELB will not route traffic to a task until its health check passes.)

Whichever target group the ELB hits, it will be able to complete the request, as every possible chunk is now in S3.

With this deployment strategy, a user who doesn’t refresh their browser, even after the deployed version is taken offline, will still have access to the required chunks. Furthermore, I can also route some decoupled traffic to Lambda@Edge if I need to (e.g., generating a 3rd-party API token or internal authentication).

Dockerfile
FROM node:8.12.0-alpine
 
# Install the AWS CLI (used by upload-s3.sh) plus build tooling in a
# single layer, then remove pip and purge the apk cache to keep the image small
RUN apk add --no-cache bash make curl openssh git groff less python py-pip \
    && ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime \
    && pip install awscli \
    && apk del --purge py-pip
 
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
 
ENV HOST 0.0.0.0
EXPOSE 3000
 
COPY package.json yarn.lock /usr/src/app/
RUN yarn --pure-lockfile
 
# NODE_ENV must be set after RUN yarn, otherwise dev dependencies would be skipped
ENV NODE_ENV production
COPY . /usr/src/app/
 
ENV ORIGIN https://www.munch.app
 
RUN yarn run build
CMD [ "yarn", "run", "cloudfront" ]
package.json
{
  "name": "website",
  "scripts": {
    "dev": "nuxt",
    "dev-debug": "node_modules/nuxt/bin/nuxt-dev",
    "build": "nuxt build",
    "start": "nuxt start",
    "cloudfront": "./upload-s3.sh && nuxt start",
    "lint": "eslint --ext .js,.vue --ignore-path .gitignore .",
    "precommit": "npm run lint"
  }
}
upload-s3.sh
#!/usr/bin/env bash
 
# 1 Year, _nuxt/ auto-versioned
aws s3 cp .nuxt/dist s3://www.munch.app.bucket/_nuxt \
                      --acl "public-read" \
                      --cache-control "public, max-age=31536000" \
                      --exclude '*' \
                      --include 'img/*' \
                      --include 'layouts/*' \
                      --include 'pages/*' \
                      --include 'app.*.js' \
                      --include 'manifest.*.js' \
                      --include 'vendor.*.js' \
                      --recursive
 
# 1 Year, static/meta self-versioned
aws s3 cp static/meta s3://www.munch.app.bucket/meta \
                      --acl "public-read" \
                      --cache-control "public, max-age=31536000" \
                      --recursive
 
# 1 Day
aws s3 cp static s3://www.munch.app.bucket \
                      --acl "public-read" \
                      --cache-control "public, max-age=86400" \
                      --exclude '*' \
                      --include 'favicon.ico' \
                      --include 'robots.txt' \
                      --include 'browserconfig.xml' \
                      --recursive

Why?

I needed a zero-downtime deployment strategy that is scalable and works well with the AWS stack.

Nuxt.js has built-in cache-busting for its assets. One advantage of this method is the ability to also cache additional files that live outside the /assets folder and have them delivered from a CDN.
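
As a sketch of that last point: any extra directory, not just Nuxt’s own assets, can be pushed to the bucket with its own cache policy, following the same pattern as upload-s3.sh. The `static/downloads` directory and the one-week TTL here are hypothetical examples, not part of the original setup.

```shell
# Hypothetical example: deliver an extra, self-versioned directory from the
# CDN with a one-week cache. Requires AWS credentials; not run locally.
aws s3 cp static/downloads s3://www.munch.app.bucket/downloads \
                      --acl "public-read" \
                      --cache-control "public, max-age=604800" \
                      --recursive
```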