Host the server container in an AWS ECS Fargate cluster

We have already created a Docker image for the Node.js server. We will create an AWS ECS Fargate cluster in AWS and host the container there.

Create an ECR repository for the image

Select the Elastic Container Registry

Create a new repository

Enter a name, enable Tag immutability and Scan on push
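
If you prefer the command line, the repository can also be created with the AWS CLI. A minimal sketch; the repository name and region are placeholders:

# Create the ECR repository with tag immutability and scan on push
aws ecr create-repository \
    --repository-name MY_ECR_REPOSITORY_NAME \
    --image-tag-mutability IMMUTABLE \
    --image-scanning-configuration scanOnPush=true \
    --region us-east-1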

Select the repository you just created and click the View push commands button

Follow the instructions on the next page to authenticate to the registry, build your Docker image, and push it to the registry. Replace the placeholders with your account number, repository, image name, and tag.

 # Authenticate to ECR
 aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com
 # Build the image
 docker build -t MY_DOCKER_IMAGE_NAME .
 # Tag the image
 docker tag MY_DOCKER_IMAGE_NAME:latest MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_DOCKER_IMAGE_NAME:MY_IMAGE_TAG
 # Push the image
 docker push MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_DOCKER_IMAGE_NAME:MY_IMAGE_TAG

If this is the first ECS cluster in the account, the Get started button launches the ECS wizard. See Using the ECS wizard to create the cluster, service, and task definition below.

Create the ECS cluster

Create a new ECS cluster in the new VPC

  • Select the Fargate cluster template

For production clusters, add a third subnet for redundancy. This way, if one of the availability zones develops issues, the cluster can use the third subnet for high availability.

For production clusters, also enable Container Insights for advanced logging
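
The same cluster can be created from the AWS CLI. A minimal sketch; the cluster name is a placeholder:

# Create the cluster with Container Insights enabled
aws ecs create-cluster \
    --cluster-name MY_CLUSTER_NAME \
    --settings name=containerInsights,value=enabled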

Create a security group

Create a security group in the new VPC with an ingress rule for the necessary port and protocol. Open ports 3000-3001: port 3000 for production traffic and port 3001 for the test listener used during blue-green deployment.
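
A minimal AWS CLI sketch, assuming placeholder names and IDs you substitute:

# Create the security group in the new VPC
aws ec2 create-security-group \
    --group-name MY_SECURITY_GROUP_NAME \
    --description "Game server ingress" \
    --vpc-id MY_VPC_ID
# Open ports 3000-3001 for the production and test listeners
aws ec2 authorize-security-group-ingress \
    --group-id MY_SECURITY_GROUP_ID \
    --protocol tcp \
    --port 3000-3001 \
    --cidr 0.0.0.0/0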

Create an Application Load Balancer

Create a new Application Load Balancer in the new VPC, but do not add any listeners and target groups. Those will be created by the ECS Fargate Service creation.

This is fine; we don't need listeners yet.

Add the security group to the Load Balancer.
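
From the AWS CLI the load balancer can be created with the security group already attached. A sketch with placeholder values:

# Create the Application Load Balancer in two subnets of the new VPC
aws elbv2 create-load-balancer \
    --name MY_LOAD_BALANCER_NAME \
    --type application \
    --scheme internet-facing \
    --subnets MY_SUBNET_ID_1 MY_SUBNET_ID_2 \
    --security-groups MY_SECURITY_GROUP_ID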

We have to create a temporary target group; we will delete it later.

Do not register any targets, the ECS service creation process will create the target group and register the target.

Create an ECS Task Definition

We will use the task definition when we create the service

In this example, we will create a Fargate Task Definition

Select the memory and CPU sizes and click the Add container button

Configure the container

Set the environment variables
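
The console steps above map to a task definition JSON document. A trimmed sketch that registers it from the AWS CLI; the family, container name, image, and account number are placeholders:

# Write the task definition and register it
cat > taskdef.json <<'EOF'
{
    "family": "MY_TASK_FAMILY",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::MY_ACCOUNT_NUMBER:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "MY_CONTAINER_NAME",
            "image": "MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_DOCKER_IMAGE_NAME:latest",
            "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }],
            "environment": [{ "name": "NODE_ENV", "value": "production" }]
        }
    ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json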

Create a service role for CodeDeploy

Create a service role for CodeDeploy in the IAM console.
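
A sketch of the same steps with the AWS CLI; the role name is a placeholder, and AWSCodeDeployRoleForECS is the AWS managed policy for ECS blue-green deployments:

# Trust policy that lets CodeDeploy assume the role
cat > codedeploy-trust.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "codedeploy.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF
aws iam create-role \
    --role-name MY_CODEDEPLOY_ROLE_NAME \
    --assume-role-policy-document file://codedeploy-trust.json
aws iam attach-role-policy \
    --role-name MY_CODEDEPLOY_ROLE_NAME \
    --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS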

Create the service

Create a new Fargate service in the new cluster. Click the name of the cluster.

On the Services tab click the Create button

  • Select the new VPC, the subnets, and click the Edit button to select the new security group
  • Select the new security group

Click the Add to load balancer button to add the container to the load balancer. Select the Application Load Balancer type

  • Select HTTP for the listeners; at the time of writing, the SSL certificate cannot be selected on this page

Create a new listener for testing during the blue-green deployment

Edit the name of the target groups if needed

For now, we don’t set up autoscaling
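
The service creation can also be scripted. A sketch with placeholder values, assuming the target group already exists:

# Create the Fargate service behind the load balancer, using the
# CODE_DEPLOY deployment controller for blue-green deployments
aws ecs create-service \
    --cluster MY_CLUSTER_NAME \
    --service-name MY_SERVICE_NAME \
    --task-definition MY_TASK_FAMILY \
    --desired-count 1 \
    --launch-type FARGATE \
    --deployment-controller type=CODE_DEPLOY \
    --network-configuration "awsvpcConfiguration={subnets=[MY_SUBNET_ID_1,MY_SUBNET_ID_2],securityGroups=[MY_SECURITY_GROUP_ID],assignPublicIp=ENABLED}" \
    --load-balancers "targetGroupArn=MY_TARGET_GROUP_ARN,containerName=MY_CONTAINER_NAME,containerPort=3000"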

Enable HTTPS in the load balancer listeners

Select HTTPS, port 3000, and the certificate

Add 404 to the health check success codes

Socket.IO returns 404 when we call the root path, so add 404 to the target group health check success codes

  • Select the target group name
  • In the Health Check settings panel click the Edit button
  • Click the Advanced Settings arrow

Add 404 to the success codes
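
The same change from the AWS CLI; the target group ARN is a placeholder:

# Accept both 200 and 404 as healthy responses
aws elbv2 modify-target-group \
    --target-group-arn MY_TARGET_GROUP_ARN \
    --matcher HttpCode=200,404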

If this is the first service of the cluster, the wizard will guide you through the Service creation process.

In the AWS console select Elastic Container Service

Click the Get started button

Click the Configure button in the custom configuration

Enter the following values:

  • Container name
  • Image
  • Memory limits (soft limit) = 512
  • Container port = 3000

Click the Advanced container configuration arrow

Add the environment variable NODE_ENV=production

Under Storage and Logging enable Auto-configure CloudWatch Logs

Click the Save button

Keep the default task definition and click Next

Edit the Service definition

Create the load balancer

Add 404 to the health check success codes

When you return from the Load Balancer creation, refresh the Load Balancer list

Keep the Cluster definition and click Next

Click the Create button to create the cluster

When it becomes enabled, click the View service button

Create a CI/CD pipeline and connect it to an ECR repository

Enable HTTPS in the listener

  • Create an SSL certificate in the AWS Certificate Manager
  • Update the load balancer listener to use HTTPS on port 3000, as sketched below
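
A sketch of both steps with the AWS CLI; the domain, listener ARN, and certificate ARN are placeholders:

# Request a certificate with DNS validation
aws acm request-certificate \
    --domain-name "*.mysite.com" \
    --validation-method DNS
# Switch the listener to HTTPS once the certificate is issued
aws elbv2 modify-listener \
    --listener-arn MY_LISTENER_ARN \
    --protocol HTTPS \
    --port 3000 \
    --certificates CertificateArn=MY_CERTIFICATE_ARN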

Host a static web application in AWS S3

We will host our static website in AWS S3.

Install the AWS SDK Node.js module

 npm install aws-sdk

Configure the AWS CLI with the access key and secret key in the ~/.aws/credentials file to access your AWS account.
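
The easiest way to create the credentials file is the aws configure command; the keys below are placeholders:

aws configure
# AWS Access Key ID [None]: MY_ACCESS_KEY_ID
# AWS Secret Access Key [None]: MY_SECRET_ACCESS_KEY
# Default region name [None]: us-east-1
# Default output format [None]: json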

Host the static website of the client in an S3 bucket

Create an S3 bucket using the AWS console

To be able to use Route 53 to route traffic to this S3 bucket, make sure the bucket name matches the website address, like example.com, and that the bucket is created in the region of your choice.

Enable public access to the bucket

Click the bucket name, select the Properties tab and click Static website hosting

Select Use this bucket to host a website

Enter the name of the index and error pages, and copy the URL of the bucket
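
The bucket creation and the website configuration can also be scripted. A sketch; the bucket name matches your domain, and buckets outside us-east-1 also need a LocationConstraint:

# Create the bucket named after the website address
aws s3api create-bucket --bucket example.com --region us-east-1
# Enable static website hosting with the index and error pages
aws s3 website s3://example.com \
    --index-document index.html \
    --error-document error.html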

Add the S3 bucket policy

On the Permissions, Bucket Policy tab enter the bucket policy. Replace MY_BUCKET_NAME in the script with the bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY_BUCKET_NAME/*"
            ]
        }
    ]
}
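
The same policy can be applied from the AWS CLI after saving it to a file. A sketch:

# Apply the public-read policy saved as policy.json
aws s3api put-bucket-policy \
    --bucket MY_BUCKET_NAME \
    --policy file://policy.json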

Upload the client website to the S3 bucket

Copy the contents of the client/dist folder into the bucket. The webpack local test server deletes the contents of the dist folder, so you always have to copy the error.html file there before the upload to S3.

pushd client

# Copy the assets to the dist directory
cp error.html dist/
# Upload to S3
aws s3 cp dist s3://MY_BUCKET_NAME --recursive

popd

Test the static website

Navigate to the address you have copied from the Static website hosting page

Create an SSL certificate

Modern browsers display the “Not secure” message in the address line if the site is not accessed through HTTPS. To use HTTPS we need an SSL certificate.

  • Open the Certificate Manager and click the Request a certificate button
  • Select Request a public certificate

To use the certificate for www.mysite.com or api.mysite.com, create the *.mysite.com wildcard certificate. The wildcard certificate does not cover the bare domain; to attach a certificate to mysite.com, create a separate certificate for mysite.com.
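
A CLI sketch for requesting the certificates; alternatively, a single certificate can cover both the bare domain and the subdomains if you add the wildcard as a subject alternative name. For CloudFront the certificate must be requested in us-east-1.

# Request one certificate for mysite.com and *.mysite.com
aws acm request-certificate \
    --domain-name mysite.com \
    --subject-alternative-names "*.mysite.com" \
    --validation-method DNS \
    --region us-east-1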

Create a CloudFront Distribution

To be able to attach an SSL certificate to the URL we need a CloudFront Distribution in front of the S3 bucket.

  • Open the CloudFront console and click the Create Distribution button
  • Select the Web delivery method
  • Select the S3 bucket which contains the files of the static site
  • Enter the URL of your website into the Alternate Domain Names (CNAMES) field
  • Select the SSL certificate you have created above. Make sure you specify the entry point of the site (index.html) as the Default Root Object

Deploy a new version of a task in an ECS Fargate cluster

To deploy a new version of a Docker container image and launch new tasks with the new version:

Build and push the new Docker image

  • Build the new Docker container image
  • Push the new image to ECR (Elastic Container Registry)

Create a new revision of the ECS Task Definition

Open the ECS section of the AWS Console

On the Amazon ECS page click Clusters and select the cluster

On the Services tab click the Task Definition

On the Task Definition page click the Create new revision button

Scroll down to the Container Definitions section and select the container definition

In the Image field update the Docker image version

Click the Update button at the bottom of the Container page

Click the Create button at the bottom of the Task Definition page

A new task definition revision has been created

Update the Service to use the new Task Definition revision

Go back to the Cluster

On the Services tab select the service

In the upper right corner click the Update button

In the Revision dropdown select the new Task Definition revision

At the bottom of the Configure service page click the Next step button. If you click the “Skip to review” button, the task definition revision is not updated in the service!

Select the CodeDeploy deployment

At the bottom of the Review page click the Update Service button

Click the service name to return to the service
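
If the service uses the rolling (ECS) deployment controller, the whole console flow above can be replaced with one AWS CLI call; the names and the revision number are placeholders:

# Point the service at the new task definition revision
aws ecs update-service \
    --cluster MY_CLUSTER_NAME \
    --service MY_SERVICE_NAME \
    --task-definition MY_TASK_FAMILY:NEW_REVISION_NUMBER
# Watch the deployment progress
aws ecs describe-services \
    --cluster MY_CLUSTER_NAME \
    --services MY_SERVICE_NAME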

Deregister the old Task Definition revision

If we don’t use the blue-green deployment with CodeDeploy, we need to manually deregister the old revision of the task definition to force the service to direct all traffic to the new task definition.

To tell the service to use only the new revision of the Task Definition, deregister the old revision; otherwise both versions will run side-by-side in the service

Return to the Task Definition

Select the old revision of the Task Definition and select Deregister in the drop-down

Click the Deregister button
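
The same step from the AWS CLI; the family and revision number are placeholders:

# Deregister the old revision so new tasks cannot be started from it
aws ecs deregister-task-definition \
    --task-definition MY_TASK_FAMILY:OLD_REVISION_NUMBER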

Check the running tasks

On the Tasks tab of the cluster, only the new revision of the Task Definition should run. If there are open connections to the old revision, its tasks stay in the running state (while the old task definition shows the INACTIVE status) until those connections are closed.

Update the Scheduled Tasks

If you have configured a scheduled task based on the task definition, you need to update the task definition reference to specify the latest revision.

Select the cluster

Select the Scheduled Tasks tab

Select the scheduled task

Click the arrow next to the Target name and check the Task definition revision

To edit the Task definition revision click the Edit button in the upper right corner

In the Schedule targets section click the arrow next to the Target name. The revision will be auto-populated with the latest value.

Click the Update button at the bottom of the page to save the new value.

Click the View scheduled task button to check the revision

Click the arrow next to the Target name and check the revision.

Phaser 3 game sprites are not displayed on iOS, iPad and iPhone

If the Phaser 3 game sprites are not displayed on iOS, change the type in the index.js file to type: Phaser.CANVAS

const config = {
    type: Phaser.CANVAS,
    backgroundColor: '005500',
    parent: "robbers-rummy",
    width:  window.innerWidth,
    height: window.innerHeight,
    scene: [
        Game
    ]
};

In CANVAS mode, Phaser cannot set the background color with this.cameras.main.backgroundColor or this.backgroundColor, so set the background color in the index.js file as seen above.

Attach an AWS EBS volume to a Linux server

Format and mount the volume

List the available disk devices and their mount points

lsblk

The nvme1n1 volume is not yet mounted

Create a partition on the volume

List the existing partitions

fdisk -l

Create a new partition

fdisk /dev/nvme1n1
# enter n to create a new partition and follow the defaults to maximize the drive space used
# enter p to view the partition table
# enter w to write the partition table to the disk

Check the partition list

lsblk

Detect the new partition with

partprobe

To determine whether there is already a file system on the partition

file -s /dev/nvme1n1p1

“data” means no file system

If there is no file system on the volume, create one

mkfs -t xfs /dev/nvme1n1p1
# If the partition already has a file system and you want to overwrite it, use the -f option
mkfs -t xfs -f /dev/nvme1n1p1

If the mkfs tool is not found, install it with yum install xfsprogs

Create a mount point

Create a directory where the volume will be mounted

mkdir /data

Mount the volume to the directory

mount /dev/nvme1n1p1 /data

Automatically mount the volume after reboot

The mount above will not be retained after a reboot. To keep the volume mounted after a reboot, add an entry to the /etc/fstab file

Make a safety copy of the original fstab file

cp /etc/fstab /etc/fstab.orig

Use blkid to find the UUID of the device

blkid

# On Ubuntu 18.04
lsblk -o +UUID

Open the /etc/fstab file in an editor

vim /etc/fstab

Add an entry to the /etc/fstab file for the volume

UUID=7c6cb20b-ada0-4cd7-9c3a-342d6faf87a2  /data  xfs  defaults,nofail  0  2
  • UUID of the device
  • Mount point
  • File system type
  • Recommended file system mount options. The nofail option allows the server to boot even if the volume is not available. On Debian derivatives, including Ubuntu versions earlier than 16.04, the nobootwait option is also necessary

To test that the entry is correct, unmount the volume and use the /etc/fstab file to mount it again

umount /data
mount -a

If there are no errors, the file should be correct.

To list the directory sizes

du -sh *

To empty a file

cat /dev/null > ./MY_LARGE_LOG_FILE

Check the load on the computer

uptime

23:58:50 up 318 days, 16:32, 1 user, load average: 0.03, 5.34, 18.68

The load averages are from the past 1, 5, and 15 minutes. A sustained load average above the number of CPU cores means the server is overloaded; in the example above the load is declining.

has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource

When your website calls the Socket.IO backend API from another domain, the browser console displays the error message

Access to XMLHttpRequest at ‘http://…:3000/socket.io/?EIO=3&transport=polling&t=N7Y-Fot’ from origin ‘http://….com’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.

To enable cross-origin resource sharing (CORS), add this code to the top of your Socket.IO server.js file

const server = require('express')();

// require 'cors'
const cors = require('cors')
// Add CORS before any other routing
server.use(cors());

const http = require('http').createServer(server);
const io = require('socket.io')(http);

Before building the application install the cors package

npm install cors

Uncaught TypeError: _helpers_formUtil__WEBPACK_IMPORTED_MODULE_6___default.a is not a constructor

When you import a Phaser 3 module and run the Node.js web application the following error is displayed

Uncaught TypeError: _helpers_formUtil__WEBPACK_IMPORTED_MODULE_6___default.a is not a constructor

Make sure the imported module and all dependent modules imported by that module have the following class definition format

export default class MY_CLASS_NAME {

Creating a multiplayer online card game with Node.js and Phaser 3

As the world is locked down due to the COVID-19 coronavirus, we are quarantined at home. We miss the company of our families and friends, so online games are the only option to play together. We are going to create an online multiplayer game that can be used for any tabletop gameplay.

The frontend is going to be JavaScript, Node.js, and Phaser 3; the backend is Express and Socket.IO.

The framework for this game came from the great tutorial at How to Build a Multiplayer Card Game with Phaser 3, Express, and Socket.IO

Install a web server for development

I have installed XAMPP from https://www.apachefriends.org/index.html

The home directory where the index.html should be is at C:\xampp\htdocs

Install Node.js

Install the latest version of Node.js from https://nodejs.org/en/download/

Install Phaser3

Install Phaser 3 based on http://phaser.io/download/stable; as of this writing the stable version is installed with

npm install phaser@3.22.0

The getting started guide on Phaser 3 is at Getting Started with Phaser 3

A great game tutorial is at Making your first Phaser 3 game

The multiplayer online game development

Client

To test the client on your workstation, start the Node.js client from a terminal window to display the web page for the players.

cd client
npm install
npm start

The default browser opens with the http://localhost:8080/ address.

Server

To test the server on your workstation, start the multiplayer server in another terminal window

From the root directory above the client, change into the server directory. The npm init command will ask questions and create a new package.json file

cd server
npm init

Install Express, Socket.IO, and Nodemon

npm install --save express socket.io nodemon

Start the server

npm run start
# or
node server.js

Build the application

Stop the client development server; otherwise you will get the error message

Error: EPERM: operation not permitted, lstat ‘…\client\dist\src\assets’

Build the client application

cd client
npm update
npm run build

Deploy the application on the workstation

Copy the assets into the dist directory

cd client
mkdir -p dist/src/assets
cp src/assets/* dist/src/assets

Copy the contents of the client\dist directory to the webserver

mkdir -p C:/xampp/htdocs/rummy
cp -r dist/* C:/xampp/htdocs/rummy

Start the Express server

To provide the Socket.IO functionality, execute in the root of the game development directory

cd server
npm run start

Test the multiplayer application in the local network

Use your workstation as the test server and connect to it from another computer.

Set the Socket.IO server URL

To be able to connect to the same Express server from the workstation and from another computer on the same network change the Socket.IO URL in the client/src/scenes/game.js file.

this.socket = io('http://MY_COMPUTER_IP:3000');

Expose the Express server on your workstation to the local network

Open port 3000 in the Windows firewall.
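
On Windows this can be done from an elevated command prompt; the rule name is arbitrary:

netsh advfirewall firewall add rule name="Socket.IO 3000" dir=in action=allow protocol=TCP localport=3000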

Start the webserver

Open the XAMPP Control Panel and click the Apache Start button

Open the website

In a web browser navigate to http://MY_COMPUTER_IP/rummy/

Build the Docker image

Based on the great post at Dockerizing a Node.js web app

Create a Dockerfile in the server directory

FROM node:12

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# The wildcard will copy both the package.json AND package-lock.json files (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Copy all files
COPY . .

# The server listens on port 3000 by default
EXPOSE 3000

CMD [ "node", "server.js" ]

Build the server Docker image

cd server
docker build -t robbers-rummy-server .

Create a Dockerfile in the client directory. We will use a two-stage build process to make the final image as lean as possible. We build the application in a Node.js container and run it in an Nginx container.

FROM node:12 as builder

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

RUN npm update

RUN npm run build

# Create asset directory
RUN mkdir -p dist/src/assets

# Copy the images to the dist folder
COPY src/assets/* dist/src/assets/

FROM nginx:alpine as runner

WORKDIR /usr/share/nginx/html

COPY --from=builder /usr/src/app/dist/* ./

# Copy the images from the source
RUN mkdir -p src/assets

COPY --from=builder /usr/src/app/src/assets/* src/assets/

EXPOSE 80

Build the client Docker image

cd client
docker build -t robbers-rummy .

Launch the Docker containers

Start the server container from any directory

docker run -p 3000:3000 -d robbers-rummy-server

Start the client website container from any directory

docker run -p 80:80 -d robbers-rummy

Stop the Docker containers

Get the container IDs

docker ps

Stop and remove the containers using the container IDs

docker stop MY_CONTAINER_ID
docker rm MY_CONTAINER_ID

Use Docker Compose

To launch the server and the client website together, we will create a docker-compose.yml file in the root of the application

version: '3'
services:
  api:
    image: robbers-rummy-server
    build: ./server
    networks:
      - backend
    ports:
      - "3000:3000"

  web-cli:
    image: robbers-rummy
    networks:
      - backend
    ports:
      - "80:80"

networks:
  backend:
    driver: bridge

Start the containers with Docker Compose from the application root directory

docker-compose up -d

To stop the containers launched with Docker Compose from the application root directory

docker-compose down

Host the application in AWS

The application consists of two parts: a static website with the game JavaScript files, and the Express server to run Socket.IO

We will host the static website in AWS S3, and the server Docker container in an AWS ECS Fargate cluster.