The installation of this package failed

When we tried to install the Microsoft Access Database Engine and the Office 2007 System Driver on Windows Server 2016, an error message popped up immediately:

The installation of this package failed

When we ran the installation with the logging option, we found this message at the bottom of the log file:

./AccessDatabaseEngine_x64_2010 /passive /log:enginelog.txt

Will create the folder ‘\MSECache\AceRedist\1033’
CActionCreateFolder::execute ends
CActionIf::execute starts
Begin evaluation of the condition
The property ‘SYS.ERROR.INERROR’ is equal to ‘1’

The installer could create the \MSECache\AceRedist\1033 folder on the D: drive where we executed the program, but for some reason the directory remained empty.

We decided to approach the problem in two steps:

  • Extract the installer files to two separate subfolders of the current directory, as the extracted file names are the same for the two packages.
 ./AccessDatabaseEngine_x64_2010 /passive /extract:extract_dir_2010
 ./AccessDatabaseEngine-2007-Office-System-Driver /passive /extract:extract_dir_2007
  • Run the extracted files to install the applications.
cd extract_dir_2010
./AceRedist.msi /passive

cd ../extract_dir_2007
./AceRedist.msi /passive

Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel.

Older PowerShell versions do not use TLS1.2 as the default version during the SSL handshake. When the API requires TLS1.2, this error message appears:

 Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel. 

To force PowerShell to use TLS1.2 during the SSL handshake, issue this command before executing Invoke-WebRequest:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
Invoke-WebRequest ....

Extend a Linux partition

When a Linux disk drive is full, first we need to identify the reason for the overuse. Many times the drive is filled by one large log file that can be identified and truncated.

To list the directory sizes under the current directory execute

du -sh *

To empty a file, overwrite it with nothing; this way the process that writes into it can still access it.

cat /dev/null > ./MY_LARGE_LOG_FILE
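If the cleanup is scripted, the same truncation can be done from Python; a minimal sketch (the file name is a placeholder):

```python
# Truncate the log file in place (placeholder path); opening with mode "w"
# empties the file without replacing it, so a process that already has the
# file open can keep writing to it.
log_path = "./MY_LARGE_LOG_FILE"

with open(log_path, "w"):
    pass
```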

When we free up the disk space, the server needs time to recover and do some housekeeping. The load average numbers show how busy the server was in the recent minutes. Check the load on the computer with

uptime

23:58:50 up 318 days, 16:32, 1 user, load average: 0.03, 5.34, 18.68

The load averages are from the past 1, 5, and 15 minutes.
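The same three numbers are available programmatically on Linux; a minimal Python sketch using the standard library:

```python
import os

# The 1-, 5-, and 15-minute load averages, the same values
# uptime prints at the end of its output line (Unix only)
one_min, five_min, fifteen_min = os.getloadavg()
print(f"load average: {one_min:.2f}, {five_min:.2f}, {fifteen_min:.2f}")
```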

Grow the partition

If extra drive space is needed, enlarge the volume. We also need to grow the partition, so the operating system can access the additional space on the volume.

Grow the partition to use the entire volume. Use the growpart command, and specify the name of the volume and the partition number.

sudo growpart /dev/nvme0n1 1

If the file system is xfs, update the xfs file system metadata to use the entire partition

sudo xfs_growfs /

Configure AWS Route 53 to host a web site in AWS

To fully control the routing of a DNS name, use Route 53.

Create a new Hosted Zone

In the Route 53 console click the Create Hosted Zone button

Create a new public Hosted Zone

Return to the domain registrar's website and set the name servers to the AWS Route 53 hosted zone's name servers

Create a new record set for the domain name

If HTTP is sufficient, you can route directly to the S3 bucket

For HTTPS connections create a CloudFront distribution for the S3 bucket, attach an SSL certificate to it and route to the CloudFront Distribution

To route to an ECS cluster, select the Application load balancer

Error: EPERM: operation not permitted, rename

Error: EPERM: operation not permitted, rename ‘C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json.3604946166’ -> ‘C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json’

In case of this error, rename the C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json file to update-notifier-nodemon.json.ORIGINAL to allow NPM to use the file name as the target of a rename operation.
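If this error happens often, the workaround can be scripted; a minimal Python sketch assuming the path from the error message (MY_USERNAME is a placeholder):

```python
import os

# Path taken from the EPERM error message; adjust MY_USERNAME
config_path = r"C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json"

# Move the file out of the way so NPM can create a fresh one
# as the target of its rename operation
if os.path.exists(config_path):
    os.rename(config_path, config_path + ".ORIGINAL")
```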

Test Socket.IO connectivity

To test the connectivity to the Express Socket.IO server use the following command from a terminal window

npx wscat -c ws://localhost:3000/\?transport=websocket

npx: installed 11 in 4.89s

If the return value is 40, the server listens for new events for the next 30 seconds (pingInterval ms + pingTimeout ms).

You can send one with the command

# Example
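The 30-second window above comes from the handshake parameters the server returns in the Engine.IO "open" packet; a sketch that parses a sample packet (the sid and timeout values are examples, chosen to match the 30-second window):

```python
import json

# Sample Engine.IO "open" packet; the leading "0" is the packet type,
# the rest is JSON
open_packet = '0{"sid":"abc123","upgrades":[],"pingInterval":25000,"pingTimeout":5000}'

handshake = json.loads(open_packet[1:])  # strip the packet-type digit
window_ms = handshake["pingInterval"] + handshake["pingTimeout"]
print(f"the server listens for {window_ms // 1000} seconds between events")
# prints: the server listens for 30 seconds between events
```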

Blue-green deployment in AWS ECS Fargate with CodeDeploy

We will use CodeDeploy to automate the application deployment in our AWS ECS Fargate cluster.

Create an IAM role for CodeDeploy to assume the ECS service role

In the AWS console navigate to IAM and click the Roles link

Click the Create role button

Click the CodeDeploy link

Select CodeDeploy ECS

Keep the default setting

Enter a name for the role

Create the CodeDeploy application

We will use Python and Boto3 to create and configure the CodeDeploy application

Install Python on your workstation

Install Boto3 on your workstation

pip install boto3

Create the appspec.json file

The AppSpec file contains instructions for CodeDeploy to deploy the new version of the application. To get the “taskDefinitionArn” of the Task Definition, execute the command in a terminal

aws ecs describe-task-definition --task-definition MY_TASK_DEFINITION_NAME
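The ARN can also be extracted from the command's JSON output programmatically; a Python sketch assuming the standard response shape (the values are placeholders, and the sample is trimmed to the one field we need):

```python
import json

# Trimmed sample of the describe-task-definition output; the real response
# contains many more fields under "taskDefinition"
output = '''
{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:MY_ACCOUNT_NUMBER:task-definition/MY_TASK_DEFINITION_NAME:3"
  }
}
'''

task_definition_arn = json.loads(output)["taskDefinition"]["taskDefinitionArn"]
print(task_definition_arn)
```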

Save this file as appspec.json

{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "arn:aws:ecs:us-east-1:MY_ACCOUNT_NUMBER:task-definition/MY_TASK_DEFINITION_NAME:MY_REVISION",
          "LoadBalancerInfo": {
            "ContainerName": "MY_ECS_CONTAINER_NAME",
            "ContainerPort": 3000
          }
        }
      }
    }
  ]
}
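Before each deployment the TaskDefinition value in appspec.json has to point at the revision you want to run; a small sketch that rewrites the file (the ARNs and names are the placeholders used above):

```python
import json

# The appspec structure from above, with the placeholders kept
appspec = {
    "version": 0.0,
    "Resources": [
        {
            "TargetService": {
                "Type": "AWS::ECS::Service",
                "Properties": {
                    "TaskDefinition": "arn:aws:ecs:us-east-1:MY_ACCOUNT_NUMBER:task-definition/MY_TASK_DEFINITION_NAME:MY_REVISION",
                    "LoadBalancerInfo": {
                        "ContainerName": "MY_ECS_CONTAINER_NAME",
                        "ContainerPort": 3000,
                    },
                },
            }
        }
    ],
}

# Point the deployment at the task definition revision to deploy
new_arn = "arn:aws:ecs:us-east-1:MY_ACCOUNT_NUMBER:task-definition/MY_TASK_DEFINITION_NAME:4"
appspec["Resources"][0]["TargetService"]["Properties"]["TaskDefinition"] = new_arn

with open("appspec.json", "w") as f:
    json.dump(appspec, f, indent=2)
```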

Create the CodeDeploy application

We will use a Python script with Boto3 to create and configure the CodeDeploy application. Create the file

import boto3

# Update the appspec.json file
# Get the "taskDefinitionArn" with
# aws ecs describe-task-definition --task-definition MY_TASK_DEFINITION_NAME

application_name = 'MY_APPLICATION_NAME'
cluster_name = 'MY_ECS_CLUSTER_NAME'
service_name = 'MY_ECS_SERVICE_NAME'
listener_prod_arn = 'arn:aws:elasticloadbalancing:us-east-1:MY_ACCOUNT_NUMBER:listener/app/MY_LISTENERNAME'
listener_test_arn = 'arn:aws:elasticloadbalancing:us-east-1:MY_ACCOUNT_NUMBER:listener/app/MY_LISTENERNAME'
target_group_1_name = 'MY_PROD_TARGETGROUP_NAME'
target_group_2_name = 'MY_TEST_TARGETGROUP_NAME'
service_role_arn = 'arn:aws:iam::MY_ACCOUNT_NUMBER:role/MY_CODEDEPLOY_ROLE_NAME'
region = 'us-east-1'
termination_wait_minutes = 60
app_spec_file = 'appspec.json'

# ----------------------------------------------------

# Create an SNS topic for the deployment notifications

# Create an SNS client
client = boto3.client('sns', region_name=region)

topic = client.create_topic(Name="notifications")
topic_arn = topic['TopicArn']

# ----------------------------------------------------

# Create a CodeDeploy application using Python/Boto3:

cd_client = boto3.client('codedeploy', region_name=region)
response = cd_client.create_application(
    applicationName='App-' + application_name,
    computePlatform='ECS'
)

# ----------------------------------------------------

# Create a CodeDeploy deployment group using Python/Boto3:

response = cd_client.create_deployment_group(
    applicationName='App-' + application_name,
    deploymentGroupName='Dgp-' + application_name,
    deploymentConfigName='CodeDeployDefault.ECSAllAtOnce',
    serviceRoleArn=service_role_arn,
    # Publish the deployment events to the SNS topic
    triggerConfigurations=[{
        'triggerName': application_name + '-trigger',
        'triggerTargetArn': topic_arn,
        'triggerEvents': [
            'DeploymentSuccess',
            'DeploymentFailure'
        ]
    }],
    # Roll back automatically when the deployment fails
    autoRollbackConfiguration={
        'enabled': True,
        'events': [
            'DEPLOYMENT_FAILURE'
        ]
    },
    deploymentStyle={
        'deploymentType': 'BLUE_GREEN',
        'deploymentOption': 'WITH_TRAFFIC_CONTROL'
    },
    blueGreenDeploymentConfiguration={
        'terminateBlueInstancesOnDeploymentSuccess': {
            'action': 'TERMINATE',
            'terminationWaitTimeInMinutes': termination_wait_minutes
        },
        'deploymentReadyOption': {
            'actionOnTimeout': 'CONTINUE_DEPLOYMENT'
        }
    },
    loadBalancerInfo={
        'targetGroupPairInfoList': [{
            'targetGroups': [
                {'name': target_group_1_name},
                {'name': target_group_2_name}
            ],
            'prodTrafficRoute': {
                'listenerArns': [listener_prod_arn]
            },
            'testTrafficRoute': {
                'listenerArns': [listener_test_arn]
            }
        }]
    },
    ecsServices=[{
        'serviceName': service_name,
        'clusterName': cluster_name
    }]
)

# ----------------------------------------------------

# Create a CodeDeploy deployment:

file = open(app_spec_file)
app_spec = file.read()
file.close()

response = cd_client.create_deployment(
    applicationName='App-' + application_name,
    deploymentGroupName='Dgp-' + application_name,
    revision={
        'revisionType': 'AppSpecContent',
        'appSpecContent': {
            'content': app_spec
        }
    },
    autoRollbackConfiguration={
        'enabled': True,
        'events': [
            'DEPLOYMENT_FAILURE'
        ]
    }
)

Run the script

Execute the above script with

python .\

Monitor the deployment

If the script successfully created the CodeDeploy application, the first deployment starts automatically.

In CodeDeploy

  • In the AWS console open the CodeDeploy page
  • Select Applications
  • Select the application name
  • On the Deployments tab select the deployment
  • Check the deployment status

In the ECS cluster

  • In the AWS console select the cluster and the service
  • Select the Deployments tab
  • CodeDeploy starts to launch the new, Replacement task
  • At this point the prod and test listeners of the load balancer both point to the old task version
  • When the new task has started, 100% of the traffic is still routed to the old version
  • The load balancer’s Test listener starts to route traffic to the new task behind target group “b”
  • When the deployment succeeds and none of the specified Hook Lambdas (if any) returns failure, both the Test and Production traffic are routed to the new task version
  • The old (blue) task stays active for the time span we specified in the “termination_wait_minutes” variable of the Python script. During that time we can click the Stop and roll back deployment button to restore the prior version of the task.
  • While the old (blue) task is still available, the deployment is still “running”. To be able to start a new deployment we need to click the “Terminate original task set” button.
  • When the wait time is over, the old deployment terminates in the service


If you get the error message

AWS CodeDeploy does not have the permissions required to assume the role …

make sure you have used the correct role ARN from the IAM console

Deployment fails with error code 404

If you deploy a Socket.IO server make sure you add 404 to the valid Success Codes in both Load Balancer target groups.

Host the server container in an AWS ECS Fargate cluster

We have already created a Docker image for the server using Nginx. We will create an AWS ECS Fargate cluster in AWS and host the container there.

Create an ECR repository for the image

Select the Elastic Container Registry

Create a new repository

Enter a name, enable Tag immutability and Scan on push

Select the repository you just created and click the View push commands button

Follow the instructions on the next page to authenticate in the registry, build your Docker image and push it to the registry.

 # Authenticate in ECR
 aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com
 # Build the image
 docker build -t MY_DOCKER_IMAGE_NAME .
 # Tag the image
 docker tag MY_DOCKER_IMAGE_NAME:latest MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_ECR_REPOSITORY_NAME:latest
 # Push the image
 docker push MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_ECR_REPOSITORY_NAME:latest

If this is the first ECS cluster of the account the Getting Started button launches the ECS Wizard. See Using the ECS wizard to create the cluster, service, and task definition below.

Create the ECS cluster

Create a new ECS cluster in the new VPC

  • Select the Fargate cluster template

For production clusters, add a third subnet for redundancy. This way, if one of the availability zones develops issues, the cluster can use the third subnet for high availability.

For production clusters also enable Container Insights for advanced logging

Create a security group

Create a security group in the new VPC with an ingress rule for the necessary port and protocol. Open ports 3000-3001: production and test traffic for the blue-green deployment.

Create an Application Load Balancer

Create a new Application Load Balancer in the new VPC, but do not add any listeners and target groups. Those will be created by the ECS Fargate Service creation.

This is fine; we don't need listeners now.

Add the security group to the Load Balancer.

We have to create a temporary target group; we will delete it later.

Do not register any targets; the ECS service creation process will create the target group and register the target.

Create an ECS Task Definition

We will use the task definition when we create the Service

In this example, we will create a Fargate Task Definition

Select the memory and CPU sizes, and click the Add container button

Configure the container

Set the environment variables

Create a service role for CodeDeploy

Create a service role for CodeDeploy in the IAM console.

Create the service

Create a new Fargate service in the new cluster. Click the name of the cluster.

On the Services tab click the Create button

  • Select the new VPC, the subnets, and click the Edit button to select the new security group
  • Select the new security group

Click the Add to load balancer button to add the container to the load balancer. Select the Application Load Balancer type

  • Select HTTP for the listeners; for some reason, at the time of writing we cannot select the SSL certificate on this page

Create a new listener for testing during the blue-green deployment

Edit the name of the target groups if needed

For now, we don’t set up autoscaling

Enable HTTPS in the load balancer listeners

Select HTTPS, port 3000, and the certificate

Add 404 to the health check success codes

Socket.IO returns 404 when we call the root path, so add 404 to the target group health check success codes

  • Select the target group name
  • In the Health Check settings panel click the Edit button
  • Click the Advanced Settings arrow

Add 404 to the success codes

If this is the first service of the cluster, the wizard will guide you through the Service creation process.

In the AWS console select Elastic Container Service

Click the Get started button

Click the Configure button in the custom configuration

Enter the following:

  • Container name
  • Image
  • Memory limits (soft limit) = 512
  • Container port = 3000

Click the Advanced container configuration arrow

Add the environment variable NODE_ENV=production

Under Storage and Logging enable Auto-configure CloudWatch Logs

Click the Save button

Keep the default task definition and click Next

Edit the Service definition

Create the load balancer

Add 404 to the health check success codes

When you return from the Load Balancer creation refresh the Load Balancer list

Keep the Cluster definition and click Next

Click the Create button to create the cluster

When enabled, click the View service button

Create a CI/CD pipeline and connect it to an ECR repository

Enable HTTPS in the listener

  • Create an SSL certificate in the AWS Certificate Manager
  • Update the load balancer listener to use HTTPS on port 3000

Host a static web application in AWS S3

We will host our static website in AWS S3.

Install the AWS SDK Node.js module

 npm install aws-sdk

Configure the AWS CLI with the access key and secret key in the ~/.aws/credentials file to access your AWS account.

Host the static website of the client in an S3 bucket

Create an S3 bucket using the AWS console

To be able to use Route 53 to route traffic to this S3 bucket, make sure you name the bucket to match the website address, and that it is created in the region of your choice.

Enable public access to the bucket

Click the bucket name, select the Properties tab and click Static website hosting

Select Use this bucket to host a website

Enter the name of the index and error pages, and copy the URL of the bucket

Add the S3 bucket policy

On the Permissions, Bucket Policy tab enter the bucket policy. Replace MY_BUCKET_NAME in the script with the bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY_BUCKET_NAME/*"
            ]
        }
    ]
}

Upload the client website to the S3 bucket

Copy the contents of the client/dist folder into the bucket. The webpack local test server deletes the contents of the dist folder, so you always have to copy the error.html file there before the upload to S3.

pushd client

# Copy the assets to the dist directory
cp error.html dist/
# Upload to S3
aws s3 cp dist s3://MY_BUCKET_NAME --recursive


Test the static website

Navigate to the address you have copied from the Static website hosting page

Create an SSL certificate

Modern browsers display the “Not secure” message in the address line if the site is not accessed through HTTPS. To use HTTPS we need an SSL certificate.

  • Open the Certificate Manager and click the Request a certificate button
  • Select Request a public certificate

To use the certificate for subdomains, create a wildcard (*.your-domain) certificate. The wildcard certificate does not cover the bare domain name without a subdomain; to attach a certificate to the bare domain, create a separate certificate for it.

Create a CloudFront Distribution

To be able to attach an SSL certificate to the URL we need a CloudFront Distribution in front of the S3 bucket.

  • Open the CloudFront console and click the Create Distribution button
  • Select the Web delivery method
  • Select the S3 bucket which contains the files of the static site
  • Enter the URL of your website into the Alternate Domain Names (CNAMES) field
  • Select the SSL certificate you have created above. Make sure you specify the entry point of the site (index.html) as the Default Root Object

Deploy a new version of a task in an ECS Fargate cluster

To deploy the new version of a Docker container image and launch new tasks with the new version

Build and push the new Docker image

  • Build the new Docker container image
  • Push the new image to ECR (Elastic Container Registry)

Create a new revision of the ECS Task Definition

Open the ECS section of the AWS Console

On the Amazon ECS page click Clusters and select the cluster

On the Services tab click the Task Definition

On the Task Definition page click the Create new revision button

Scroll down to the Container Definitions section and select the container definition

In the Image field update the Docker image version

Click the Update button at the bottom of the Container page

Click the Create button at the bottom of the Task Definition page

A new task definition revision has been created

Update the Service to use the new Task Definition revision

Go back to the Cluster

On the Services tab select the service

In the upper right corner click the Update button

In the Revision dropdown select the new Task Definition revision

At the bottom of the Configure service page click the Next step button. If you click the “Skip to review” button, the task definition revision is not updated in the service!

Select the CodeDeploy deployment

At the bottom of the Review page click the Update Service button

Click the service name to return to the service

Deregister the old Task Definition revision

If we don’t use the blue-green deployment with CodeDeploy, we need to manually deregister the old revision of the task definition to force the service to direct all traffic to the new task definition.

To tell the service to use only the new revision of the Task Definition, deregister the old revision; otherwise both versions will run side-by-side in the service

Return to the Task Definition

Select the old revision of the Task Definition and select Deregister in the drop-down

Click the Deregister button

Check the running tasks

On the Tasks tab of the cluster, only the new revision of the Task Definition should run. If there are open connections to the old revision, it stays in running state with the INACTIVE status until those connections are closed.

Update the Scheduled Tasks

If you have configured a scheduled task based on the task definition you need to update the task definition reference to specify the latest revision.

Select the cluster

Select the Scheduled Tasks tab

Select the scheduled task

Click the arrow next to the Target name and check the Task definition revision

To edit the Task definition revision click the Edit button in the upper right corner

In the Schedule targets section click the arrow next to the Target name. The revision will be auto-populated with the latest value.

Click the Update button at the bottom of the page to save the new value.

Click the View scheduled task button to check the revision

Click the arrow next to the Target name and check the revision.