MSSQL database migration to another database server

When a database is migrated to another server by copying the database file or restoring it from a backup file, the original database user accounts are carried over with it.

Those accounts reference security IDs specific to the original database server, so they become "orphaned" on the new server.

To provide access to the restored database on the new database server, delete the old user accounts from the restored database and configure database access on the new server. This re-creates the user accounts with the correct IDs in the restored database.

For more information see https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/troubleshoot-orphaned-users-sql-server?view=sql-server-ver15
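For illustration, a minimal Python sketch of the cleanup, assuming the pyodbc package and a SQL Server ODBC driver are installed (the server, database, and driver names are placeholders):

import pyodbc  # assumption: pip install pyodbc

# Connect to the restored database on the new server (placeholder names)
connection = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=NEW_SERVER;"
    "DATABASE=RESTORED_DB;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = connection.cursor()

# List the SQL-authenticated database users whose security IDs have no
# matching login on the new server (the orphaned users)
cursor.execute("""
    SELECT dp.name
    FROM sys.database_principals AS dp
    LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
    WHERE sp.sid IS NULL
      AND dp.authentication_type_desc = 'INSTANCE'
""")
for (name,) in cursor.fetchall():
    print("Dropping orphaned user:", name)
    cursor.execute(f"DROP USER [{name}]")

# Afterwards, configure the database access on the new server to
# re-create the users with the correct IDs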

RuntimeError: Volume vol-… attached at xvdf but does not conform to this resource’s specifications

When the Chef aws cookbook’s ebs_volume.rb resource tries to bring a volume online, partition, and format it, we get this error message:

RuntimeError: Volume vol-… attached at xvdf but does not conform to this resource's specifications
C:/chef/cache/cookbooks/aws/resources/ebs_volume.rb:46:in `block in class_from_file'

Make sure the “size” attribute value in the aws_ebs_volume resource call matches the actual size of the volume in GiB.
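To verify the actual volume size, one option is a quick Boto3 lookup (a sketch; the region and volume ID are placeholders, use the volume ID reported in the error message):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
# Placeholder volume ID; use the one from the error message
volume = ec2.describe_volumes(VolumeIds=['vol-0123456789abcdef0'])['Volumes'][0]
print(volume['Size'])  # size in GiB; the "size" attribute must match this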

The installation of this package failed

When we tried to install the Microsoft Access Database Engine and the Office 2007 System Driver on a Windows Server 2016 instance, an error message popped up immediately:

The installation of this package failed

When we ran the installation with the logging option, we found this message at the bottom of the log file:

./AccessDatabaseEngine_x64_2010 /passive /log:enginelog.txt

Will create the folder '\MSECache\AceRedist\1033'
CActionCreateFolder::execute ends
CActionIf::execute starts
Begin evaluation of the condition
The property 'SYS.ERROR.INERROR' is equal to '1'

The installer could create the \MSECache\AceRedist\1033 folder on the D: drive where we executed the program, but for some reason the directory remained empty.

We decided to approach the problem in two steps:

  • Extract the installer files to two separate subfolders of the current directory, as the extracted file names are the same for the two packages.

 ./AccessDatabaseEngine_x64_2010 /passive /extract:extract_dir_2010
 ./AccessDatabaseEngine-2007-Office-System-Driver /passive /extract:extract_dir_2007

  • Run the extracted files to install the applications.

cd extract_dir_2010
./AceRedist.msi /passive

cd ../extract_dir_2007
./AceRedist.msi /passive

Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel.

Older PowerShell versions do not use TLS 1.2 by default during the SSL handshake. When the API requires TLS 1.2, this error message appears:

 Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel. 

To force PowerShell to use TLS 1.2 during the SSL handshake, issue this command before executing Invoke-WebRequest:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
Invoke-WebRequest ....

Extend a Linux partition

When a Linux disk drive is full, first identify the reason for the overuse. Many times the drive is filled with one large log file that can be identified and truncated.

To list the directory sizes under the current directory, execute

du -sh *

To empty a file, overwrite it with nothing, so the process that writes into it can still access it:

cat /dev/null > ./MY_LARGE_LOG_FILE

After we free up the disk space, the server needs time to recover and do some housekeeping. The load average numbers show how busy the server has been over the recent minutes. Check the load on the computer with

uptime

23:58:50 up 318 days, 16:32, 1 user, load average: 0.03, 5.34, 18.68

The load averages are for the past 1, 5, and 15 minutes; in the example above the load is already subsiding.

Grow the partition

If extra drive space is needed, enlarge the volume. We also need to grow the partition, so the operating system can access the additional space on the volume.

Grow the partition to use the entire volume with the growpart command, specifying the device name and the partition number.

sudo growpart /dev/nvme0n1 1

If the file system is XFS, update the file system metadata to use the entire partition:

sudo xfs_growfs /

Configure AWS Route 53 to host a web site in AWS

To fully control the routing of a DNS name, use Route 53.

Create a new Hosted Zone

In the Route 53 console click the Create Hosted Zone button

Create a new public Hosted Zone

Return to the domain registrar’s website and set the name servers to use the AWS Route 53 hosted zone’s name servers.

Create a new record set for the domain name.

If HTTP is sufficient, you can route directly to the S3 bucket.

For HTTPS connections, create a CloudFront distribution for the S3 bucket, attach an SSL certificate to it, and route to the CloudFront distribution.

To route to an ECS cluster, select the Application Load Balancer.
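As a Boto3 sketch, this is how an alias record pointing to a CloudFront distribution could be created (the hosted zone ID, record name, and distribution domain name are placeholders):

import boto3

route53 = boto3.client('route53')
route53.change_resource_record_sets(
    HostedZoneId='MY_HOSTED_ZONE_ID',
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.MY_DOMAIN_NAME.com',
                'Type': 'A',
                'AliasTarget': {
                    # Z2FDTNDATAQYW2 is the fixed hosted zone ID of CloudFront
                    'HostedZoneId': 'Z2FDTNDATAQYW2',
                    'DNSName': 'MY_DISTRIBUTION_DOMAIN.cloudfront.net',
                    'EvaluateTargetHealth': False
                }
            }
        }]
    }
)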

Error: EPERM: operation not permitted, rename

Error: EPERM: operation not permitted, rename 'C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json.3604946166' -> 'C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json'

In case of this error, rename C:\Users\MY_USERNAME\.config\configstore\update-notifier-nodemon.json to update-notifier-nodemon.json.ORIGINAL to allow NPM to use the file name as the target of the rename operation.

Test Socket.IO connectivity

To test the connectivity to the Express Socket.IO server, use the following command from a terminal window:

npx wscat -c ws://localhost:3000/socket.io/\?transport=websocket

npx: installed 11 in 4.89s
0{"sid":"vOQHhey-0EYPaAvVAAAA","upgrades":[],"pingInterval":25000,"pingTimeout":5000}
40

If the return value is 40, the server listens for new events for the next 30 seconds (pingInterval + pingTimeout milliseconds).

You can send an event with the command

42["EVENT_NAME", "ATTRIBUTE1", "ATTRIBUTE2"]

# Example
42["version"]

Blue-green deployment in AWS ECS Fargate with CodeDeploy

We will use CodeDeploy to automate the application deployment in our AWS ECS Fargate cluster.

Create an IAM role for CodeDeploy to assume the ECS service role

In the AWS console navigate to IAM and click the Roles link

Click the Create role button

Click the CodeDeploy link

Select CodeDeploy ECS

Keep the default setting

Enter a name for the role

Create the CodeDeploy application

We will use Python and Boto3 to create and configure the CodeDeploy application

Install Python on your workstation

Install Boto3 on your workstation

pip install boto3

Create the appspec.json file

The AppSpec file contains instructions for CodeDeploy to deploy the new version of the application. To get the "taskDefinitionArn" of the Task Definition, execute this command in a terminal:

aws ecs describe-task-definition --task-definition MY_TASK_DEFINITION_NAME
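If you prefer to stay in Python, the same lookup is available with Boto3 (a sketch; the region and task definition name are placeholders):

import boto3

ecs = boto3.client('ecs', region_name='us-east-1')
task_def = ecs.describe_task_definition(taskDefinition='MY_TASK_DEFINITION_NAME')
print(task_def['taskDefinition']['taskDefinitionArn'])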

Save this file as appspec.json

{
  "version": 0.0,
  "Resources": [
      {
          "TargetService": {
              "Type": "AWS::ECS::Service",
              "Properties": {
                  "TaskDefinition": "arn:aws:ecs:us-east-1:MY_ACCOUNT_NUMBER:task-definition/MY_TASK_DEFINITION_NAME:MY_REVISION",
                  "LoadBalancerInfo": {
                      "ContainerName": "MY_ECS_CONTAINER_NAME",
                      "ContainerPort": 3000
                  }
              }
          }
      }
  ]
}

Create the CodeDeploy application

We will use a Python script with Boto3 to create and configure the CodeDeploy application. Create the file create-codedeploy.py:

import boto3

# Update the appspec.json file
# Get the "taskDefinitionArn" with
# aws ecs describe-task-definition --task-definition MY_TASK_DEFINITION_NAME

application_name = 'MY_APPLICATION_NAME'
cluster_name = 'MY_ECS_CLUSTER_NAME'
service_name = 'MY_ECS_SERVICE_NAME'
listener_prod_arn = 'arn:aws:elasticloadbalancing:us-east-1:MY_ACCOUNT_NUMBER:listener/app/MY_PROD_LISTENER'
listener_test_arn = 'arn:aws:elasticloadbalancing:us-east-1:MY_ACCOUNT_NUMBER:listener/app/MY_TEST_LISTENER'
target_group_1_name = 'MY_PROD_TARGETGROUP_NAME'
target_group_2_name = 'MY_TEST_TARGETGROUP_NAME'
service_role_arn = 'arn:aws:iam::MY_ACCOUNT_NUMBER:role/MY_CODEDEPLOY_ROLE_NAME'
region = 'us-east-1'
termination_wait_minutes = 60
app_spec_file = 'appspec.json'

# Create an SNS topic for deployment notifications
client = boto3.client(
    "sns",
    region_name=region
)

topic = client.create_topic(Name="notifications")
topic_arn = topic['TopicArn']

# ----------------------------------------------------

# Create a CodeDeploy application using Python/Boto3:

cd_client = boto3.client('codedeploy')
response = cd_client.create_application(
    applicationName='App-' + application_name,
    computePlatform='ECS'
)


# ----------------------------------------------------

# Create a CodeDeploy deployment group using Python/Boto3:

response = cd_client.create_deployment_group(
    applicationName='App-' + application_name,
    deploymentGroupName='Dgp-' + application_name,
    deploymentConfigName='CodeDeployDefault.ECSAllAtOnce',
    serviceRoleArn=service_role_arn,
    triggerConfigurations=[
        {
            'triggerName': application_name + '-trigger',
            'triggerTargetArn': topic_arn,
            'triggerEvents': [
                'DeploymentStart',
                'DeploymentSuccess',
                'DeploymentFailure',
                'DeploymentStop',
                'DeploymentRollback',
                'DeploymentReady'
            ]
        },
    ],
    autoRollbackConfiguration={
        'enabled': True,
        'events': [
            'DEPLOYMENT_FAILURE',
            'DEPLOYMENT_STOP_ON_ALARM',
            'DEPLOYMENT_STOP_ON_REQUEST',
        ]
    },
    deploymentStyle={
        'deploymentType': 'BLUE_GREEN',
        'deploymentOption': 'WITH_TRAFFIC_CONTROL'
    },
    blueGreenDeploymentConfiguration={
        'terminateBlueInstancesOnDeploymentSuccess': {
            'action': 'TERMINATE',
            'terminationWaitTimeInMinutes': termination_wait_minutes
        },
        'deploymentReadyOption': {
            'actionOnTimeout': 'CONTINUE_DEPLOYMENT'
        }
    },
    loadBalancerInfo={
        'targetGroupPairInfoList': [
            {
                'targetGroups': [
                    {'name': target_group_1_name},
                    {'name': target_group_2_name}
                ],
                'prodTrafficRoute': {
                    'listenerArns': [listener_prod_arn]
                },
                'testTrafficRoute': {
                    'listenerArns': [listener_test_arn]
                }
            },
        ]
    },
    ecsServices=[
        {
            'serviceName': service_name,
            'clusterName': cluster_name
        }
    ]
)

# ----------------------------------------------------

# Create a CodeDeploy deployment:

with open(app_spec_file) as file:
    app_spec = file.read()

response = cd_client.create_deployment(
    applicationName='App-' + application_name,
    deploymentGroupName='Dgp-' + application_name,
    revision={
      'revisionType': 'AppSpecContent',
      'appSpecContent': {
        'content': app_spec
      }
    },
    ignoreApplicationStopFailures=False,
    autoRollbackConfiguration={
      'enabled': True,
      'events': [
        'DEPLOYMENT_FAILURE',
        'DEPLOYMENT_STOP_ON_ALARM',
        'DEPLOYMENT_STOP_ON_REQUEST'
      ]
    }
)

Run the script to create the CodeDeploy application

Execute the above script with

python .\create-codedeploy.py

Monitor the deployment

If the script successfully created the CodeDeploy application, the first deployment starts automatically.
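The deployment status can also be polled from Python with Boto3 (a sketch; the deployment ID placeholder is returned as 'deploymentId' in the create_deployment response above):

import time
import boto3

cd_client = boto3.client('codedeploy', region_name='us-east-1')
deployment_id = 'MY_DEPLOYMENT_ID'  # from the create_deployment response
while True:
    info = cd_client.get_deployment(deploymentId=deployment_id)['deploymentInfo']
    print(info['status'])
    if info['status'] in ('Succeeded', 'Failed', 'Stopped'):
        break
    time.sleep(15)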

In CodeDeploy

  • In the AWS console open the CodeDeploy page
  • Select Applications
  • Select the application name
  • On the Deployments tab select the deployment
  • Check the deployment status

In the ECS cluster

  • In the AWS console select the cluster and the service
  • Select the Deployments tab
  • CodeDeploy starts to launch a new, Replacement task
  • At this point the prod and test listeners of the load balancer both point to the old task version
  • When the new task has started, 100% of the traffic is still routed to the old version
  • The load balancer’s Test listener starts to route traffic to the new task behind target group “b”
  • When the deployment succeeds and none of the specified hook Lambdas (if any) return failure, the Test and Production traffic are both routed to the new task version
  • The old (blue) task stays active for the time span we specified in the “termination_wait_minutes” variable of the Python script. During that time we can click the Stop and roll back deployment button to restore the prior version of the task.
  • While the old (blue) task is still available, the deployment is still “running”. To be able to start a new deployment, we need to click the “Terminate original task set” button.
  • When the wait time is over, the old deployment terminates in the service

Troubleshooting

If you get the error message

AWS CodeDeploy does not have the permissions required to assume the role …

make sure you have used the correct ARN of the CodeDeploy service role you created above.

Deployment fails with error code 404

If you deploy a Socket.IO server, make sure you add 404 to the valid success codes in both load balancer target groups.
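One possible way to apply this change with Boto3 (a sketch; the target group ARNs are placeholders):

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')
# Placeholder ARNs of the two target groups of the blue-green deployment
for tg_arn in ['MY_PROD_TARGETGROUP_ARN', 'MY_TEST_TARGETGROUP_ARN']:
    elbv2.modify_target_group(
        TargetGroupArn=tg_arn,
        Matcher={'HttpCode': '200,404'}  # treat 404 as a health check success
    )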

Host the server container in an AWS ECS Fargate cluster

We have already created a Docker image for the server using Nginx. We will create an AWS ECS Fargate cluster and host the container there.

Create an ECR repository for the image

Select the Elastic Container Registry

Create a new repository

Enter a name, enable Tag immutability and Scan on push

Select the repository you just created and click the View push commands button

Follow the instructions on the next page to authenticate in the registry, build your Docker image and push it to the registry.

 # Authenticate in ECR
 aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com
 # Build the image
 docker build -t MY_DOCKER_IMAGE_NAME .
 # Tag the image ($1 is the image tag passed to the script)
 docker tag MY_DOCKER_IMAGE_NAME:latest MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_DOCKER_IMAGE_NAME:$1
 # Push the image
 docker push MY_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY_DOCKER_IMAGE_NAME:$1

If this is the first ECS cluster of the account, the Getting Started button launches the ECS Wizard. See Using the ECS wizard to create the cluster, service, and task definition below.

Create the ECS cluster

Create a new ECS cluster in the new VPC

  • Select the Fargate cluster template

For production clusters, add a third subnet for redundancy. This way, if one of the availability zones develops issues, the cluster can use the third subnet for high availability.

For production clusters, also enable Container Insights for advanced logging.

Create a security group

Create a security group in the new VPC with an ingress rule for the necessary port and protocol. Open ports 3000-3001: one for production and one for test traffic during the blue-green deployment.
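A sketch of the same ingress rule with Boto3 (the security group ID and the CIDR range are placeholders; restrict the range as needed):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
ec2.authorize_security_group_ingress(
    GroupId='MY_SECURITY_GROUP_ID',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3000,  # production traffic
        'ToPort': 3001,    # test traffic during the blue-green deployment
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
    }]
)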

Create an Application Load Balancer

Create a new Application Load Balancer in the new VPC, but do not add any listeners or target groups. Those will be created during the ECS Fargate service creation.

This is fine; we don’t need the listeners now.

Add the security group to the Load Balancer.

We have to create a temporary target group; we will delete it later.

Do not register any targets, the ECS service creation process will create the target group and register the target.
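If you prefer to create the temporary target group from Python, a Boto3 sketch (the VPC ID and group name are placeholders):

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')
response = elbv2.create_target_group(
    Name='temporary-target-group',  # we will delete this group later
    Protocol='HTTP',
    Port=3000,
    VpcId='MY_VPC_ID',
    TargetType='ip'  # Fargate tasks are registered by IP address
)
print(response['TargetGroups'][0]['TargetGroupArn'])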

Create an ECS Task Definition

We will use the task definition when we create the service.

In this example, we will create a Fargate Task Definition

Select the memory and CPU sizes, and click the Add container button.

Configure the container

Set the environment variables

Create a service role for CodeDeploy

Create a service role for CodeDeploy in the IAM console.

Create the service

Create a new Fargate service in the new cluster. Click the name of the cluster.

On the Services tab click the Create button

  • Select the new VPC, the subnets, and click the Edit button to select the new security group
  • Select the new security group

Click the Add to load balancer button to add the container to the load balancer. Select the Application Load Balancer type

  • Select HTTP for the listeners; at the time of writing, we cannot select the SSL certificate on this page for some reason

Create a new listener for testing during the blue-green deployment

Edit the name of the target groups if needed

For now, we don’t set up autoscaling

Enable HTTPS in the load balancer listeners

Select HTTPS, port 3000, and the certificate

Add 404 to the health check success codes

Socket.IO returns 404 when we call the root path, so add 404 to the target group health check success codes.

  • Select the target group name
  • In the Health Check settings panel click the Edit button
  • Click the Advanced Settings arrow

Add 404 to the success codes

If this is the first service of the cluster, the wizard will guide you through the Service creation process.

In the AWS console select Elastic Container Service

Click the Get started button

Click the Configure button in the custom configuration

Enter the following:

  • Container name
  • Image
  • Memory limits (soft limit) = 512
  • Container port = 3000

Click the Advanced container configuration arrow

Add the environment variable NODE_ENV=production

Under Storage and Logging enable Auto-configure CloudWatch Logs

Click the Save button

Keep the default task definition and click Next

Edit the Service definition

Create the load balancer

Add 404 to the health check success codes

When you return from the Load Balancer creation, refresh the Load Balancer list.

Keep the Cluster definition and click Next

Click the Create button to create the cluster

When the button becomes enabled, click the View service button.

Create a CI/CD pipeline and connect it to an ECR repository

Enable HTTPS in the listener

  • Create an SSL certificate in the AWS Certificate Manager
  • Update the load balancer listener to use HTTPS on port 3000
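A sketch of the listener update with Boto3 (the listener and certificate ARNs are placeholders):

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')
elbv2.modify_listener(
    ListenerArn='MY_LISTENER_ARN',
    Protocol='HTTPS',
    Port=3000,
    Certificates=[{'CertificateArn': 'MY_ACM_CERTIFICATE_ARN'}],
    SslPolicy='ELBSecurityPolicy-2016-08'
)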