SQL Server AWS RDS instance ALARM FreeableMemory <=... MB

SQL database servers use the available memory for caching to speed up database operations. If we do not restrict the SQL Server memory usage, the operating system may not have enough memory left to run. This setting is also necessary for an AWS RDS instance, otherwise you will get the alert

ALARM FreeableMemory <=… MB

In AWS we can specify the maximum SQL Server memory dynamically, so every RDS instance type will leave enough memory for the operating system regardless of the amount of available memory. In this example, we will leave 1.5 GB (1536 MB) of memory for the operating system, so the default 1024 MB free memory alarm will not sound.

DBInstanceClassMemory returns the total memory size in bytes, so we need to convert the value to MB to be able to set "max server memory (mb)" to the correct number.
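
For example, on a hypothetical instance class with 16 GiB of memory, DBInstanceClassMemory returns roughly 17179869184 bytes: 17179869184 / 1048576 = 16384 MB, and 16384 - 1536 = 14848 MB would be set as the maximum SQL Server memory.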

If you use Terraform to create your RDS instance, create a script with the aws_db_parameter_group resource to create a Parameter Group in your AWS account. You need to execute it once, as all RDS instances will use the same group.

resource "aws_db_parameter_group" "default" {

  # This name is referenced by parameter_group_name in the aws_db_instance resource
  name        = "custom-sqlserver-se-12-0"
  family      = "sqlserver-se-12.0"
  description = "Set max server memory based on DBInstanceClassMemory"

  parameter {
    name         = "max server memory (mb)"
    value        = "SUM({DBInstanceClassMemory/1048576},-1536)"
    apply_method = "immediate"
  }
}

In the RDS instance creation script, assign the Parameter Group to the RDS instance and increase the timeouts of the create and delete operations to make sure Terraform waits during the creation and deletion process.

resource "aws_db_instance" "default" {
...
  # Add the Max Server Memory parameter group to the instance
  parameter_group_name = "custom-sqlserver-se-12-0"
...
  timeouts {
    create = "120m"
    delete = "120m"
  }
...
}


Terraform provider.aws: no suitable version installed

Newer versions of Terraform do not contain all provider plugins after the application installation. When you try to access a provider for the first time, Terraform may not find a suitable version of the plugin:

* provider.aws: no suitable version installed
version requirements: "(any version)"
versions installed: none

To download the necessary plugins, initialize the Terraform script directory:

terraform init

The command will instruct Terraform to get all referenced modules and download all necessary plugins.


Setup failed: Failed to copy slug dir: lstat /Users: no such file or directory error in Terraform Enterprise

When you try to execute a Terraform configuration in Terraform Enterprise (Atlas) you may get the error message:

Setup failed: Failed to copy slug dir: lstat /Users: no such file or directory

Cause:

The Git repository contains the .terraform/modules directory, so the Terraform Enterprise server cannot get the latest modules from GitHub.

Solution:

  1. Create a .gitignore file in the repository and add these lines:
    */.terraform/modules/
    
    # Ignore DS_Store if working on a mac
    .DS_Store
  2. Delete the .terraform directory and push the changes to GitHub (see the example commands below)
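
A minimal sketch of the cleanup, assuming the .terraform directory sits in the repository root and was committed earlier:

# Stop tracking the committed .terraform directory and remove the local copy
git rm -r --cached .terraform
rm -rf .terraform

# Commit the .gitignore and the removal, then push to GitHub
git add .gitignore
git commit -m "Stop tracking the .terraform directory"
git push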


Create a Terraform Enterprise environment

If you are just starting to work with Terraform Enterprise, you first need to create a Terraform environment.

Preparation

GitHub account

To access GitHub from Terraform Enterprise, create a GitHub team and account with admin access to the GitHub repository that will store the Terraform scripts.

  1. Create a GitHub team that will have admin access to the Terraform script repository,
  2. Create a GitHub user that Terraform Enterprise will use to access the Terraform script repository,
  3. Add the new GitHub user to your GitHub organization,
  4. Log into GitHub with the new user account credentials and wait for the verification email from GitHub,
  5. Open the verification email from GitHub and click the button in the email to verify your email address,
  6. Stay logged into GitHub with the new user account and wait for the invitation email from GitHub to join the organization,
  7. Open the invitation email from GitHub and click the button to accept the invitation,
  8. In GitHub add the new GitHub user to the new team,
  9. Create a GitHub repository to store the Terraform scripts,
  10. Add the new team to the Terraform script repository with admin rights.

If the Terraform modules are in a separate GitHub repository, add the new team to that repository with Read rights.

AWS account

Create a new user account that Terraform Enterprise will use to access AWS:

  1. Create a new AWS user,
  2. Add the following user rights (managed policies):
    1. AmazonEC2FullAccess
    2. AmazonS3FullAccess
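
A minimal sketch with the AWS CLI, assuming a hypothetical user name terraform-enterprise:

# Create the IAM user Terraform Enterprise will use
aws iam create-user --user-name terraform-enterprise

# Attach the managed policies the user needs
aws iam attach-user-policy --user-name terraform-enterprise --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name terraform-enterprise --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create an access key for the user and save the output in a safe place
aws iam create-access-key --user-name terraform-enterprise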

Connect your Atlas account to GitHub

We will authorize Terraform Enterprise to access the Terraform script GitHub repository. We set this connection up in our personal profile in Terraform Enterprise, but when we create new Terraform environments, Terraform Enterprise will use this connection to access the GitHub repository the environment is connected to. Do not use your personal GitHub account to make the connection, because if your personal account loses access to the GitHub repository, the Terraform environment will not be able to connect to it. For this connection, we have already created a new user in GitHub above.

  1. In a web browser log into GitHub with the user account you want Terraform Enterprise to use to access the GitHub repository,
  2. In the same web browser open a new tab and navigate to https://atlas.hashicorp.com/settings/connections and log into Atlas with your Hashicorp account,
  3. Click Connect GitHub to Atlas
  4. On the GitHub authorization page click the Authorize hashicorp button,
  5. Later if you want to unlink Atlas from your GitHub account, in the Personal section select Connections, and click the Unlink button.

Create the GitHub repository

Create a repository in GitHub to store the Terraform config files. You can specify a subdirectory for each Terraform environment to watch, so one repository can serve, for example, both the security group creation and the EC2 instance creation for the same developer group.

  1. Create the GitHub repository,
  2. Create a folder that the Terraform Enterprise environment will monitor for changes.

Create a new environment

    1. In a web browser navigate to https://atlas.hashicorp.com/configurations/import,
    2. Log in with your Atlas user account,
    3. In the dropdown list of the New Environment page (click the text of the dropdown, not the arrow) select Link to GitHub.
      Name the environment to include the type of the server you want to launch and the server environment, for example, test_linux_sandbox.
      To save the environment click the Create & Continue button,
    4. On the left side select Variables,
    5. In the Environment section click the Edit button,
  1. If the environment is linked to the Terraform Enterprise Server

    1. The environment has been created. Click the your account settings page link to create a personal token. It will allow you to access your account without using your username and password.
    2. Enter a description for the token and click the Generate Token button,
    3. Save the token value in a safe place,
    4. To return to the Runs page, select Terraform Enterprise in the upper left corner,
    5. Click the name of the environment you want to work with
    6. Configure the environment. On the left side click Settings
      1. Uncheck Plan on artifact uploads
    7. Store your personal access token on your workstation to be able to access Terraform Enterprise. Open a Bash window and execute the export command:

      export ATLAS_TOKEN=[YOUR AUTHENTICATION TOKEN]
    8. Add the terraform backend configuration to the .tf file on your workstation

      terraform {
        backend "atlas" {
          name    = "YOUR_ORGANIZATION/YOUR_ENVIRONMENT"
          address = "https://atlas.hashicorp.com"
        }
      }
    9. Navigate to the Terraform configuration folder and initialize it with the backend configuration you have entered in the .tf file.
      terraform init
    10. Push the Terraform configuration to the Enterprise server

      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    11. On the left side click Configurations to see the list of configs your organization has pushed to the server.
    12. Run a Terraform Plan to see if your code works
      terraform plan
    13. Execute your Terraform config on your workstation
      terraform apply
    14. Push the result of the run to the Terraform Enterprise server
      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    15. To view the result of the run, on the left side click Runs.
    16. To view the created infrastructure states, on the left side click States.

Create the Terraform scripts


More resources

For a tutorial on Multiple Environments in Terraform see https://github.com/hashicorp/multiple-envs

Terraform Enterprise Administration

Create a new organization

  1. In your web browser navigate to https://atlas.hashicorp.com/help/organizations,
  2. On the left menu bar under Organizations click Create,
  3. In the middle of the screen click the new organization page link,
  4. Enter the email address and user name for the organization owner and click the Create organization button.

Error waiting for instance (i-…) to become ready: unexpected state ‘terminated’, wanted target ‘running’

When you launch a server instance with Terraform, sometimes the error message does not contain the underlying cause. When the cloud provider cannot complete the request, Terraform often displays only a generic error message:

Error waiting for instance (i-...) to become ready: unexpected state 'terminated', wanted target 'running'

To find the root cause of the error in AWS:

  1. Log into the AWS console and navigate to the EC2 section,
  2. Search for the instance by the instance Id,
  3. You can find the error message at the bottom of the Description tab
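
The same state reason is available from the command line. A minimal sketch, assuming the AWS CLI is configured and i-0123456789abcdef0 stands in for the real instance Id:

# Show why the instance left the running state
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].StateReason'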

In our specific case, it was Client.VolumeLimitExceeded: Volume limit exceeded

We had to increase the volume limit to be able to launch additional large EC2 instances.

Keep multiple versions of applications on Macintosh

Most DevOps tools are still in beta; many times the new version is not compatible with your existing scripts or has an error that stops your scripts from working. To be able to keep multiple versions of the applications and easily switch between them, create symbolic links that point to the version you want to execute.

Create version-specific locations

Create a folder for your optional applications.

sudo mkdir -p /opt

Make yourself the owner of the "opt" directory and its children.

YOUR_GROUP should be:

  • MacOS: wheel
  • Ubuntu: your username 
sudo chown -R YOUR_USER_NAME:YOUR_GROUP /opt

Set the security of the folder.

sudo chmod 755 /opt

We will create a directory structure to place each version of the application into its own folder. This example uses Packer.
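
A minimal sketch of the layout, assuming Packer 1.0.0 and that the downloaded archive is in ~/Downloads (hypothetical paths):

# Create a version-specific folder for Packer
mkdir -p /opt/packer/packer_1.0.0

# Unzip the downloaded Packer archive into the version folder
unzip ~/Downloads/packer_1.0.0_darwin_amd64.zip -d /opt/packer/packer_1.0.0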

Create a symbolic link to point to the application in the appropriate version folder. These are the examples for Packer, Terraform, and Vagrant. Notice that the Vagrant application is in the "bin" directory, so the symbolic link has to point there.

If there is an existing symbolic link that points to the latest version of the application, rename it to be able to quickly return to the prior version of the application. This can come in handy when we need to destroy instances created with the prior version of Terraform.

cd /opt/terraform
mv terraform terraform_v_0.9.11

Create a new symbolic link that points to the latest version of the application.

# For Packer
cd /opt/packer
ln -s /opt/packer/packer_1.0.0/packer packer

# For Terraform
cd /opt/terraform
ln -s /opt/terraform/terraform_0.10.0/terraform terraform

# For Vagrant
cd /opt/vagrant
ln -s /opt/vagrant/vagrant_1.9.5/bin/vagrant vagrant #The vagrant application is in the bin folder

Add the main application folder to the path in the config file of your terminal:

  • for iTerm2 with zsh: ~/.zshrc
  • for other terminals: ~/.bash_profile
PATH=/opt/packer:$PATH
PATH=/opt/terraform:$PATH
PATH=/opt/vagrant:$PATH
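
To verify that the symbolic links and the PATH entries work, check which binary the shell resolves and its version; a quick sketch, assuming a Bash profile:

# Reload the terminal configuration, then verify the resolved binary and version
source ~/.bash_profile
which terraform     # should print /opt/terraform/terraform
terraform version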

Vagrant

For Vagrant installation see Vagrant


To change the version you want to execute

  • Delete the symbolic link,
    cd /opt/packer
    rm packer
  • Create a new symbolic link pointing to the other version of the application as above.

Variable ‘…’: duplicate found. Variable names must be unique.

Starting with Terraform version 0.7.3, you can only define a variable once within a directory or module. Before that release, you could copy variable definition files from other modules, and Terraform did not throw an error if you had the same variable defined in multiple files within the module.

In the new version of Terraform, if you define the same variable name multiple times within the module, you will get the following error message:

Variable '...': duplicate found. Variable names must be unique.

Make sure each variable is defined only once within the module or directory.
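
A minimal sketch of the problem, assuming a hypothetical variable named region defined in two files of the same module:

# variables.tf
variable "region" {
  default = "us-east-1"
}

# copied_variables.tf - the second definition of "region" triggers the duplicate error
variable "region" {
  default = "us-east-1"
}

Removing one of the two definitions resolves the error.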

How to migrate the DevOps development environment to another workstation

Git

Move the Git repositories to a new workstation

If you want to move Git repositories to your new workstation

  • Commit and push all repositories to GitHub on the old workstation,
  • Copy the repository folders from your old workstation to the new one,
  • Execute the following command in all Git repositories on the new workstation.
    git reset --hard

Terraform

Copying Terraform scripts from a Windows workstation

If the Terraform scripts reference modules, Terraform creates symbolic links to those modules in the .terraform/modules directories. When you try to copy those symbolic links on a Windows machine, the copy process stops. To copy the Terraform scripts:

  • Start Windows explorer in the folder above the Terraform script folders,
  • Search for .terraform,
  • In the search result list delete the .terraform folders.

Test Kitchen

If you migrate between Windows and Macintosh, you need to update a few paths, because the two operating systems store user-specific files in different places.

The ssh_key location in the .kitchen.yml file should start with

  • On Macintosh: ~/.aws/
  • On Windows: C:\Users\YOUR_USER_NAME\.aws\
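
A minimal sketch of the relevant .kitchen.yml section on Macintosh, assuming the kitchen-ec2 driver and a hypothetical key file named my-key.pem:

transport:
  ssh_key: ~/.aws/my-key.pem

On Windows the same entry would point to C:\Users\YOUR_USER_NAME\.aws\my-key.pem.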

HTTP Request Returned 409 Conflict: Client already exists

The Chef server maintains the list of registered nodes and clients in its database. When you launch a new instance with Chef you may encounter the following error message:

*** Input CHEF_CLIENT_NODE_NAME is undefined, using: IP-0AFE6965
...
*** Starting chef-client
*** Finished chef-client
Printing Log
...
[...] INFO: Client key C:\chef\client.pem is not present - registering
[...] INFO: HTTP Request Returned 409 Conflict: Client already exists
[...] INFO: HTTP Request Returned 403 Forbidden: error
[...] ERROR: Running exception handlers
[...] ERROR: Exception handlers complete
[...] FATAL: Stacktrace dumped to C:/chef/cache/chef-stacktrace.out
[...] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[...] FATAL: Net::HTTPServerException: 403 "Forbidden"

Cause:

In this case the CHEF_CLIENT_NODE_NAME already exists in the client database. You can search for it from a Bash window.

Solution:

Add a line to the Chef configuration to remove the existing node with the same node name.

If you use Terraform to launch server instances and configure Chef, add this line to the Chef provisioner:

recreate_client = true
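
A minimal sketch of the provisioner block, assuming hypothetical connection and Chef server values:

provisioner "chef" {
  node_name  = "IP-0AFE6965"
  server_url = "https://chef.example.com/organizations/example"
  run_list   = ["role[web]"]
  user_name  = "terraform"
  user_key   = "${file("~/.chef/terraform.pem")}"

  # Delete and re-register the client if it already exists on the Chef server
  recreate_client = true
}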

To delete nodes from the Chef server:

Use the knife command to search for the existing node from a Bash window.

To find the node in the Chef server database

knife search node '*:IP-0AFE6965'

To find the client in the Chef server database

knife search client '*:IP-0AFE6965'

If any of them are found, those are leftovers from a previous launch. AWS reuses the IDs, so you have to remove the nodes from the Chef server database when you terminate the instances. To delete the unnecessary entries, use the following commands:

knife node delete 'IP-0AFE6965'
knife client delete 'IP-0AFE6965'

To remove the terminated instances from the Chef server database you can also use the Chef web interface.

  • On the Nodes tab enter the IP address of the instance into the search box and hit Enter
  • In the Actions column click the down arrow next to the instance and select Delete