Execute Terraform scripts with Octopus

Terraform is a powerful, free command-line tool to launch servers in any cloud or virtual machine environment. HashiCorp, the creator of Terraform, recently introduced the paid Terraform Enterprise server, which orchestrates the execution of Terraform scripts.

Octopus Deploy is another tool that can orchestrate Terraform scripts; the functionality was added in version 2018.3.

In this example, we will set up a Multi-Tenant Octopus project to launch servers in AWS with Terraform scripts and configure them with Chef.

Terraform script location

Create a GitHub repository for the Terraform script

GitHub is the most flexible way to store Terraform scripts. Octopus can download and execute the same script in multiple Octopus projects, which eliminates code duplication and keeps the scripts available for execution from the command line.

Create a GitHub user with read access to the repository

If the GitHub repository is private, Octopus needs a username and password to access it.

Tag the source code; Octopus needs a version number to refer to a specific version of your script:

git tag 1.0.0
git push --tags

Automation with Octopus

We will create a Multi-Tenant Octopus project to be able to use the same Octopus project for multiple server types in multiple AWS environments.

We will create

  • an Octopus project group to hold the AWS launch projects,
  • a generic Octopus project to launch an AWS EC2 instance,
  • an operating system-specific Octopus project that calls the generic project,
  • environments for each AWS account, application, and application environment,
  • tenants for each server.

Create the Environments

  1. On the Infrastructure Octopus page select Environments
  2. On the Environments page click the ADD ENVIRONMENT button
  3. Create the environments you need (aws-dev, aws-qa, aws-uat, aws-prod)


Create a Lifecycle

The lifecycle governs how the project is promoted between environments.

  1. On the Library Octopus page select Lifecycles
  2. Click the ADD LIFECYCLE button
  3. Enter the name and description of the Lifecycle and click the ADD PHASE button

  4. Enter a name for the Phase and click the ADD ENVIRONMENT link to select the environment
  5. Select the environment and the Users will need to manually… radio button
  6. Keep the All environments must be deployed… option and click the Save button.

Create a project group

The project group holds your launch projects together.

  1. On the Projects Octopus page select ADD GROUP
  2. Enter a name and description for the project group

Create a GitHub feed

This feed allows Octopus to connect to GitHub.

  1. On the Library Octopus page select External Feeds
  2. Click the ADD FEED button
  3. Select the GitHub Repository Feed type, enter a name, and copy the recommended URL. Enter the name and password of the GitHub user you created above, and click the SAVE AND TEST link

  4. Enter the complete name of your Terraform repository and click the SEARCH button to make sure Octopus has access to the private repository. If you do not specify the full path of the repository, the search will display public repositories with similar names.

Store an AWS account reference in Octopus

AWS accounts store the credentials to connect to Amazon Web Services.

  1. On the Infrastructure page select Accounts
  2. In the Amazon Web Services section click the ADD ACCOUNT button
  3. Enter a name, description, Access Key and Secret Key for the account. Enable the usage in tenanted and untenanted deployments, and click the SAVE AND TEST link.



Store SSH Keys in Octopus

Store your private keys in Octopus, so you can reference them in your variables.

  1. On the Infrastructure page select Accounts
  2. In the SSH Key Pairs section click the ADD ACCOUNT button
  3. Enter the SSH key name, the username, and click the up arrow to upload the SSH private key file to the Octopus server.
    1. Select the Include in both tenanted and untenanted deployments radio button.
    2. In the Associated Tenants section select the tenant groups you want to use the key for.
  4. Click the SAVE button to save the key in Octopus

Create a Project

The project contains the reference to the Terraform script to launch the server, the connection to the GitHub repository where the Terraform script is stored, and credentials to access the AWS account. The Terraform script and GitHub connection are going to be the same for every server launch, but the AWS account credentials have to be different for each AWS account. If you work with multiple AWS accounts, you need to create separate projects for each AWS account.

  1. On the Projects Octopus page scroll down to the DevOps project group and click the ADD PROJECT button
  2. Enter a name and description for the new project, and click the ADVANCED SETTINGS link
  3. Select the lifecycle you have created earlier, and click the SAVE button.

Create the launch process

  1. On the left side select the Process menu item
  2. On the Process page click the ADD STEP button
  3. Type “terraform” into the filter field and select Apply a Terraform template step

Create the variables

  1. In the project click the Variables menu item
  2. Enter the name of the variable and click the value field
  3. Click the CHANGE TYPE link
  4. Select the Amazon Web Services Account type
  5. Enter a description and select the AWS account you have created above, and click the DONE button.
  6. Create the rest of the variables.
  7. Add prefixes to the variables that will be set in the OS-specific parent project, in the tenant, environment, and target.

Update the variable names in the Terraform script

When we added prefixes to the project variables to show which component is responsible for setting each value, we changed the variable names. Update the Terraform script to reflect the change:

ami = "#{os.instance_ami}"
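
A fuller fragment of the generic launch script could look like this (a sketch; os.instance_ami, account.aws_key_name, and environment.instance_tag_environment follow the prefix convention used in this guide, while os.instance_type is an illustrative name):

resource "aws_instance" "default" {
  ami           = "#{os.instance_ami}"
  instance_type = "#{os.instance_type}"  # illustrative variable name
  key_name      = "#{account.aws_key_name}"

  tags {
    Environment = "#{environment.instance_tag_environment}"
  }
}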


Create an OS-specific project that calls the generic project

There are values that are different for each operating system in each AWS account.

The AMI ID depends on the operating system and the application environment. We can test the new version of the operating system in the development environment, while we are still launching servers with the old, tested version for production.

The common, OS-specific security group ID is different in every AWS account. It depends on the AWS account and the operating system.

To be able to set the value of this variable based on the OS and the AWS account, or the OS and the application environment, we need to create an OS-specific project that passes the appropriate variable values to the generic project.

  1. On the Projects page scroll down to the DevOps project group and click the ADD PROJECT button
  2. Enter the name and description of the new project and click the ADVANCED SETTINGS link
  3. Select the DevOps-aws-lifecycle
  4. Click the Process menu item
  5. Click the ADD STEP button
  6. Select the Deploy a Release step
  7. Enter the name for the step and select the generic AWS EC2 Launch project

Create variables

Set the OS-specific variables in the OS-specific parent project. To avoid variable name collisions later, add the “parent.” prefix to the variables passed from the parent to the child project.

Settings

  1. Click the Settings menu item
  2. Require a tenant for the deployment, allow deployments without a target, and set the project to ask in case of an error. Chef raises an error when it reboots the server, so if the Chef cookbook contains the reboot resource, we need to accept the error for a successful deployment.
  3. Click the Variables menu item
  4. Add the OS-specific variables

Create Library Variable Templates

Library Variable Sets specify the variables that have to be set in a higher-level component, such as a tenant, environment, or target.

  1. On the Library Octopus page select Variable Sets
  2. On the Variable Sets page click the ADD NEW VARIABLE SET button
  3. Enter a name and description for the library variable set
  4. Click the VARIABLE TEMPLATES tab
  5. On the VARIABLE TEMPLATES tab click the ADD TEMPLATE link
  6. Add the variables to the Library Variable Template. For pre-defined choices use the Drop down control type. To avoid variable name collisions when we pass the variable values to the child project later, add the “parent.” prefix to every variable.

  7. Save the Library Variable Template

Add the Library Variable Templates to the OS-specific project

Add the Library Variable Sets to the OS-specific project, so the tenants can set them with the instance-specific values.

  1. Open the project and select the Variables menu item
  2. Select the Library Sets menu item
  3. Click the INCLUDE LIBRARY VARIABLE SETS button
  4. Select the newly created Library Variable Set

Pass the variables from the parent project to the child project

The parent project can pass variables to the child project. Select the variables to pass in; you need to do this in every parent project that calls the common child project.

  1. In every parent project select Process
  2. Select the process that calls the child project
  3. Click the Variables section
  4. Click the ADD VARIABLE link
  5. Add the OS-specific variables set in the parent project and the host-specific variables defined in the Library Variable Template and set in the tenant.

Create a Tenant

Tenants are server types that use the same Octopus project. A tenant can be an application or a software development project where the servers share common attributes and run on the same operating system type, like all Linux, or all Windows, so the same Terraform script can launch them.

  1. In Octopus select the Tenants page and click the ADD TENANT button
  2. Enter the name of the new tenant
  3. Click the Setting button to select a logo for the tenant
  4. Click the logo placeholder to open the upload area
  5. Click the file selection button to browse to a 100×100 image file

Tag the Tenant

Tags help you to organize your tenants.

  1. On the Library page select Tenant Tag Sets
  2. Click the ADD TAG SET button
  3. Enter a name and description for the tag set, and create the first tag, and click the ADD link
  4. Click the EDIT TAGS link to add tags to the tenant
  5. Click the small arrow to select the tags

Connect the Tenant to the project

  1. On the Tenants page select the tenant you want to add to the project
  2. Click the CONNECT PROJECT button
  3. Select the project and all environments you want to launch instances for this tenant (software project)

Set the tenant variable values

  1. On the Tenant page select Variables
  2. Select the COMMON VARIABLES tab
  3. Click the name of the template to open it
  4. Set the value and click the Bind icon
  5. When all required variables are set, the SAVE button becomes available.

Create a deployment target (server)

Deployment targets are the servers we are launching. In enterprise settings, all servers have a standardized name that conforms to the company standards.

  1. On the Infrastructure page select Deployment Targets
  2. Click the ADD DEPLOYMENT TARGET button
  3. Select the Cloud Region radio button
  4. Enter the server name, the environment, and a role, and do not click the Save button yet
  5. In the Restrictions section select the Include in both tenanted and untenanted deployments radio button, and select the Tenant you have created.

Create project releases

If there was a change in the child project, we need to create a release for the child project and all parent projects that call it.

Create a release of the child project

  1. On the project page click the CREATE RELEASE button
  2. If you use GitHub as the Terraform script source, a no version specified message is displayed in the Packages section
  3. Tag the GitHub repository. Octopus needs a version to link to a GitHub package.
    git tag 1.0.0
    git push --tags
  4. Click the SEARCH link to find the latest version of the repository
  5. Select the latest version of the GitHub repository
  6. Click the SAVE button

Create releases for the parent projects

  1. Open the project
  2. Click the CREATE RELEASE button
  3. Enter the Release Notes, so later you can identify this release

Deploy the server

  1. Open the OS-specific project
  2. Click the Releases menu item
  3. Select the release
  4. Click the DEPLOY TO button and select the environment

    1. If the project was already deployed to an environment, that environment does not show up in the list, so click the ellipsis and select Deploy To from the dropdown menu.
    2. Select the environment from the list
  5. Click the tenant section to select the tenant (server) to launch
  6. Click the DEPLOY button

Adding new servers, operating systems, environments, and AWS accounts

New Server

  1. Create a new Tenant

New AMI

  1. Update the AMI in the OS-specific parent project
  2. Open the Releases page of the parent project
  3. Select the latest release
  4. Click the UPDATE VARIABLES link

New Application

  1. Create a new Tenant Tag in the “DevOps Applications” Tenant Tag Set
  2. Add the new Tenants to the Tenant Tag of the application

New Operating System

  1. Clone one of the DevOps AWS EC2 OS… projects
  2. Update the project settings, because those are not cloned
    1. Tenants required for all deployments
    2. Deployments with no target allowed
  3. Update the project variables for the new operating system
  4. Create a release of the new project
  5. Create new Tenants and link them to the new OS project

New Environment or AWS account

  1. Add the new account if necessary. See Store an AWS account reference in Octopus above.
  2. Create the DevOps-env-[environment][account number] environment
  3. Add the new environment to the “DevOps-aws-lifecycle”
  4. Add the new environment to the following variables:
    1. Add to the base project: “DevOps AWS EC2 Launch”
      1. account.aws_key
      2. account.aws_key_name
      3. environment.aws_account_credentials
      4. environment.chef_environment
      5. environment.instance_tag_environment
    2. Add to all OS-specific projects: “DevOps AWS EC2 OS…”
      1. parent.os.account.instance_common_security_group_id

New Chef Attribute

To pass in a new Chef attribute from the Tenant:

  1. Add the new attribute to the chef_attributes.json file with variable substitution
    {
     "set_hostname": "#{host.set_hostname}",
     "agent_name": "#{host.agent_name}",
     "add_to_domain": "#{host.add_to_domain}"
    }
  2. Add the new variable to the Library Variable Set’s Variable Template with the “parent.” prefix
  3. Add the new variable to the passed in variable list in every parent project process. See Pass the variables from the parent project to the child project

Troubleshooting

To save the variable values to the log, add the following two variables to the project:

OctopusPrintVariables True
OctopusPrintEvaluatedVariables True

For the change to take effect, either create a new release or update the variable snapshot of the existing release:

  1. On the Project -> Releases page select the latest release
  2. In the Variable Snapshot section click the SHOW link
  3. Click the UPDATE VARIABLES link

More info at https://octopus.com/docs/support/debug-problems-with-octopus-variables


amazon-ebs: Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

When you launch a Linux AWS EC2 instance with Terraform or create a Linux AWS image with Packer, one of the following errors is displayed:

amazon-ebs: Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

aws_instance.default: 1 error(s) occurred:
* ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

Possible causes

Incorrect username

In the Packer template the “ssh_username” attribute contains the username to connect to the instance; in the Terraform script it is the “user” attribute of the “connection” block. It is important to use the correct username for each operating system. For the list of correct usernames for each operating system, see Create a server image with Packer.

Wrong SSH key

Make sure you use the correct SSH key to connect to the instance. In the Terraform file, the “key_name” attribute of the “aws_instance” resource specifies the SSH key to launch the instance with, and the “private_key” attribute of the “connection” block contains the key itself as a string.
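
For reference, the relevant attributes look like this in a Terraform script (a sketch; the AMI ID, key pair name, and key file path are illustrative):

resource "aws_instance" "default" {
  ami      = "ami-12345678"  # illustrative AMI ID
  key_name = "my-ssh-key"    # the AWS key pair the instance is launched with

  connection {
    type        = "ssh"
    user        = "ec2-user"  # must match the OS of the image, e.g. "ubuntu" on Ubuntu AMIs
    private_key = "${file("~/.ssh/my-ssh-key.pem")}"  # the private key itself as a string
  }
}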


SQL Server AWS RDS instance ALARM FreeableMemory <=... MB

The SQL database server uses the available memory for caching to speed up database operations. If we do not restrict the SQL Server memory usage, the operating system will not have enough memory to run. This setting is also necessary for an AWS RDS instance; otherwise, you will get the alert

ALARM FreeableMemory <=… MB

In AWS we can specify the maximum SQL Server memory dynamically, so every RDS instance type will leave enough memory for the operating system regardless of the available memory size. In this example, we will leave 1.5 GB (1536 MB) of memory for the operating system, so the default 1024 MB free memory alarm will not sound.

DBInstanceClassMemory returns the total memory size in bytes, so we need to convert the value to MB to be able to set “max server memory (mb)” to the correct number.
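
For example, on an instance type with 16 GiB of memory, DBInstanceClassMemory returns 17179869184 bytes; dividing by 1048576 (1024 × 1024) gives 16384 MB, and subtracting 1536 MB sets “max server memory (mb)” to 14848 MB.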

If you use Terraform to create your RDS instance, create a script with the aws_db_parameter_group resource to create a Parameter Group in your AWS account. You need to execute it once, as all RDS instances will use the same group.

resource "aws_db_parameter_group" "default" {

  name = "max-server-memory"
  family = "sqlserver-se-12.0"
  description = "DBInstanceClassMemory"

  parameter {
    name = "custom-sqlserver-se-12-0"
    value = "SUM({DBInstanceClassMemory/1048576},-1536)"
    apply_method = "immediate"
  }
}

In the RDS instance creation script assign the Parameter Group to the RDS instance and increase the timeout of the create and delete operations to make sure Terraform waits during the creation and deletion process.

resource "aws_db_instance" "default" {
...
  # Add the Max Server Memory parameter group to the instance
  parameter_group_name = "custom-sqlserver-se-12-0"
...
  # Give Terraform enough time to create and delete the instance
  timeouts {
    create = "120m"
    delete = "120m"
  }
...
}


Terraform provider.aws: no suitable version installed

Newer versions of Terraform do not bundle all provider plugins with the application. When you try to access a provider for the first time, Terraform may not be able to communicate with it:

* provider.aws: no suitable version installed
version requirements: “(any version)”
versions installed: none

To download the necessary plugins, initialize the Terraform script directory:

terraform init

The command will instruct Terraform to get all referenced modules and download all necessary plugins.


Setup failed: Failed to copy slug dir: lstat /Users: no such file or directory error in Terraform Enterprise

When you try to execute a Terraform configuration in Terraform Enterprise (Atlas), you may get the error message:

Setup failed: Failed to copy slug dir: lstat /Users: no such file or directory

Cause:

The Git repository contains the .terraform/modules directory, and the Terraform Enterprise server cannot get the latest modules from GitHub.

Solution:

  1. Create a .gitignore file in the repository and add these lines:
    */.terraform/modules/
    
    # Ignore DS_Store if working on a mac
    .DS_Store
  2. Delete the .terraform directory and push the changes to GitHub
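
A sketch of the corresponding git commands, assuming you run them in the repository root:

git rm -r --cached .terraform   # remove the directory from the repository index
rm -rf .terraform               # delete the local copy
git add .gitignore
git commit -m "Remove the .terraform directory"
git push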


Create a Terraform Enterprise environment

If you are just starting to work with Terraform Enterprise, you need to create a Terraform environment.

Preparation

GitHub account

To access GitHub from Terraform Enterprise, create a GitHub team and account with admin access to the GitHub repository that will store the Terraform scripts.

  1. Create a GitHub team that will have admin access to the Terraform script repository,
  2. Create a GitHub user that Terraform Enterprise will use to access the Terraform script repository,
  3. Add the new GitHub user to your GitHub organization,
  4. Log into GitHub with the new user account credentials and wait for the verification email from GitHub,
  5. Open the verification email from GitHub and click the button in the email to verify your email address,
  6. Stay logged into GitHub with the new user account and wait for the invitation email from GitHub to join the organization,
  7. Open the invitation email from GitHub and click the button to accept the invitation,
  8. In GitHub add the new GitHub user to the new team,
  9. Create a GitHub repository to store the Terraform scripts,
  10. Add the new team to the Terraform script repository with admin rights.

If the Terraform modules are in a separate GitHub repository, add the new team to that repository with Read rights.

AWS account

Create a new user account that Terraform Enterprise will use to access AWS

  1. Create a new AWS user,
  2. Add user rights
    1. AmazonEC2FullAccess
    2. AmazonS3FullAccess

Connect your Atlas account to GitHub

We will authorize Terraform Enterprise to access the Terraform script GitHub repository. We set this connection up in our personal profile in Terraform Enterprise, but when we create new Terraform environments, Terraform Enterprise will use this connection to access the GitHub repository the environment is connected to. Do not use your personal GitHub account for the connection, because if your personal account loses access to the GitHub repository, the Terraform environment will not be able to connect to it. For this connection, we have already created a dedicated user in GitHub above.

  1. In a web browser log into GitHub with the user account you want Terraform Enterprise to use to access the GitHub repository,
  2. In the same web browser open a new tab and navigate to https://atlas.hashicorp.com/settings/connections and log into Atlas with your Hashicorp account,
  3. Click Connect GitHub to Atlas
  4. On the GitHub authorization page click the Authorize hashicorp button,
  5. Later if you want to unlink Atlas from your GitHub account, in the Personal section select Connections, and click the Unlink button.

Create the GitHub repository

Create a repository in GitHub to store the Terraform configuration files. You can specify a subdirectory for each Terraform environment to watch, so one repository can serve both security group creation and EC2 instance creation for the same developer group.

  1. Create the GitHub repository,
  2. Create a folder that the Terraform Enterprise environment will monitor for changes.

Create a new environment

    1. In a web browser navigate to https://atlas.hashicorp.com/configurations/import,
    2. Log in with your Atlas user account,
    3. In the dropdown list of the New Environment page (click the text of the dropdown, not the arrow) select Link to GitHub.
      Name the environment to include the type of the server you want to launch and the server environment, for example, test_linux_sandbox.
      To save the environment click the Create & Continue button,
    4. On the left side select Variables,
    5. In the Environment section click the Edit button,
  If the environment is linked to the Terraform Enterprise Server:

    1. The environment has been created. Click the your account settings page link to create a personal token, which allows you to access your account without using your username and password.
    2. Enter a description for the token and click the Generate Token button,
    3. Save the token value in a safe place,
    4. To return to the Runs page, select Terraform Enterprise in the upper left corner,
    5. Click the name of the environment you want to work with
    6. Configure the environment. On the left side click Settings
      1. Uncheck Plan on artifact uploads
    7. Store your personal access token on your workstation to be able to access Terraform Enterprise. Open a Bash window and execute the export command:

      export ATLAS_TOKEN=[YOUR AUTHENTICATION TOKEN]
    8. Add the terraform backend configuration to the .tf file on your workstation

      terraform {
        backend "atlas" {
          name    = "YOUR_ORGANIZATION/YOUR_ENVIRONMENT"
          address = "https://atlas.hashicorp.com"
        }
      }
    9. Navigate to the Terraform configuration folder and initialize it with the backend configuration you have entered in the .tf file.
      terraform init
    10. Push the Terraform configuration to the Enterprise server

      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    11. On the left side click Configurations to see the list of configurations your organization has pushed to the server.
    12. Run a Terraform Plan to see if your code works
      terraform plan
    13. Execute your Terraform config on your workstation
      terraform apply
    14. Push the result of the run to the Terraform Enterprise server
      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    15. To view the result of the run, on the left side click Runs.
    16. To view the created infrastructure states on the left side click States

Create the Terraform scripts


More resources

For a tutorial on Multiple Environments in Terraform see https://github.com/hashicorp/multiple-envs

Terraform Enterprise Administration

Create a new organization

  1. In your web browser navigate to https://atlas.hashicorp.com/help/organizations,
  2. On the left menu bar under Organizations click Create,
  3. In the middle of the screen click the new organization page link,
  4. Enter the email address and user name for the organization owner and click the Create organization button.

Error waiting for instance (i-…) to become ready: unexpected state ‘terminated’, wanted target ‘running’

When you launch a server instance with Terraform, sometimes the error message does not contain the underlying cause. When the cloud provider cannot complete the request, Terraform often displays a generic error message:

Error waiting for instance (i-...) to become ready: unexpected state 'terminated', wanted target 'running'

To find the root cause of the error in AWS

  1. Log into the AWS console and navigate to the EC2 section,
  2. Search for the instance by the instance Id,
  3. You can find the error message at the bottom of the Description tab

In our specific case, it was Client.VolumeLimitExceeded: Volume limit exceeded

We had to increase the volume limit to be able to launch more large EC2 instances.
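
If you prefer the command line, the same state reason is available through the AWS CLI (a sketch; the instance ID is illustrative):

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].StateReason'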

Keep multiple versions of applications on Macintosh

Most DevOps tools are still in beta; many times a new version is not compatible with your existing scripts or has an error that stops your scripts from working. To keep multiple versions of an application and easily switch between them, create symbolic links that point to the version you want to execute.

Create version-specific locations

Create a folder for your optional applications.

sudo mkdir -p /opt

Make yourself the owner of the “opt” directory and its children. YOUR_GROUP should be:

  • MacOS: wheel
  • Ubuntu: your username

sudo chown -R YOUR_USER_NAME:YOUR_GROUP /opt

Set the security of the folder.

sudo chmod 755 /opt

We will create a directory structure to place each version of the application into its own folder. This example uses Packer.
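
For example (a sketch; the version numbers match the ones used below):

mkdir -p /opt/packer/packer_1.2.0
mkdir -p /opt/terraform/terraform_0.10.0
mkdir -p /opt/vagrant/vagrant_1.9.5

# Download and unpack each release into its own version folder, e.g.
# unzip packer_1.2.0_darwin_amd64.zip -d /opt/packer/packer_1.2.0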

Create a symbolic link to point to the application in the appropriate version folder. These are the examples for Packer, Terraform, and Vagrant. Notice, that the Vagrant application is in the “bin” directory, so the symbolic link has to point there.

If there is an existing symbolic link that points to the latest version of the application, rename it so you can quickly return to the prior version. This can come in handy when we need to destroy instances created with the prior version of Terraform.

cd /opt/terraform
mv terraform terraform_v_0.9.11
cd /opt/packer
mv packer packer_v_1.0.4

Create a new symbolic link that points to the latest version of the application.

# For Packer
cd /opt/packer
ln -s /opt/packer/packer_1.2.0/packer packer

# For Terraform
cd /opt/terraform
ln -s /opt/terraform/terraform_0.10.0/terraform terraform

# For Vagrant
cd /opt/vagrant
ln -s /opt/vagrant/vagrant_1.9.5/bin/vagrant vagrant  # The vagrant application is in the bin folder

Add the main application folder to the PATH in the config file of your terminal:

  • for iTerm2 with zsh: ~/.zshrc
  • for other terminals: ~/.bash_profile

PATH=/opt/packer:$PATH
PATH=/opt/terraform:$PATH
PATH=/opt/vagrant:$PATH
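
To verify which version is active after opening a new terminal (assuming the symbolic links above):

which terraform    # should print /opt/terraform/terraform
terraform version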

Vagrant

For Vagrant installation see Vagrant


To change the version you want to execute

  • Delete the symbolic link,
    cd /opt/packer
    rm packer
  • Create a new symbolic link pointing to the other version of the application as above.