The specified Security Group and Parameter Group are not set in the RDS instance

If the Terraform apply execution times out during RDS instance creation, the specified Security Group and Parameter Group are not set on the RDS instance.

The solution is to increase the timeouts in the aws_db_instance resource. When a Multi-AZ RDS instance is launched from a snapshot, the process can take more than 55 minutes, while the default timeout is 40 minutes.

resource "aws_db_instance" "default" {
...
  timeouts {
    create = "120m"
    delete = "120m"
  }
...
}

SQL Server AWS RDS instance ALARM FreeableMemory <=... MB

SQL database servers use the available memory for caching to speed up database operations. If we do not restrict the SQL Server memory usage, the operating system will not have enough memory to run. This limit is also necessary on an AWS RDS instance, otherwise you will get the alert

ALARM FreeableMemory <=… MB

In AWS we can specify the maximum SQL Server memory dynamically, so every RDS instance type will leave enough memory for the operating system regardless of the size of the available memory. In this example, we will leave 1.5 GB (1536 MB) of memory for the operating system, so the default 1024 MB free memory alarm will not sound.

DBInstanceClassMemory returns the total memory size in bytes, so we need to convert the value to MB to be able to set “max server memory (mb)” to the correct number.
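For example, on an instance class with 16 GiB of memory (the actual DBInstanceClassMemory value can be slightly lower than the nominal instance memory), the formula used below evaluates roughly to

16 GiB ≈ 16384 MB
SUM(16384, -1536) = 14848 MB

so SQL Server is capped at about 14.8 GB and 1536 MB is left for the operating system.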

If you use Terraform to create your RDS instances, write a script with the aws_db_parameter_group resource to create the Parameter Group in your AWS account. You only need to execute it once, as all RDS instances can use the same group.

resource "aws_db_parameter_group" "default" {

  name = "max-server-memory"
  family = "sqlserver-se-12.0"
  description = "DBInstanceClassMemory"

  parameter {
    name = "custom-sqlserver-se-12-0"
    value = "SUM({DBInstanceClassMemory/1048576},-1536)"
    apply_method = "immediate"
  }
}

In the RDS instance creation script, assign the Parameter Group to the RDS instance and increase the timeouts of the create and delete operations to make sure Terraform waits for the creation and deletion process to finish.

resource "aws_db_instance" "default" {
...
  # Add the Max Server Memory parameter group to the instance
  parameter_group_name = "custom-sqlserver-se-12-0"
...
  timeouts {
    create = "120m"
    delete = "120m"
  }
...
}


Failed to complete #create action: [undefined method `version' for nil:NilClass] on default…

When you execute kitchen converge to launch an EC2 instance in AWS with Chef Test Kitchen, you get the error message:

>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>> Failed to complete #create action: [undefined method `version' for nil:NilClass] on default-windows-2012r2
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

The kitchen-ec2 driver tries to read the operating system version from the name of the image, but it cannot find it.

When you create your own AMI (Amazon Machine Image), make sure the version of the operating system is in the name:

Windows-2012
Windows-2012r2
Windows-2012r2sp1
RHEL-7.2

An acceptable name is my_windows-2012r2_base-1
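If you reference your own AMI in .kitchen.yml, the platform entry could look like the sketch below; the platform name, AMI ID, and driver attributes are placeholders and depend on your setup:

---
driver:
  name: ec2

platforms:
  - name: windows-2012r2
    driver:
      # Custom AMI whose name contains the OS version, e.g. my_windows-2012r2_base-1
      image_id: ami-xxxxxxxx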

Could not load the 'ec2' driver from the load path

When you execute kitchen list and the driver in your .kitchen.yml file is “ec2”, the following error message appears:

>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ClientError
>>>>>> Message: Could not load the 'ec2' driver from the load path. Please ensure that your driver is installed as a gem or included in your Gemfile if using Bundler.
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

If you have recently installed the Chef Development Kit, you need to install the Kitchen EC2 driver to be able to launch servers in AWS.

To install the Kitchen EC2 driver, execute

chef gem install kitchen-ec2


CloudExceptions::CloudException - 400: VPCIdNotSpecified: No default VPC for this user

When you launch a new EC2 instance in the AWS cloud from the command line or with other cloud management platforms, you may get the error message:

CloudExceptions::CloudException - 400: VPCIdNotSpecified: No default VPC for this user (RequestID: …)

This can happen when the specified Subnet ID is not a valid subnet in the selected availability zone (data center).
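To check which subnets exist in the selected availability zone, you can list them with the AWS CLI; the region and availability zone below are placeholders:

aws ec2 describe-subnets \
  --region us-east-1 \
  --filters "Name=availability-zone,Values=us-east-1a" \
  --query "Subnets[].{SubnetId:SubnetId,Cidr:CidrBlock,Vpc:VpcId}" \
  --output table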

Create the AWS credentials file from a Chef Data Bag

When a process on a server instance needs access to an AWS account, the user who will execute the AWS CLI commands needs to be able to automatically authenticate in AWS.

For automatic AWS authentication, the AWS CLI creates two files in the .aws directory:

  • config and
  • credentials.

The location of this directory depends on the operating system and the type of user.

  • On Linux, the location is ~/.aws (the user’s home directory)
  • On Windows, it is located at C:\Users\USER_NAME\.aws
  • On Windows, if the file was created by SYSTEM, the location is C:\Windows\System32\config\systemprofile\.aws
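
The aws configure command generates both files in the standard AWS CLI INI format; the profile name and values below are placeholders.

The credentials file:

[MY_PROFILE_1]
aws_access_key_id = MY_ACCESSKEY_1
aws_secret_access_key = MY_SECRET_KEY_1

The config file:

[profile MY_PROFILE_1]
region = MY_REGION_1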

Store the AWS key values

To create these files, you need to store the AWS Access Key and Secret Key. The safest place for these values is an encrypted data bag. To automatically generate the AWS files, create a data bag file and name it the same as the “id” in the following structure:

{
  "id": "MY_DATA_BAG_ITEM_NAME",
  "MY_PROFiLE_1": {
    "region": "MY_REGION_1",
    "aws_access_key_id": "MY_ACCESSKEY_1",
    "aws_secret_access_key": "MY_SECRET_KEY_1"
  },
  "MY_PROFiLE_2": {
    "region": "MY_REGION_2",
    "aws_access_key_id": "MY_ACCESSKEY_2",
    "aws_secret_access_key": "MY_SECRET_KEY_2"
  }
}

To create and encrypt the data bag, see my post on Chef Data Bags.

Create the AWS authentication files

  1. In your Chef recipe, first install the AWS CLI and reboot the server, so the new path entry will be available for the Chef process.
  2. The following Chef code will create the AWS config and credential files. The script
    1. opens and decrypts the data bag,
    2. loads it into a hash table,
    3. iterates through the hash items,
    4. skips the “id” item,
    5. stores the AWS key values in a temporary file,
    6. executes the “aws configure” command to generate the AWS config and credential files.
  # Iterate through the data bag and create the credentials file

  puts "***** Creating the AWS credentials file"

  # Load the encrypted data bag into a hash
  aws_credentials = Chef::EncryptedDataBagItem.load('MY_DATA_BAG_NAME', 'MY_DATA_BAG_ITEM_NAME').to_hash

  # Iterate through the items, skip the "id"
  aws_credentials.each_pair do |key, value|

    # skip the "id"
    next if key == "id"

    # Add the credentials to the .aws/credentials file
    puts "Account #{key}, Region #{value['region']}"

    batch "add_aws_credentials_#{key}" do
      code <<-EOF echo #{value["aws_access_key_id"]}> input.txt
        echo #{value["aws_secret_access_key"]}>> input.txt
        echo #{value["region"]}>> input.txt
        echo.>> input.txt
        aws configure --profile #{key} < input.txt
      EOF
    end

  end


Create a Terraform Enterprise environment

If you are just starting to work with Terraform Enterprise, the first step is to create a Terraform environment.

Preparation

GitHub account

To access GitHub from Terraform Enterprise, create a GitHub team and account with admin access to the GitHub repository that will store the Terraform scripts.

  1. Create a GitHub team that will have admin access to the Terraform script repository,
  2. Create a GitHub user that Terraform Enterprise will use to access the Terraform script repository,
  3. Add the new GitHub user to your GitHub organization,
  4. Log into GitHub with the new user account credentials and wait for the verification email from GitHub,
  5. Open the verification email from GitHub and click the button in the email to verify your email address,
  6. Stay logged into GitHub with the new user account and wait for the invitation email from GitHub to join the organization,
  7. Open the invitation email from GitHub and click the button to accept the invitation,
  8. In GitHub add the new GitHub user to the new team,
  9. Create a GitHub repository to store the Terraform scripts,
  10. Add the new team to the Terraform script repository with admin rights.

If the Terraform modules are in a separate GitHub repository, add the new team to that repository with Read rights.

AWS account

Create a new user account that Terraform Enterprise will use to access AWS

  1. Create a new AWS user,
  2. Attach the following policies to the user (a possible AWS CLI sketch follows this list):
    1. AmazonEC2FullAccess
    2. AmazonS3FullAccess
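
The same setup can be scripted with the AWS CLI; the user name below is a placeholder:

aws iam create-user --user-name terraform-enterprise
aws iam create-access-key --user-name terraform-enterprise
aws iam attach-user-policy --user-name terraform-enterprise --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name terraform-enterprise --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess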

Connect your Atlas account to GitHub

We will authorize Terraform Enterprise to access the Terraform script GitHub repository. We set this connection up in our personal profile in Terraform Enterprise, but when we create new Terraform environments, Terraform Enterprise will use this connection to access the GitHub repository the environment is connected to. Do not use your personal GitHub account to make the connection: if your personal account loses access to the GitHub repository, the Terraform environment will not be able to connect to it. This is why we created a dedicated GitHub user above.

  1. In a web browser, log into GitHub with the user account you want Terraform Enterprise to use to access the GitHub repository,
  2. In the same web browser open a new tab and navigate to https://atlas.hashicorp.com/settings/connections and log into Atlas with your Hashicorp account,
  3. Click Connect GitHub to Atlas
  4. On the GitHub authorization page click the Authorize hashicorp button,
  5. Later if you want to unlink Atlas from your GitHub account, in the Personal section select Connections, and click the Unlink button.

Create the GitHub repository

Create a repository in GitHub to store the Terraform config files. You can specify a subdirectory for each Terraform Enterprise environment to watch, so one repository can serve, for example, both the security group creation and the EC2 instance creation for the same developer group.

  1. Create the GitHub repository,
  2. Create a folder that the Terraform Enterprise environment will monitor for changes (a possible layout is sketched below).
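
A possible repository layout, where every environment watches its own subdirectory (the folder names are placeholders):

terraform-scripts/
  security-groups/     watched by the security group environment
  ec2-instances/       watched by the EC2 instance environment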

Create a new environment

    1. In a web browser navigate to https://atlas.hashicorp.com/configurations/import,
    2. Log in with your Atlas user account,
    3. In the dropdown list of the New Environment page (click the text of the dropdown, not the arrow) select Link to GitHub.
      Name the environment to include the type of the server you want to launch and the server environment, for example, test_linux_sandbox.
      To save the environment click the Create & Continue button,
    4. On the left side select Variables,
    5. In the Environment section click the Edit button,
  1. If the environment is linked to the Terraform Enterprise Server

    1. The environment has been created. Click the “your account settings page” link to create a personal access token. The token allows you to access your account without using your username and password.
    2. Enter a description for the token and click the Generate Token button,
    3. Save the token value in a safe place,
    4. To return to the Runs page, select Terraform Enterprise in the upper left corner,
    5. Click the name of the environment you want to work with
    6. Configure the environment. On the left side click Settings
      1. Uncheck Plan on artifact uploads
    7. Store your personal access token on your workstation to be able to access Terraform Enterprise. Open a Bash window and execute the export command:

      export ATLAS_TOKEN=[YOUR AUTHENTICATION TOKEN]
    8. Add the terraform backend configuration to the .tf file on your workstation

      terraform {
        backend "atlas" {
          name    = "YOUR_ORGANIZATION/YOUR_ENVIRONMENT"
          address = "https://atlas.hashicorp.com"
        }
      }
    9. Navigate to the Terraform configuration folder and initialize it with the backend configuration you have entered in the .tf file.
      terraform init
    10. Push the Terraform configuration to the Enterprise server

      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    11. On the left side click Configurations to see the list of configurations your organization has pushed to the server.
    12. Run a Terraform Plan to see if your code works
      terraform plan
    13. Execute your Terraform config on your workstation
      terraform apply
    14. Push the result of the run to the Terraform Enterprise server
      terraform push -name="YOUR_ORGANIZATION/YOUR_ENVIRONMENT" -atlas-address="https://atlas.hashicorp.com"
    15. To view the result of the run, on the left side click Runs.
    16. To view the created infrastructure states, on the left side click States.

Create the Terraform scripts


More resources

For a tutorial on Multiple Environments in Terraform see https://github.com/hashicorp/multiple-envs

Terraform Enterprise Administration

Create a new organization

  1. In your web browser navigate to https://atlas.hashicorp.com/help/organizations,
  2. On the left menu bar under Organizations click Create,
  3. In the middle of the screen click the new organization page link,
  4. Enter the email address and user name for the organization owner and click the Create organization button.

The Splunk Search Language (SPL)


Search Terms: see Searching in Splunk

Commands: tell Splunk what we want to do with the search result

  • Charts
  • Computing statistics
  • Formatting

Functions: explain how we want to chart, compute and evaluate the results

Arguments: variables we apply to the functions

Clauses: grouping and definition of results

Separator

Use pipes (|) to separate the components of the search language. The result of the component on the left of a pipe is passed to the component on its right; after the initial search, no additional data is read from the index.

sourcetype=access_combined | top age | fields name

Editor features

  • Color coding
    • orange: Boolean operators and command modifiers
    • blue: commands
    • green: command arguments
    • purple: functions
  • If the cursor is behind a parenthesis, the matching parenthesis is highlighted
  • Hotkeys
    • Move each pipe to a new line: ⌘-\ (Mac) , ctrl-\ (Windows)

Commands

fields

Include and exclude fields from the search result. Separate the fields with space or comma.

  • Include fields. Happens before field extraction, can improve performance.
sourcetype=access_combined | fields status, clientip
  • Exclude fields (use a minus sign after the word fields). This only affects the displayed result; there is no performance benefit.
sourcetype=access_combined | fields - status, clientip


table

Displays the results in a table that contains only the specified fields. Separate the fields with a comma.

  • Field names are the table column headers.
sourcetype=access_combined | table status, clientip


rename

Renames table fields for display. Use a space to separate the fields.

  • Wrap the name in quotes if the name contains space,
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
status as "Status"
  • In subsequent components, we need to use the new name of the field, because that is passed forward by the pipe separator.
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
| fields - "IP Address"


dedup

Removes duplicate events that share common values. Separate the fields with space.

sourcetype=access_combined
| dedup first_name last_name 
| table first_name last_name


sort

Ascending or descending order of the results.

  • Ascending order. The default order is ascending, the plus sign (+) also causes ascending sort.
sourcetype=access_combined
| table first_name last_name
| sort first_name last_name
  • Descending order
    • If there is a space between the minus sign and the field name, the descending order applies to all specified fields:
      sourcetype=access_combined
      | table first_name last_name
      | sort - age wage
    • If there is no space between the minus sign and the field name, the descending order only applies to that field:
      sourcetype=access_combined
      | table first_name last_name
      | sort -age wage

limit argument

To limit the number of events returned, use the limit argument.

sourcetype=access_combined
| table first_name last_name
| sort -age wage limit=10


top

Finds the most common values of the given fields in the result set. Used to render the result in graphs.

sourcetype=vendor_sales
| top Vendor

It automatically provides the data in tabular form, displays the count and percent columns, and limits the results to 10 rows.

limit clause

  • Set the desired number of results.
sourcetype=vendor_sales
| top Vendor limit=20
  • To get all results, use limit=0
sourcetype=vendor_sales
| top Vendor limit=0
  • You can add more fields to the list separated by space or comma.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID, file
  • Change the title of the count and percentage columns.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file countfield="Product count" percentfield="Product percent"
  • Control the visibility of the count and percent fields.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file showcount=True/False showperc=True/False

  • Add a row with the count and percent values of the results that are not within the limit.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file useother=True/False

  • Specify the display value of the OTHER row:
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file otherstr="Total count"

by clause

Top three products sold by each vendor:

sourcetype=vendor_sales
| top product_name by Vendor limit=3


rare

Shows the least common values of the field set.

Has the same options as the top command.
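
For example, the three least common vendors, using the same sample sourcetype as above:

sourcetype=vendor_sales
| rare Vendor limit=3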



stats

Produces statistics of the search results.

Stats functions

count

  • The number of events matching the search criteria.
index=main sourcetype=access_combined_wcookie 
| stats count

  • To rename the “count” header, use “as”
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files"
  • Use “by” to group the result
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file

  • Add more fields with comma
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file, productId

  • Add a field to the count function to count events where the field is present
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files"

  • Compare the count to the total number of events
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files", count as "Total events"


distinct_count or dc

Count of unique values for a field.

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files"

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files" by productId


sum

Returns the sum of the numerical values.

index=main sourcetype=access_combined_wcookie 
| stats sum(bytes)


  • Count the events and sum the value
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes)

  • Group the sum and count values by a field. The functions must be in the same stats command (the same pipe segment) to work on the same set of data.
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes) by productId


avg

Returns the average of numerical values.

index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes"

  • Group the values by a field
index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes" by productId

  • Add count to the table
index=main sourcetype=access_combined_wcookie 
| stats count as "Number of files" avg(bytes) as "Average bytes" by productId


list

Lists all values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files"

  • Group the list of values by another field; note that repeated values are all listed.
index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files" by productId


values

Works like the list function, but returns the unique values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files"

  • Group the unique values by another field
index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files" by productId




Berks update fails with ‘Missing artifacts’ error message

When you add cookbooks as dependencies with the “depends” statement in the metadata.rb file of your Chef cookbook, you also have to specify the location of those cookbooks in the Berksfile to be able to test your cookbook in Chef Test Kitchen.

For all the cookbooks that are available on the Chef Supermarket, one line

source "https://supermarket.chef.io"

is sufficient to specify their location. If a cookbook is only available on GitHub, specify the location with

cookbook 'COOKBOOK_NAME', git: 'git@github.com:PATH_TO_COOKBOOK.git'

If the cookbook is available on the local drive of the workstation, specify the path with

cookbook 'COOKBOOK_NAME', path: '../COOKBOOK_FOLDER_NAME'

Use the above relative path if all of your cookbooks are under the same cookbooks directory.
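
Putting these together, a minimal Berksfile could look like the sketch below; the cookbook names, Git URL, and path are placeholders (the metadata line tells Berkshelf to read the dependencies from metadata.rb):

source "https://supermarket.chef.io"

metadata

cookbook 'my_github_cookbook', git: 'git@github.com:MY_ORG/my_github_cookbook.git'
cookbook 'my_local_cookbook', path: '../my_local_cookbook'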

If a reference to a Chef cookbook is missing from the Berksfile, the following message appears when you execute berks update.

Unable to satisfy constraints on package …, which does not exist, due to solution constraint (… = …). Solution constraints that may result in a constraint on …: [(… = …) -> (… >= …)]
Missing artifacts: ...
Demand that cannot be met: (… = …)
Unable to find a solution for demands: … (…)