Using Git

 

Frequently used Git commands

Display the list of added, deleted and modified files in the repository

git status

Display the changes in files since the last git add

git diff

Display the changes in the staging area after you have executed git add

git diff --staged

Add the changes to the staging area

git add .

Commit the changes with a message

git commit -m "My message"

Push the changes to the remote repository at GitHub, Bitbucket and others

git push

Pull the latest changes from the remote repository

git pull
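
A typical edit-commit-push cycle chains these commands together; a minimal sketch (the commit message is a placeholder):

git status                 # review which files changed
git diff                   # inspect the unstaged changes
git add .                  # stage the changes
git diff --staged          # verify what will be committed
git commit -m "Describe the change"
git pull                   # merge the latest remote changes first
git push                   # publish the commit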

 

Splunk configuration

Splunk stores the configuration values in files in the /opt/splunkforwarder directory structure.

Splunk Deployment server
/opt/splunkforwarder/etc/system/local/deploymentclient.conf

Example:
targetUri = DEPLOYMENT_SERVER_URL:8089

Splunk Forwarder address
/opt/splunkforwarder/etc/apps/tcpout-aws/local/outputs.conf

Example:
server = FORWARDER1_ADDRESS:9997,FORWARDER2_ADDRESS:9997

Linux event log (Splunk tails this file)
/var/log/messages

To log a message in the Linux event log:
logger "My message"

To find a message in the Linux event log:
grep "My message" /var/log/messages
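
To verify which values the forwarder actually uses after Splunk merges the configuration layers, the btool utility that ships with Splunk can print the effective settings; a sketch, assuming the default install path:

/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug
/opt/splunkforwarder/bin/splunk btool outputs list --debug

The --debug option also shows which file each value comes from.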

 

Splunk lookups

Lookups provide readable information to users, so they don’t have to interpret the raw codes returned in the reports.

Lookups are defined for a specific app and are not accessible from other apps.

Lookup options

Lookup (input, output) value pairs, such as code and description, can be defined in multiple ways

  1. Comma-delimited text file (.csv),
  2. Search results saved as lookup table,
  3. External script or command,
  4. Splunk DB Connect application,
  5. Geospatial lookups,
  6. KV Store collection.

Create a lookup data .csv file

Save the lookup values in a “.csv” file on your workstation, with comma separated input and output values:

code,description
1,Success
2,Failure
3,Error …

To import a lookup table

Upload the data to the Splunk server

  1. In the Settings menu select Lookups,

  2. In the Lookup table files row click Add new,
  3. Select the Destination app where the lookup table will be available,
  4. Browse to the data file on your workstation,
  5. Enter the Destination filename for the uploaded file on the Splunk server,
  6. Click Save to upload the file to the Splunk server.

Create the lookup definition on the Splunk server

  1. In the Settings menu select Lookups again,
  2. Click Lookup definitions,
  3. Make sure the correct App context is selected in the drop-down, and click New,
  4. Make sure the correct Destination app and Lookup file are selected. Enter a name for the lookup definition, and keep File-based selected,
  5. Click Save.

Verify the imported lookup table

  1. Click the Splunk icon in the upper left corner to return to the home page,
  2. Click Search & Reporting,
  3. In the New Search field enter the following command with the “Name” you have entered on the Lookup definitions page to see the table of lookup values.
    | inputlookup MY_LOOKUP_NAME

Using a lookup

Pipe the data into the lookup command to convert codes to descriptions

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName

Pipe the result forward to the stats command for further processing

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName | stats count by ProductName

Automatic lookup definition

If you want the lookup to automatically appear in reports, create an automatic lookup definition.

  1. In the Settings menu select Lookups,
  2. Click Automatic lookups,

    1. Select the App context, and click New,
    2. Make sure the correct Destination app is selected where the lookup will be accessible,
    3. Create a name,
    4. Select the lookup table from the dropdown,
    5. In the Apply to section select the data type to use the lookup table for,
    6. In the Lookup input fields section enter the name of the code column in the lookup table and the code field name in the report.
    7. In the Lookup output fields section specify the display values. You can specify multiple fields using the Add another field link.
    8. If you want to overwrite existing field values, check the Overwrite field values checkbox.
    9. Click Save to save the lookup.
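
Behind the scenes, Splunk stores an automatic lookup as a LOOKUP- setting in props.conf; a minimal sketch of the equivalent configuration, assuming a lookup definition named products_lookup applied to the access_combined sourcetype (all names are illustrative):

[access_combined]
LOOKUP-products = products_lookup productId AS productId OUTPUT product_name AS ProductName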

 

The Splunk Search Language (SPL)

 

Search Terms: see Searching in Splunk

Commands: tell Splunk what we want to do with the search result

  • Charts
  • Computing statistics
  • Formatting

Functions: explain how we want to chart, compute and evaluate the results

Arguments: variables we apply to the functions

Clauses: grouping and definition of results

Separator

Use pipes (|) to separate the components of the search language. The result of the component on the left is passed to the component on the right; after the first pipe, no more data is read from the index.

sourcetype=access_combined | top age | fields name

Editor features

  • Color coding
    • orange: Boolean operators and command modifiers
    • blue: commands
    • green: command arguments
    • purple: functions
  • If the cursor is behind a parenthesis, the matching parenthesis is highlighted
  • Hotkeys
    • Move each pipe to a new line: ⌘-\ (Mac) , ctrl-\ (Windows)

Commands

fields

Include and exclude fields from the search result. Separate the fields with space or comma.

  • Include fields. Inclusion happens before field extraction and can improve performance.
sourcetype=access_combined | fields status, clientip
  • Exclude fields (use a minus sign after the fields keyword). Exclusion only affects the displayed result and provides no performance benefit.
sourcetype=access_combined | fields - status, clientip


table

Returns the data in a tabulated format. Separate the fields with a comma.

  • Field names are the table column headers.
sourcetype=access_combined | table status, clientip


rename

Renames table fields for display. Separate the renamed fields with a comma.

  • Wrap the name in quotes if the name contains a space,
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address",
status as "Status"
  • In subsequent components, we need to use the new name of the field, because that is passed forward by the pipe separator.
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
| fields - "IP Address"


dedup

Removes duplicate events that share common values. Separate the fields with a space.

sourcetype=access_combined
| dedup first_name last_name 
| table first_name last_name


sort

Sorts the results in ascending or descending order.

  • Ascending order. Ascending is the default; a plus sign (+) also forces an ascending sort.
sourcetype=access_combined
| table first_name last_name
| sort first_name last_name
  • Descending order
    • If there is a space between the minus sign and the field name, the descending order applies to all specified fields:
      sourcetype=access_combined
      | table first_name last_name
      | sort - age wage
    • If there is no space between the minus sign and the field name, the descending order only applies to that field:
      sourcetype=access_combined
      | table first_name last_name
      | sort -age wage

limit argument

To limit the number of events returned, use the limit argument.

sourcetype=access_combined
| table first_name last_name
| sort -age wage limit=10


top

Finds the most common values of the given fields in the result set. Used to render the result in graphs.

sourcetype=vendor_sales
| top Vendor

Automatically returns the data in tabular form, adds the count and percent columns, and limits the results to 10 rows.

limit clause

  • Set the desired number of results.
sourcetype=vendor_sales
| top Vendor limit=20
  • To get all results, use limit=0
sourcetype=vendor_sales
| top Vendor limit=0
  • You can add more fields to the list separated by space or comma.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID, file
  • Change the titles of the count and percent columns.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file countfield="Product count" percentfield="Product percent"
  • Control the visibility of the count and percent fields.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file showcount=false showperc=true

  • Add count and percent values for the results that fall outside the limit (displayed in an OTHER row).

index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file useother=true

  • Specify the display value of the OTHER row:
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file otherstr="Total count"

by clause

Top three products sold by each vendor

sourcetype=vendor_sales
| top product_name by Vendor limit=3


rare

Shows the least common values of the field set.

Has the same options as the top command.



stats

Produces statistics of the search results.

Stats functions

count

  • The number of events matching the search criteria.
index=main sourcetype=access_combined_wcookie 
| stats count

  • To rename the “count” header use “as”
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files"
  • Use “by” to group the result
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file

  • Add more fields with comma
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file, productId

  • Add a field to the count function to count events where the field is present
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files"

  • Compare the count to the total number of events
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files", count as "Total events"


distinct_count or dc

Count of unique values for a field.

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files"

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files" by productId


sum

Returns the sum of the numerical values.

index=main sourcetype=access_combined_wcookie 
| stats sum(bytes)


  • Count the events and sum the value
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes)

  • Group the sum and count values by a field. Both functions must be in the same stats command so they operate on the same set of data.
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes) by productId


avg

Returns the average of numerical values.

index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes"

  • Group the values by a field
index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes" by productId

  • Add count to the table
index=main sourcetype=access_combined_wcookie 
| stats count as "Number of files" avg(bytes) as "Average bytes" by productId


list

Lists all values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files"

  • Group the list of values by another field; repeated values are included.
index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files" by productId


values

Works like the list function, but returns the unique values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files"

  • Group the unique values by another field
index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files" by productId



 

Create a server image with Packer

Packer is a free, open source application from Hashicorp. It can generate a server image based on an existing one, and configure it for your special needs. You can use the generated image when you launch a server instance in the cloud or on your local workstation.

Install Packer
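
Packer is distributed as a single binary; a minimal sketch of the install steps (the URL is Packer’s download page):

  1. Navigate to https://www.packer.io/downloads and download the package for the operating system of your workstation,
  2. Unzip the package and move the packer binary to a directory on your PATH,
  3. Verify the installation in a terminal:
    packer version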

Generate the server image with Packer

  1. Open a Bash window,
  2. Navigate to the folder of the Packer JSON script,
  3. Execute the following command. Get the AWS access key and secret key from the ~/.aws/credentials file on your Macintosh or Linux workstation. On Windows, the file is at C:\Users\YOUR_USER_NAME\.aws\credentials.
    packer build -var 'aws_access_key=MY_ACCESS_KEY' -var 'aws_secret_key=MY_SECRET_KEY' ./MY_PACKER_SCRIPT.json
  4. The command window will display the ID of the generated image, or you can find it by name in the EC2 section of the AWS console under AMIs.
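
For reference, a minimal sketch of what MY_PACKER_SCRIPT.json could look like; the builder settings below are illustrative placeholders, not values from a real template:

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "source_ami": "SOURCE_AMI_ID",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "my-server-image {{timestamp}}"
    }
  ]
}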

Share the generated server image with other cloud accounts

If you work with multiple cloud accounts, you need to share the generated server image with the other accounts.

AWS

  1. Log into the AWS account you have used to generate the server image,
  2. On the left side of the EC2 section select AMIs and find the new image by name or ID,
  3. On the Permissions tab click the Edit button,
  4. Make sure the Private radio button is selected if you don’t want to share the image publicly,
  5. Enter the account number of the account you want to share the image with,
  6. Check the Add “create volume” permissions… checkbox,
  7. Click the Add Permission button,
  8. When you have added all accounts to share with, click the Save button.
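
The same sharing can also be scripted with the AWS CLI; a sketch, with placeholder image, snapshot, and account IDs:

aws ec2 modify-image-attribute --image-id MY_AMI_ID \
  --launch-permission "Add=[{UserId=TARGET_ACCOUNT_ID}]"

aws ec2 modify-snapshot-attribute --snapshot-id MY_SNAPSHOT_ID \
  --attribute createVolumePermission --operation-type add \
  --user-ids TARGET_ACCOUNT_ID

The second command grants the “create volume” permission on the snapshot behind the AMI.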

 

Chef Attributes

Chef attributes are global variables that are available for every cookbook on the node. There are multiple formats to declare and use an attribute. For important notes on the syntax, please see Undefined method or attribute error in a Chef recipe.

To override the value of an attribute that is defined in another cookbook, use the following syntax

node.override['ATTRIBUTE_NAME'] = 'NEW_VALUE'

During compilation, this line will replace the default value of the attribute with the NEW_VALUE.
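
For context, a minimal sketch of declaring a default attribute in a cookbook and reading it in a recipe; the cookbook and attribute names are made up for illustration:

# attributes/default.rb
default['my_cookbook']['log_dir'] = '/var/log/my_app'

# recipes/default.rb
directory node['my_cookbook']['log_dir'] do
  owner 'root'
  mode '0755'
end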

 

Prevent the auto-termination of stranded instances in RightScale

When you launch an instance with RightScale Self Service and the Chef cookbook execution fails, the instance goes into “stranded” mode. By default, RightScale Self Service terminates stranded instances, so there is no way to remote into them and read the log files to find the cause of the problem.

To keep stranded instances running in RightScale

  1. Find the booting instance in Cloud Management and click the instance name,
  2. Click the lock icon at the top of the screen.

RightScale Self Service cannot terminate locked instances. To terminate the instance after troubleshooting, unlock it and terminate it by hand.

Getting started with InSpec

InSpec is an open-source testing framework that verifies your infrastructure satisfies the design requirements.

In this article, we will learn to install and use InSpec with Chef.

Install InSpec

  1. Navigate to https://downloads.chef.io/inspec, and download the installer for the operating system of your workstation.
  2. Execute the downloaded installer.

Start to use InSpec

To use InSpec as the default integration testing tool in Chef Test Kitchen

  1. Open the .kitchen.yml file of the cookbook,
  2. Delete the following lines from the platform section if they exist:
    busser:
      sudo: true
  3. Add the following lines to the file:
    verifier:
      name: inspec
  4. Add this to every suite, so InSpec searches for the test files in the test/recipes directory. Otherwise, the test file needs to be in the test/recipes/SUITE_NAME directory:
     verifier:
       inspec_tests:
         - test/recipes
  5. Create a folder structure in the cookbook folder for the InSpec integration tests,
    test
    |--recipes
  6. Create an integration test for your recipe. Create a new file in the test/recipes folder and name it RECIPE_NAME_test.rb. For the default recipe call it default_test.rb,
  7. The following is a simple example of an InSpec integration test:
    # encoding: utf-8

    # InSpec test for recipe my_cookbook::default

    # The InSpec reference, with examples and extensive documentation, can be
    # found at https://docs.chef.io/inspec_reference.html

    unless os.windows?
      describe user('root') do
        it { should exist }
        skip 'This is an example test, replace with your own test.'
      end
    end

    describe port(80) do
      it { should_not be_listening }
      skip 'This is an example test, replace with your own test.'
    end

As you can see, the syntax of InSpec is (intentionally) very similar to that of ServerSpec, which it replaces. It is very easy to convert existing ServerSpec integration tests to InSpec compliance tests.
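
To run the tests, the usual Test Kitchen commands apply:

kitchen converge   # apply the cookbook to the test instance
kitchen verify     # run the InSpec tests against the instance
kitchen test       # destroy, converge, verify, and destroy in one step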

Differences between ServerSpec and InSpec

ServerSpec “process”

changed from

describe process('PROCESS_NAME') do
  it { should be_running }
end

to

describe processes('PROCESS_NAME') do
  its('states') { should eq ['R<'] }
end

Undefined method or attribute error in a Chef recipe

There are multiple reasons Chef can display the following error message:

NoMethodError
 -------------
 Undefined method or attribute `...' on `node'

 

There are multiple ways to reference an attribute in a Chef recipe:

node['ATTRIBUTE_NAME']  (the recommended way)
node["ATTRIBUTE_NAME"]
node[:ATTRIBUTE_NAME]   (use it only if the single or double quotes (' or ") would cause a problem in the expression)
node.ATTRIBUTE_NAME

To check if the attribute value is nil, use the following format:

if ( !node['ATTRIBUTE_NAME'].nil? )
   ...
end

If you use the node.ATTRIBUTE_NAME format and the value is nil, Chef throws the above error message.
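
A minimal sketch of the failing and the safe pattern; the attribute name is made up:

# Raises the error above when the attribute is not defined:
puts node.my_attribute

# Bracket access returns nil instead of raising:
puts node['my_attribute'] unless node['my_attribute'].nil?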

How to restore a Microsoft SQL database from backup with the Microsoft SQL Server Management Studio (SSMS) user interface

There are multiple reasons to restore a database from backup: one is disaster recovery, another is to bring the production database to the developer machine. In both cases, the computer already has an old version of the database. The new versions of Microsoft SQL Server Management Studio (SSMS) no longer have the “Close connections” checkbox, so we have to make sure all connections are closed (see the T-SQL sketch after the steps below), and we specify a unique database file name to restore the database to.

  1. On the developer machine close all instances of Visual Studio to close the open database connections,
  2. Close all Microsoft SQL Server Management Studio tabs that are connected to the database,
  3. Move the backup file to the C:\Temp folder. Microsoft SQL Server Management Studio cannot see it in your Downloads folder.
  4. Make sure your user account does not automatically connect to the database you want to restore:
    1. In Security, Logins right click your username and select Properties,
    2. Set the Default database to master
  5.  Disconnect from the database server:
    1.  Right-click the database server and select Disconnect,

  6. Connect to the database engine, but do not open the database you want to restore:
    1. Execute:
      USE master
  7. Import the database from production with the user interface:
    1. Right-click the database and select Tasks, Restore, Files and Filegroups
    2. Select the From device radio button
    3. Click the button to select the backup file
    4. In the Select backup devices window click the Add button to select the file
    5. Navigate to the C:\Temp folder and select the database backup file
    6. Click the OK button
    7. In the Select the backup sets section select the Restore check box in the file row, and click Options on the left side,
    8. On the Options tab select the Overwrite existing database (WITH REPLACE) check box, and click the button to set the new unique database file name,
    9. Click OK to ignore the error message on the Locate Database Files – … message box
    10. Navigate to the current database file, click it, and append the date of the backup to the end of the file name to specify a unique data file name, and click OK,
    11. On the Restore Files and FileGroups window click the OK button to restore the database
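
If open connections still block the restore, a sketch of forcing them closed from a query window before the restore (my_database is a placeholder); switch back to multi-user mode afterwards:

ALTER DATABASE my_database SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform the restore ...
ALTER DATABASE my_database SET MULTI_USER;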

Update the database to make it work at the new location

  1. On a development machine set the databases to Simple recovery mode, so the log files do not grow out of bounds.
    ALTER DATABASE my_database SET RECOVERY SIMPLE;
  2. Delete and recreate the users in the restored database, because those have different internal IDs if the database was migrated from a different server
    USE my_database;
    GO
    EXEC sp_dropuser 'my_user'
    GO
    DROP SCHEMA my_user
    GO

    DECLARE @user_name varchar(50)
    SET @user_name = 'my_user'
    USE my_database;
    EXEC sp_grantdbaccess @user_name;
    EXEC sp_addrolemember @rolename = 'db_datareader', @membername = @user_name;
    EXEC sp_addrolemember @rolename = 'db_datawriter', @membername = @user_name;
    GO