Cannot restart the Atlassian Confluence service on Windows

When the Atlassian Confluence wiki is installed on a Windows server, it frequently becomes unavailable. Sometimes it is possible to restart the Atlassian Confluence Windows service, but most of the time the Stop phase times out with:

Windows could not stop the Atlassian Confluence service on Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion.

To make Atlassian Confluence work again

  1. Open Task Manager,
  2. End the tomcat…exe process,
  3. Start the Atlassian Confluence Windows service.
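The same recovery can also be scripted from an elevated Command Prompt. This is only a sketch: the service name "Confluence" and the Tomcat executable name are assumptions, check the actual names on your server first.

REM Check the current state of the service (the service name is an assumption)
sc query "Confluence"
REM Force-end the hung Tomcat process; the executable name varies by Confluence version
taskkill /F /IM tomcat9.exe
REM Start the Atlassian Confluence Windows service again
net start "Confluence"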

Splunk lookups

Lookups add human-readable information to search results, so users do not have to interpret the returned codes in reports.

Lookups are defined for a specific app and, by default, are not accessible from other apps.

Lookup options

Lookup (input, output) value pairs, such as code and description, can be defined in multiple ways

  1. Comma delimited text file (csv),
  2. Search results saved as lookup table,
  3. External script or command,
  4. Splunk DB Connect application,
  5. Geospatial lookups,
  6. KV Store collection.

Create a lookup data .csv file

Save the lookup values in a “.csv” file on your workstation, with comma-separated input and output values:

code,description
1,Success
2,Failure
3,Error …

To import a lookup table

Upload the data to the Splunk server

  1. In the Settings menu select Lookups,
  2. In the Lookup table files row click Add new,
  3. Select the Destination app where the lookup table will be available,
  4. Browse to the data file on your workstation,
  5. Enter the Destination filename for the uploaded file on the Splunk server,
  6. Click Save to upload the file to the Splunk server.

Define the lookup on the Splunk server

  1. In the Settings menu select Lookups again,
  2. Click Lookup definitions,
  3. Make sure the correct App context is selected in the drop-down, and click New,
  4. Make sure the correct Destination app and Lookup file are selected. Enter a name for the lookup definition, and keep File-based selected,
  5. Click Save.
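For reference, the file-based definition created above corresponds roughly to a transforms.conf stanza like the following (the stanza name and file name are placeholders):

[products_lookup]
filename = products_lookup.csv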

Verify the imported lookup table

  1. Click the Splunk icon in the upper left corner to return to the home page,
  2. Click Search & Reporting,
  3. In the New Search field enter the following command with the “Name” you have entered on the Lookup definitions page to see the table of lookup values.
    | inputlookup MY_LOOKUP_NAME

Using the lookup

Pipe the data into the lookup command to convert codes to descriptions

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName

Pipe the result forward to the stats command for further processing

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName | stats count by ProductName

Automatic lookup definition

If you want the lookup to be applied automatically in reports, create an automatic lookup definition.

  1. In the Settings menu select Lookups,
  2. Click Automatic lookups,

    1. Select the App context, and click New,
    2. Make sure the correct Destination app is selected where the lookup will be accessible,
    3. Create a name,
    4. Select the lookup table from the dropdown,
    5. In the Apply to section select the data type to use the lookup table for,
    6. In the Lookup input fields section enter the name of the code column in the lookup table and the code field name in the report.
    7. In the Lookup output fields section specify the display values. You can specify multiple fields using the Add another field link.
    8. If you want to overwrite existing field values, check the Overwrite field values checkbox.
    9. Click Save to save the lookup.
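The same automatic lookup can also be expressed as a props.conf entry. A sketch using the field names from the lookup example above; the sourcetype, lookup name, and field names are illustrative:

[access_combined]
LOOKUP-product = products_lookup productId AS productId OUTPUT product_name AS ProductName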

 

The Splunk Search Language (SPL)

 

Search Terms: see Searching in Splunk

Commands: tell Splunk what we want to do with the search result

  • Charts
  • Computing statistics
  • Formatting

Functions: explain how we want to chart, compute and evaluate the results

Arguments: variables we apply to the functions

Clauses: grouping and definition of results

Separator

Use pipes (|) to separate the components of the search language. The result of the component on the left is passed to the component on the right; after the first pipe no more raw data is read from the index.

sourcetype=access_combined | top age | fields name

Editor features

  • Color coding
    • orange: Boolean operators and command modifiers
    • blue: commands
    • green: command arguments
    • purple: functions
  • If the cursor is behind a parenthesis, the matching parenthesis is highlighted
  • Hotkeys
    • Move each pipe to a new line: ⌘-\ (Mac) , ctrl-\ (Windows)

Commands

fields

Includes or excludes fields from the search results. Separate the fields with a space or comma.

  • Include fields. Happens before field extraction, can improve performance.
sourcetype=access_combined | fields status, clientip
  • Exclude fields (use negative sign after the word fields). It only affects the displayed result, no benefit to performance.
sourcetype=access_combined | fields - status, clientip


table

Retains only the specified fields and displays the data in a tabulated format. Separate the fields with a comma.

  • Field names are the table column headers.
sourcetype=access_combined | table status, clientip


rename

Renames table fields for display. Use a space to separate the fields.

  • Wrap the name in quotes if the name contains space,
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
status as "Status"
  • In subsequent components, we need to use the new name of the field, because that is passed forward by the pipe separator.
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
| fields - "IP Address"


dedup

Removes duplicate events that share the same values in the specified fields. Separate the fields with a space.

sourcetype=access_combined
| dedup first_name last_name 
| table first_name last_name


sort

Sorts the results in ascending or descending order.

  • Ascending order. The default order is ascending; the plus sign (+) also causes an ascending sort.
sourcetype=access_combined
| table first_name last_name
| sort first_name last_name
  • Descending order
    • If there is a space between the minus sign and the field name, the descending order applies to all specified fields:
      sourcetype=access_combined
      | table first_name last_name
      | sort - age wage
    • If there is no space between the minus sign and the field name, the descending order only applies to that field:
      sourcetype=access_combined
      | table first_name last_name
      | sort -age wage

limit argument

To limit the number of events returned, use the limit argument.

sourcetype=access_combined
| table first_name last_name
| sort -age wage limit=10


top

Finds the most common values of the given fields in the result set. Useful for rendering the result in graphs.

sourcetype=vendor_sales
| top Vendor

Automatically provides the data in tabular form, displays the count and percent columns, and limits the results to 10 rows.

limit clause

  • Set the desired number of results.
sourcetype=vendor_sales
| top Vendor limit=20
  • To get all results, use limit=0
sourcetype=vendor_sales
| top Vendor limit=0
  • You can add more fields to the list separated by space or comma.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID, file
  • Change the title of the count and percentage columns.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file countfield="Product count" percentfield="Product percent"
  • Control the visibility of the count and percent fields.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file showcount=true/false showperc=true/false

  • Add a row with the count and percent numbers for the results not within the limit (the OTHER row).

index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file useother=true/false

  • Specify the display value of the OTHER row:
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file otherstr="Total count"

by clause

Top three products sold by each vendor

sourcetype=vendor_sales
| top product_name by Vendor limit=3


rare

Shows the least common values of the given fields.

Has the same options as the top command.
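For example, to show the three least common products per vendor (mirroring the top example above):

sourcetype=vendor_sales
| rare product_name by Vendor limit=3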



stats

Produces statistics of the search results.

Stats functions

count

  • The number of events matching the search criteria.
index=main sourcetype=access_combined_wcookie 
| stats count

  • To rename the “count” header use “as”
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files"
  • Use “by” to group the result
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file

  • Add more fields with comma
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file, productId

  • Add a field to the count function to count events where the field is present
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files"

  • Compare the count to the total number of events
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files", count as "Total events"


distinct_count or dc

Count of unique values for a field.

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files"

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files" by productId


sum

Returns the sum of the numerical values.

index=main sourcetype=access_combined_wcookie 
| stats sum(bytes)


  • Count the events and sum the value
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes)

  • Group the sum and count values by a field. These must be within the same stats command (the same pipe segment) to work on the same set of data.
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes) by productId


avg

Returns the average of numerical values.

index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes"

  • Group the values by a field
index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes" by productId

  • Add count to the table
index=main sourcetype=access_combined_wcookie 
| stats count as "Number of files" avg(bytes) as "Average bytes" by productId


list

Lists all values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files"

  • Group the list of values by another field; all repeated values are listed.
index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files" by productId


values

Works like the list function, but returns the unique values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files"

  • Group the unique values by another field
index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files" by productId
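Several stats functions can be combined in a single stats command. An illustrative example using the fields from the examples above:

index=main sourcetype=access_combined_wcookie 
| stats count as "Total events" dc(file) as "Unique files" avg(bytes) as "Average bytes" by productId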



 

Berks update fails with ‘Missing artifacts’ error message

When you add cookbooks as dependencies with the “depends” statement in the metadata.rb file of your Chef cookbook, you also have to specify the location of those cookbooks in the Berksfile so that you can test your cookbook in Chef Test Kitchen.

For all the cookbooks that are available on the Chef Supermarket, one line

source "https://supermarket.chef.io"

is sufficient to specify their location. If a cookbook is only available on GitHub, specify the location with

cookbook 'COOKBOOK_NAME', git: 'git@github.com:PATH_TO_COOKBOOK.git'

If the cookbook is available on the local drive of the workstation, specify the path with

cookbook 'COOKBOOK_NAME', path: '../COOKBOOK_FOLDER_NAME'

Use the above relative path if all of your cookbooks are under the same cookbooks directory.
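Putting the three location types together, a minimal Berksfile might look like the following sketch; the cookbook names and paths are placeholders, and the metadata line tells Berkshelf to read the dependency list from metadata.rb:

source "https://supermarket.chef.io"

metadata

cookbook 'my_github_cookbook', git: 'git@github.com:my_org/my_github_cookbook.git'
cookbook 'my_local_cookbook', path: '../my_local_cookbook'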

If a reference to a Chef cookbook is missing from the Berksfile, the following message appears when you execute berks update.

Unable to satisfy constraints on package …, which does not exist, due to solution constraint (… = …). Solution constraints that may result in a constraint on …: [(… = …) -> (… >= …)]
Missing artifacts: ...
Demand that cannot be met: (… = …)
Unable to find a solution for demands: … (…)

Searching in Splunk

When you are building the search criteria, click the field and value in the search result to add it to the search.

 

Wildcard character

  • * (asterisk) matches any number of characters

Exact phrases

  • Wrap the phrase in " " (double quotes)

Search for quotes

  • \" (use a backslash to escape quotes if you want to search for quotes)

The Boolean keywords in the search bar are case sensitive (they must be uppercase)!

Boolean keywords are

  • AND (if omitted, it is implied)
  • OR
  • NOT

Order of boolean evaluation

  1. Inside parentheses ()
  2. NOT
  3. OR
  4. AND

Operators

  • =
  • !=
  • >
  • >=
  • <
  • <=

Examples

  • sourcetype=access_combined
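
A couple of combined examples; the field names and values are illustrative, not from a specific dataset:

  • sourcetype=access_combined status!=200 (action=purchase OR action=addtocart)
  • sourcetype=access_combined "connection reset by peer" NOT clientip=10.*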

Best search practices

Search in a time range

  • s  Seconds
  • m   Minutes
  • h  Hours
  • d  Days
  • w  Weeks
  • mon  Month
  • y  Year
  • @  Round down to the nearest unit

Examples

  • -30s  In the last 30 seconds
  • -30m@h  30 minutes ago, rounded down to the hour. If the search is run at 5:42, events from 5:00 are returned
  • earliest=-2h latest=-1h  From two hours ago to one hour ago
  • earliest=05/12/2017:12:00:00  From an absolute date and time

Indexes

If the data is organized into multiple indexes, specify the index where the data is stored

Examples

  • index=main

Splunk installation

Install Splunk

  1. Navigate to the Splunk website at splunk.com,
  2. In the upper right corner select the Free Splunk button,
  3. If you don’t yet have a Splunk account, register to create one, otherwise log in,
  4. Select the Free Download in the Splunk Enterprise frame,
  5. Select the tab with the operating system of your machine.

Linux

  1. The simplest way to install Splunk on Linux is with wget on the command line. Click Download via Command Line (wget) in the upper right corner of the Useful Tools box.
  2. Copy the command to your clipboard from the popup window,
  3. Execute the wget command in a terminal window to download the tar archive,
  4. It is recommended to install Splunk in the /opt directory, untar the archive there.
    sudo tar xvzf splunk.tgz -C /opt

Windows

  1. Download the .msi installer for your operating system (32 bit or 64 bit),
  2. Run the installer, follow the prompts, and accept the license agreement,
  3. Use the Local System account to run Splunk under.

Macintosh OSX

  1. Select the .dmg installer for simpler installation,
  2. Follow the prompts to install the application,
  3. At the end of the installation select Start and Show Splunk to start the application and view the user interface in a browser.

 

To start, stop, and administer Splunk

Linux

  1. In a terminal window navigate to the Splunk bin directory
    cd /opt/splunk/bin
  2. To Start Splunk and accept the license agreement during the first start
    ./splunk start --accept-license
  3. The terminal window displays the Splunk web interface address in the "The Splunk web interface is at …" line. Open a browser to navigate to the address.
  4. To start, stop, and restart the instance, and get help execute
    ./splunk start
    ./splunk stop
    ./splunk restart
    ./splunk help

Macintosh OSX

  1. In a terminal window navigate to the Splunk bin directory
    cd /Applications/Splunk/bin
  2. To start, stop, and restart the instance, and get help execute
    ./splunk start
    ./splunk stop
    ./splunk restart
    ./splunk help

Logging into Splunk the first time

The initial credentials after installation are
Username: admin
Password: changeme

 

Get AWS SSL Certificate resource ids from existing Load Balancers

To launch an Elastic Load Balancer (ELB) with an existing SSL certificate using Terraform, you need to specify the AWS certificate resource ID. If you have already uploaded the certificate and attached it to an existing load balancer, the following AWS CLI command will display it in the command window. MY_PROFILE is the name of the profile in the square brackets [] in the ~/.aws/credentials file.

aws elb describe-load-balancers --region MY_AWS_REGION --profile MY_PROFILE | grep SSL

To get all information on the load balancers, just omit the grep command:

aws elb describe-load-balancers --region MY_AWS_REGION --profile MY_PROFILE
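Alternatively, the --query option of the AWS CLI can extract just the certificate ARNs from the JSON output. A sketch, assuming classic ELB listeners and the default JSON output format:

aws elb describe-load-balancers --region MY_AWS_REGION --profile MY_PROFILE --query 'LoadBalancerDescriptions[].ListenerDescriptions[].Listener.SSLCertificateId'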

Create a server image with Packer

Packer is a free, open-source application from HashiCorp. It can generate a server image based on an existing one and configure it for your specific needs. You can use the generated image when you launch a server instance in the cloud or on your local workstation.

Install Packer

Generate the server image with Packer

  1. Open a Bash window,
  2. Navigate to the folder of the Packer JSON script,
  3. Execute the following command. Get the AWS access key and secret key from the ~/.aws/credentials file on your Macintosh or Linux workstation. On Windows, the file is at C:\Users\YOUR_USER_NAME\.aws\credentials.
    packer build -var 'aws_access_key=MY_ACCESS_KEY' -var 'aws_secret_key=MY_SECRET_KEY' ./MY_PACKER_SCRIPT.json
  4. The command window will display the ID of the generated image, or you can find it by name in the EC2 section of the AWS console under AMIs.
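For reference, a minimal Packer template sketch for building an AWS AMI. The region, source AMI, instance type, and names below are placeholders, adjust them to your environment:

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "my-custom-image-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo yum -y update"]
  }]
}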

Share the generated server image with other cloud accounts

If you work with multiple cloud accounts, you need to share the generated server image with the other accounts.

AWS

  1. Log into the AWS account you have used to generate the server image,
  2. On the left side of the EC2 section select AMIs and find the new image by name or ID,
  3. On the Permissions tab click the Edit button,
  4. Make sure the Private radio button is selected if you don’t want to share the image publicly,
  5. Enter the account number of the account you want to share the image with,
  6. Check the Add “create volume” permissions… checkbox,
  7. Click the Add Permission button,
  8. When you have added all accounts to share with, click the Save button.
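The same sharing can also be done with the AWS CLI. A sketch; the image ID and account number are placeholders:

aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 --launch-permission "Add=[{UserId=111122223333}]" --region MY_AWS_REGION --profILE MY_PROFILE

Note that this grants the launch permission only; the "create volume" permission on the underlying snapshot is granted separately with the modify-snapshot-attribute command.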

 

Convert PEM files to PPK to use them in PuTTY

When you create a key pair in AWS, you can download it one time in PEM format. To use it in PuTTY, the free SSH and Telnet client, you have to convert it to PPK format.

To install PuTTY, see the Terminal Emulator section in Recommended utilities for your workstation

To convert a PEM file to PPK

  1. Open a terminal window in the folder of the PEM file
  2. Execute the following
    puttygen MYKEY.pem -o MYKEY.ppk