GitKraken installation and configuration

GitKraken is a graphical user interface for managing Git repositories.

Installation

  1. Download the GitKraken installer from https://www.gitkraken.com/download

Configuration

  1. Start the GitKraken application
  2. Log in with your GitHub account, or create a new account
  3. Connect to a Git repository
  4. Click the Open a Repository button
  5. Select a repository folder and click the Open button

3D Printer GCODE instructions

GCODE is a standard file format used to control 3D printers. The 3D object is usually exported from the CAD program as an STL file, which fully describes the end product. 3D printers build the physical objects layer by layer, so a slicer program (such as Cura) has to cut the object into thin layers and generate the GCODE instructions.

Turn off the fan after 5 minutes

When the printer has completed the job, the fan of the Monoprice Select Mini printer stays on, making noise and wearing out the bearing. To turn the fan off after 5 minutes, add this code to the end of every GCODE file.

G4 P300000 ;wait 5 minutes before turning off the fan
M106 S0    ;turn off the fan (set the fan speed to zero)

To automatically append the instructions to every GCODE file, in Cura add the lines to the end.gcode section:

  1. In the Cura application select the Start/End-GCode tab,
  2. Select end.gcode,
  3. Enter the lines in the bottom window.

 

Terraform Enterprise Administration

Create a new organization

  1. In your web browser navigate to https://atlas.hashicorp.com/help/organizations,
  2. On the left menu bar under Organizations click Create,
  3. In the middle of the screen click the new organization page link,
  4. Enter the email address and user name for the organization owner and click the Create organization button.

Using Git

Frequently used Git commands

Git runs entirely on your workstation, and a copy of the entire repository is stored on your local hard drive. GitHub, Bitbucket, and other providers only give you storage space to share your repository with others, plus a web user interface to manage it. You can use any provider’s client application on your workstation to manage any of your Git repositories. Many developers use SourceTree, the Git client written by Atlassian (the owner of Bitbucket), to manage repositories on their workstations that are hosted at GitHub.

Here are the most frequently used Git commands.

Create the local repository

Initialize a new Git repository in the current directory. This command creates the .git sub-directory to store your repository and its configuration file.

git init

Display the local repository status

Display the list of added, deleted and modified files in the local repository

git status

Display your changes in the files

Display the changes in files since the last git add

git diff

Display the changes in the local staging area after you have executed git add

git diff --staged

Stage your changes in the local repository

Add your changes to the local staging area

git add .

Save the changes to the local repository

Commit your changes from the local staging area to the local repository with a message

git commit -m "My message"

Edit the last commit message

You can edit the last commit message even after the push, but amending rewrites the commit, so your local branch will diverge from the remote.

git commit --amend
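For example, to replace the last commit message in a single step (the message text is just a placeholder):

git commit --amend -m "Corrected message"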

When you execute git status you get the message

Your branch and 'origin/master' have diverged,
and have 1 and 1 different commits each, respectively.
(use "git pull" to merge the remote branch into yours)
nothing to commit, working tree clean

To synchronize your local repository with the remote server, execute

git pull

The default editor (usually vim) opens with the merge commit message. You can keep the default message or type your own explanation. To save the message and close the editor

  1. Press the ESC key on your keyboard
  2. Type :wq (including the colon at the beginning) and press Enter
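The ESC / :wq sequence assumes the default editor is vim. If you prefer a different editor for commit and merge messages, Git can be configured to use it (nano here is just an example):

git config --global core.editor "nano"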

Send your changes to the remote repository

Push the changes to the remote repository at GitHub, Bitbucket, or others

git push

Get the latest from the remote repository

Pull the latest changes from the remote repository

git pull

Advanced topics

Push an existing repository from the command line

git remote add origin https://github.com/ORGANIZATION/REPOSITORY_NAME.git
git push -u origin master

Clone a private Git repository from the command line

git clone https://my_username:my_password_or_token@github.com/my_user_name/my_repo_name.git

or use an SSH key, see https://help.github.com/en/articles/connecting-to-github-with-ssh
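For reference, the SSH form of the same clone looks like this (using the same placeholder user and repository names):

git clone git@github.com:my_user_name/my_repo_name.git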

Branching
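A minimal sketch of the everyday branching commands (MY_FEATURE_BRANCH is a placeholder name):

git switch -c MY_FEATURE_BRANCH   # create a new branch and switch to it
git switch main                   # switch back to the main branch
git merge MY_FEATURE_BRANCH       # merge the feature branch into the current branch
git branch -d MY_FEATURE_BRANCH   # delete the branch after it has been merged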

Forks and Pull Requests
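A minimal sketch of the fork workflow, using placeholder user and repository names; the pull request itself is opened in the provider’s web interface:

git clone https://github.com/MY_USER/FORKED_REPOSITORY.git
cd FORKED_REPOSITORY
git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git
git fetch upstream                  # get the latest changes from the original repository
git merge upstream/main             # update your local main branch from the original repository
git switch -c MY_FEATURE_BRANCH     # do your work in a feature branch
git push -u origin MY_FEATURE_BRANCH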

Restoring and working with older versions of the code

Save the latest version in a branch and restore an old version into the Main branch

If your application is deployed from the Main branch, and you need to fix an older deployed version after newer changes have already been made to the Main branch, we will

  • create a feature branch for the new version
  • restore the old version into the Main branch

Follow the steps below:

  • Create a duplicate copy of the repository directory on your workstation to be able to restore the repository in case you accidentally overwrite a file.
  • Create a new branch for the new version and switch to it
    git switch -c MY_NEW_BRANCH
  • Push the new branch to GitHub
    git push --set-upstream origin MY_NEW_BRANCH
  • Switch back to the Main branch
    git checkout main
  • Restore the Main branch to an earlier version
    git reset --hard OLD_COMMIT_HASH
    Git displays the subject of the selected commit; check that it is the repository version you want to restore.
  • Make your changes to fix the application and commit them into the Main branch
    • To commit only the files already tracked by Git
      git commit -a
    • To commit selected files
      git add [files]
      git commit
    • To push the restored Main branch to GitHub and overwrite the remote history after the commit
      git push origin main -f
  • Deploy the fixed version from the Main branch,
  • Work in the feature branch until you are ready to deploy the new version:
    • Merge the feature branch to the Main branch
    • Deploy the new version from the Main branch

View an old version of the repository

To view the repository in the state of an old commit and then return to the current state, use the “checkout” command as described below.

Save the state of the working directory

Temporarily save the current state of the working directory including untracked (new) and ignored files

 git stash -a

List the commits of the repository

git log
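If the full output is too long, these standard git log options give a compact, graphical overview (optional):

git log --oneline --graph --all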

Search all commits of a file in a branch

First, select the branch to search in, if it is not the current branch

git checkout BRANCH_NAME
git log --all --full-history -- **/MY_FILE.*

View the old version of the repository

git checkout MY_OLD_COMMIT_SHA
To create a new branch for the restored version and switch to it
git switch -c <new-branch-name>
To switch back to the “main” branch to see the latest version of the repository
git switch main
To switch to the new branch for the restored version of the repository
git switch <new-branch-name>

Create a new branch and continue the work

If you realize that this is the last stable version of the application, create a new branch and develop your project from this point forward. This way you will not lose the changes you made after this commit, but you will be able to create new commits based on the working version and merge the new branch back into the “master” branch later.

git checkout -b MY_APP_WORKS_AGAIN_BRANCH

Restore the state of the working directory before the “stash” command

git stash pop

If you want to restore the repository to the state before the “git stash” command with “git stash pop”, and files with the same names have been created since the stash, Git prevents the accidental deletion of those files and displays the error message:

MyPath/File already exists, no checkout
Could not restore untracked files from stash

You also get this message when an ignored file has been created, like the “.DS_Store” file on Mac OS.

Remove ignored files

To remove the ignored files from the working tree, execute the following. Please note the upper case “X”.

Dry run to see what will be deleted:

git clean -n -f -X

Delete the files:

git clean -f -X

Remove ignored and untracked files

To remove the ignored files, and new (untracked) files and directories from the working tree, clean the repository with the following. Please note the lower case “x”.

Dry run to see what will be deleted:

git clean -n -f -d -x

Delete the files:

git clean -f -d -x

Rollback

If you have accidentally committed a change and want to roll it back, you can use the “reset” command. It is very dangerous, because it can rewrite history, remove commits, and delete files in your working directory, so you can lose your work. “--mixed” is the default option of the “reset” command, so if no option is specified, that is what will be executed.

If the change has NOT been pushed to the remote repository (GitHub)

Remove a file from the stage

The “add” command adds files to the “stage”. If you have “add”-ed multiple files and do not want to “commit” one of them together with the rest, remove a file from the “stage”, but keep it in your working directory.

git reset HEAD -- <file>
git reset HEAD -- <directoryName>

Remove the last commit

Move the history back before the last “commit” and all “add”s that are associated with it. You will not lose any changes in your working directory. Use this command if you realize you want to make more changes before the next commit.

git reset HEAD~1

Remove the last commit and lose all changes since that

Restore the files in the repository to the state of the prior “commit”. You will lose all changes you made since that. This command moves the HEAD back one commit, so it deletes the last commit from the history.

git reset --hard HEAD~1

Undo the rollback

If the reset was unnecessary, you can undo it for a limited time. Git’s garbage collector eventually removes orphaned commits (by default, unreachable commits are kept in the reflog for about 30 days), so you have a limited window to undo the rollback. Once the garbage collector has pruned the commit, the changes are lost forever.

To see the list of commit SHAs that the garbage collector has not yet deleted

git reflog

To undo the reset of a commit while it is still available

git checkout -b aNewBranchName shaYouDestroyed

If the changes have already been pushed to the remote repository

The “reset” command can cause serious problems for others working in the same repository.

To undo a commit

Use the “revert” command to correct mistakes. The “revert” command will create a new commit with the state you want without rewriting the history of the repository.

git revert <bad-commit-sha1-id>
git push origin
 

To restore the repo to the state of a previous commit

To erase history and restore the repository to the state of a previous commit, use the “reset” command. “git push .. -f” forces the push to erase the history on the server, and it also overwrites other users’ commits made after the specified one, so coordinate with them; they will also need to “reset” their local repositories.

git reset --hard <commit-id>
git push origin main -f

Remove a file from the entire history of the repository

If a file is too big to be uploaded to the remote repository, you may get an error message like this when you push the repository to the remote:

remote: Resolving deltas: 100%, done.
remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
remote: error: Trace: …
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File … is … MB; this exceeds GitHub's file size limit of 100.00 MB
To https://github.com/….git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://github.com/….git'

You may also want to remove every trace of a file from the history for security reasons.

git filter-branch command

This command rewrites the history of the repository and removes every trace of a file

git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch "MY_TOO_BIG_FILE_NAME"'
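Because this rewrites the history, the cleaned branches have to be force-pushed, and everyone else should re-clone the repository; a minimal sketch, assuming the remote is named origin:

git push origin --force --all
git push origin --force --tags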

git-filter-repo utility

An alternative to remove files is the Python utility git-filter-repo

Clone the git-filter-repo repository from https://github.com/newren/git-filter-repo.git

git clone https://github.com/newren/git-filter-repo.git

Copy the git-filter-repo script to a directory in your PATH
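For example, on Linux or macOS the copy could look like this, assuming /usr/local/bin is in your PATH:

sudo cp git-filter-repo/git-filter-repo /usr/local/bin/
sudo chmod +x /usr/local/bin/git-filter-repo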

Usage

Open a terminal window in the repository you want to clean

cd MY_REPOSITORY_TO_CLEAN

By default the git-filter-repo utility keeps only the selected paths, so to delete the selected files and keep the rest of the repository, use the --invert-paths option.

To remove files or directories from the repository
git filter-repo --path README.md --path guides/ --path tools/releases --invert-paths

To use wildcards include the --path-glob option
git filter-repo --path-glob 'src/*/data' --invert-paths

To remove multiple files or directories, save the list in a text file and refer to it with the --paths-from-file option

git-filter-repo --invert-paths --paths-from-file /tmp/delete-from-git-repo.txt --force
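The text file contains one path per line; for example, /tmp/delete-from-git-repo.txt could look like this (the paths are placeholders):

MY_TOO_BIG_FILE_NAME
logs/old-build.log
secrets/credentials.json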

BFG Cleaner

Another alternative to git filter-branch for removing files from the repository is the BFG Repo-Cleaner at https://rtyley.github.io/bfg-repo-cleaner/
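A sketch of typical BFG usage, assuming the downloaded jar is named bfg.jar and the repository was cloned with the --mirror option:

git clone --mirror https://github.com/ORGANIZATION/REPOSITORY_NAME.git
java -jar bfg.jar --delete-files MY_TOO_BIG_FILE_NAME REPOSITORY_NAME.git
cd REPOSITORY_NAME.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push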

error: package … does not exist in NetBeans

If your Java source code imports packages from external JAR files, you have to add those JAR files to the project's Libraries.

When you try to compile your Java application in NetBeans, and you get the error message:

error: package … does not exist

  1. In the NetBeans Project view right-click the Libraries folder
  2. In the drop down menu select Add JAR/Folder
  3. Select all the JAR files you want to add to the library, not just the folder,
  4. Click the Open button.

Splunk configuration

Splunk stores the configuration values in files under the installation directory: /opt/splunkforwarder for the Universal Forwarder (client) and /opt/splunk for the server.

Splunk client

Splunk Deployment server
  Location: /opt/splunkforwarder/etc/system/local/deploymentclient.conf
  Example:
    targetUri = DEPLOYMENT_SERVER_URL:8089

Splunk Forwarder address
  Location: /opt/splunkforwarder/etc/apps/tcpout-aws/local/outputs.conf
  Example:
    server = FORWARDER1_ADDRESS:9997,FORWARDER2_ADDRESS:9997

Linux event log (Splunk tails this file)
  Location: /var/log/messages
  To log a message in the Linux event log:
    logger "My message"
  To find a message in the Linux event log:
    grep "My message" /var/log/messages

Splunk server

Default data directory: /opt/splunk/var/lib/splunk/defaultdb/
Log location: /opt/splunk/var/log/splunk/splunkd.log

Useful Splunk UI searches

To list all indexes

| REST /services/data/indexes | dedup title | sort title | table title

Cannot restart the Atlassian Confluence service on Windows

When the Atlassian Confluence wiki is installed on a Windows server, it frequently becomes unavailable. Sometimes it is possible to restart the Atlassian Confluence Windows service, but most of the time the Stop phase times out with:

Windows could not stop the Atlassian Confluence service on Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion.

To make Atlassian Confluence work again

  1. Open Task Manager,
  2. End the tomcat…exe process,
  3. Start the Atlassian Confluence Windows service.

Splunk lookups

Lookups provide readable information to users, so they don’t have to understand the returned codes in the reports.

Lookups are defined for a specific app, and not accessible from other apps.

Lookup options

Lookup values (the input code and the corresponding output description) can be defined in multiple ways:

  1. Comma delimited text file (csv),
  2. Search results saved as lookup table,
  3. External script or command,
  4. Splunk DB Connect application,
  5. Geospatial lookups,
  6. KV Store collection.

Create a lookup data .csv file

Save the lookup values in a “.csv” file on your workstation, with comma separated input and output values:

code,description
1,Success
2,Failure
3,Error …

To import a lookup table

Upload the data to the Splunk server

  1. In the Settings menu select Lookups,

  2. In the Lookup table files row click Add new,
  3. Select the Destination app where the lookup table will be available,
  4. Browse to the data file on your workstation,
  5. Enter the Destination filename for the uploaded file on the Splunk server,
  6. Click Save to upload the file to the Splunk server.

Import the data to the Splunk server

  1. In the Settings menu select Lookups again,
  2. Click Lookup definitions,
  3. Make sure the correct App context is selected in the drop-down, and click New,
  4. Make sure the correct Destination app and Lookup file are selected. Enter a name for the lookup definition, and keep File-based selected,
  5. Click Save.

Verify the imported lookup table

  1. Click the Splunk icon in the upper left corner to return to the home page,
  2. Click Search & Reporting,
  3. In the New Search field enter the following command with the “Name” you have entered on the Lookup definitions page to see the table of lookup values.
    | inputlookup MY_LOOKUP_NAME

Using lookup

Pipe the data into the lookup command to convert code to description

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName

Pipe the result forward to the stats command for further processing

sourcetype=... | lookup products_lookup productId as productId OUTPUT product_name as ProductName | stats count by ProductName

Automatic lookup definition

If you want the lookup to automatically appear in reports, create an automatic lookup definition.

  1. In the Settings menu select Lookups,
  2. Click Automatic lookups,

    1. Select the App context, and click New,
    2. Make sure the correct Destination app is selected where the lookup will be accessible,
    3. Create a name,
    4. Select the lookup table from the dropdown,
    5. In the Apply to section select the data type to use the lookup table for,
    6. In the Lookup input fields section enter the name of the code column in the lookup table and the code field name in the report.
    7. In the Lookup output fields section specify the display values. You can specify multiple fields using the Add another field link.
    8. If you want to overwrite existing field values, check the Overwrite field values checkbox.
    9. Click Save to save the lookup.

 

The Splunk Search Language (SPL)

 

Search Terms: see Searching in Splunk

Commands: tell Splunk what we want to do with the search result

  • Charts
  • Computing statistics
  • Formatting

Functions: explain how we want to chart, compute and evaluate the results

Arguments: variables we apply to the functions

Clauses: grouping and definition of results

Separator

Use pipes (|) to separate the components of the search language. The result of the component on the left is passed to the component on the right; after the first pipe, no additional data is read from the index.

sourcetype=access_combined | top age | fields name

Editor features

  • Color coding
    • orange: Boolean operators and command modifiers
    • blue: commands
    • green: command arguments
    • purple: functions
  • If the cursor is behind a parenthesis, the matching parenthesis is highlighted
  • Hotkeys
    • Move each pipe to a new line: ⌘-\ (Mac) , ctrl-\ (Windows)

Commands

fields

Include and exclude fields from the search result. Separate the fields with space or comma.

  • Include fields. Happens before field extraction, can improve performance.
sourcetype=access_combined | fields status, clientip
  • Exclude fields (use a minus sign after the word fields). Exclusion only affects the displayed result and has no performance benefit.
sourcetype=access_combined | fields - status, clientip


table

Returns the specified fields in a tabulated format. Separate the fields with a comma.

  • Field names are the table column headers.
sourcetype=access_combined | table status, clientip


rename

Renames table fields for display. Use space to separate the fields.

  • Wrap the name in quotes if the name contains space,
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
status as "Status"
  • In subsequent components, we need to use the new name of the field, because that is passed forward by the pipe separator.
sourcetype=access_combined
| table status, clientip
| rename clientip as "IP Address"
| fields - "IP Address"


dedup

Removes duplicate events that share common values. Separate the fields with space.

sourcetype=access_combined
| dedup first_name last_name 
| table first_name last_name


sort

Sorts the results in ascending or descending order.

  • Ascending order. The default order is ascending, the plus sign (+) also causes ascending sort.
sourcetype=access_combined
| table first_name last_name
| sort first_name last_name
  • Descending order
    • If there is a space between the minus sign and the field name, the descending order applies to all specified fields:
      sourcetype=access_combined
      | table first_name last_name
      | sort - age wage
    • If there is no space between the minus sign and the field name, the descending order only applies to that field:
      sourcetype=access_combined
      | table first_name last_name
      | sort -age wage

limit argument

To limit the number of events returned, use the limit argument.

sourcetype=access_combined
| table first_name last_name
| sort -age wage limit=10


top

Finds the most common values of the given fields in the result set. Used to render the result in graphs.

sourcetype=vendor_sales
| top Vendor

The top command automatically provides the data in tabular form, displays the count and percent columns, and limits the results to 10 by default.

limit clause

  • Set the desired number of results.
sourcetype=vendor_sales
| top Vendor limit=20
  • To get all results, use limit=0
sourcetype=vendor_sales
| top Vendor limit=0
  • You can add more fields to the list separated by space or comma.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID, file
  •   Change the title of the count and percentage columns.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file countfield = "Product count" percentfield = "Product percent"
  • Control the visibility of the count and percent fields.
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file showcount = True/False showperc = True/False

  • Add the count and percent of the results that are not within the limit to an OTHER row.

index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file useother = True/False

  • Specify the display value of the OTHER row:
index=main sourcetype=access_combined_wcookie 
| top JSESSIONID file otherstr = "Total count"

by clause

Top three products sold by each vendor

sourcetype=vendor_sales
| top product_name by Vendor limit=3


rare

Shows the least common values of the field set.

Has the same options as the top command.



stats

Produces statistics of the search results.

Stats functions

count

  • The number of events matching the search criteria.
index=main sourcetype=access_combined_wcookie 
| stats count

  • To rename the “count” header use “as”
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files"
  • Use “by” to group the result
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file

  • Add more fields with comma
index=main sourcetype=access_combined_wcookie 
| stats count as "Total files" by file, productId

  • Add a field to the count function to count events where the field is present
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files"

  • Compare the count to the total number of events
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files", count as "Total events"


distinct_count or dc

Count of unique values for a field.

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files"

index=main sourcetype=access_combined_wcookie 
| stats distinct_count(file) as "Total files" by productId


sum

Returns the sum of the numerical values.

index=main sourcetype=access_combined_wcookie 
| stats sum(bytes)


  • Count the events and sum the value
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes)

  • Group the sum and count values by a field. Both functions must be in the same stats command to work on the same set of data.
index=main sourcetype=access_combined_wcookie 
| stats count(file) as "Total files" sum(bytes) by productId


avg

Returns the average of numerical values.

index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes"

  • Group the values by a field
index=main sourcetype=access_combined_wcookie 
| stats avg(bytes) as "Average bytes" by productId

  • Add count to the table
index=main sourcetype=access_combined_wcookie 
| stats count as "Number of files" avg(bytes) as "Average bytes" by productId


list

Lists all values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files"

  • Group the list of values by another field; the list includes repeated values.
index=main sourcetype=access_combined_wcookie 
| stats list(file) as "Files" by productId


values

Works like the list function, but returns the unique values of a given field.

index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files"

  • Group the unique values by another field
index=main sourcetype=access_combined_wcookie 
| stats values(file) as "Unique Files" by productId