...

To close a tmux session (and kill whatever processes are running within it!), attach to the session and press Ctrl+d. This cannot be undone, so before doing so, be sure that your process has finished and that you have saved whatever terminal output you need.

...

Redirecting output and command pipelines (stdin and stdout)

You may encounter situations where you need to manipulate or reuse the output from a particular command. For example, the output from a given command may not be formatted the way you need it, or you may want to save the output from a command to a text file that you can review later. The terminal allows you to do this using stdin and stdout.

Every process on the terminal has a standard input and a standard output, called stdin and stdout for short. For basic commands, stdin is the text of the command provided by the user, such as "ls -1 /dps/david/temp", which the shell reads as its input; stdout is the output produced by running that command. In the following example, the stdin is underlined in red, while the stdout is all the text contained within the green box:

[Image]

You can sometimes manipulate the format and structure of stdout using command options (such as "-1" in this example, which changes the output to a single directory or file per line), but that's still relatively limited, since the available options vary from one command to the next. Most of the time, if you need to manipulate or reuse a command's output, the best approach is to redirect stdout and/or pipe commands.
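As a quick sketch of the difference an option like -1 makes (using a throwaway directory, since the /dps/david paths in the screenshots are specific to the original system):

```shell
# Build a small throwaway directory to list.
mkdir -p /tmp/ls_demo
touch /tmp/ls_demo/a.txt /tmp/ls_demo/b.txt /tmp/ls_demo/c.txt

# Without options, ls may print entries in columns; -1 forces one
# directory or file per line.
ls -1 /tmp/ls_demo
```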

Redirecting stdout

The simplest way of redirecting stdout is to save it to a text file, which you can open, review, and manipulate using any text editor. To redirect stdout to a text file, use the > (greater than sign), followed by an output path. Think of it like an arrow pointing to an output file. In the following example, the stdout from "ls -1 /dps/david/temp" is saved as "/dps/david/tempcontents.txt":

[Image]

You can open that file with a text editor and see that its contents look just like the output we'd ordinarily expect the command to produce in the terminal:

[Image]
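The same pattern can be sketched end to end with throwaway paths (the /dps/david paths above belong to the original example):

```shell
# List a directory and redirect stdout into a text file with >.
mkdir -p /tmp/redir_demo
touch /tmp/redir_demo/one.pdf /tmp/redir_demo/two.pdf
ls -1 /tmp/redir_demo > /tmp/redir_contents.txt

# The file now contains exactly what the terminal would have displayed.
cat /tmp/redir_contents.txt
```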

This redirection method will always create a new file at the output path you provide. If there is already a file at that location, it will be deleted and replaced with the stdout from your command, so be sure not to redirect to the location of any important existing file.

If you want to append your results to an existing file without overwriting it, use >> instead of >. In the following example, the contents of /dps/david/temp/ are saved to /dps/david/tempcontents.txt (shown in red), then the contents of /dps/david/temp/pdfs/ (shown in green) are appended to the end of the same file.

[Image]

[Image]
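A self-contained sketch of the same two-step sequence, using stand-in paths:

```shell
# Set up a throwaway directory with a subdirectory.
mkdir -p /tmp/append_demo/pdfs
touch /tmp/append_demo/report.txt /tmp/append_demo/pdfs/scan.pdf

# > overwrites (or creates) the file with the first listing...
ls -1 /tmp/append_demo > /tmp/append_contents.txt

# ...while >> appends the second listing without disturbing what's there.
ls -1 /tmp/append_demo/pdfs >> /tmp/append_contents.txt

cat /tmp/append_contents.txt
```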

Command pipelines

Another option is to redirect stdout straight into another command using a "pipeline" of commands. Pipelines take the stdout from one command and feed it directly into a second command to form part of that command's stdin. Commands are separated using the pipe character ( | ), located above the backslash on a US keyboard layout. Command pipelines proceed from one command to the next from left to right, and there is no limit to the number of commands that can be chained together.

For example, if you wanted to find the first 25 lines in /dps/david/temp/manifests.txt that contain "tif", you could run the following command:

grep tif /dps/david/temp/manifests.txt

If you actually ran this command, however, you'd find that it produces far more than 25 lines. It also takes longer than necessary, since grep reads through the entire file rather than stopping after the 25th match. You could always save the results to a text file, open the file, and delete everything after the 25th match, but that's still quite time-consuming.

A better approach is to combine the grep and head commands using a pipeline to stop the process after the 25th match:

[Image]
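In plain text, the pipeline takes this shape; the sketch below rebuilds it with a small stand-in file, since manifests.txt only exists on the original system:

```shell
# A stand-in for /dps/david/temp/manifests.txt.
printf 'a.tif\nb.jpg\nc.tif\nd.tif\n' > /tmp/manifests.txt

# grep emits every matching line; head -n keeps only the first N of
# them (2 here, standing in for 25).
grep tif /tmp/manifests.txt | head -n 2
```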

In this example, the stdout from the grep command has been fed directly into the head command that follows it. Ordinarily, when you run head, you have to point it to a specific file to be read as input, but in a command pipeline you can omit the file name, and head will read the previous command's stdout instead.

Remember that there is no limit to the number of commands you can combine in a pipeline. If you needed to strip these results of the first two columns and leave only the file paths, you could add the cut command:

[Image]
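A sketch with stand-in data (assuming, as the -f3 option implies, that the columns are tab-separated, which is cut's default delimiter):

```shell
# Two lines of three tab-separated columns, standing in for the
# grep | head results above.
printf '1\t2020\t/data/a.tif\n2\t2021\t/data/b.tif\n' > /tmp/matches.txt

# cut -f3 keeps only the third tab-delimited field: the file path.
head -n 2 /tmp/matches.txt | cut -f3
```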

The cut command receives the stdout from the previous head command and (using the -f3 option) cuts that list down to just the third column. If you then needed to count the number of characters in these 25 file paths, you could add the wc command:

[Image]

The wc command receives the stdout from the previous cut command and (using the -c option) counts the number of characters it contains. If you want to save this final output to a text file, you can always redirect the final stdout using > followed by an output path.
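Putting the whole chain together with the same stand-in data, including a final redirect:

```shell
# Stand-in data: three tab-separated columns per line.
printf '1\t2020\t/data/a.tif\n2\t2021\t/data/b.tif\n' > /tmp/manifests2.txt

# Match, truncate, extract the path column, count characters (newlines
# included), and save the final stdout to a file.
grep tif /tmp/manifests2.txt | head -n 2 | cut -f3 | wc -c > /tmp/charcount.txt

cat /tmp/charcount.txt
```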

Looping through a file (while read method)

If you need to open a text file and perform some action using the value stored in each line, you can use the "while read" method to construct a loop. The loop reads a line from a text file, stores the value of the line, performs some action using that value, and then moves on to the next line to repeat the process. To build one, you provide a "while read" statement, the command(s) you want to run on each line, and the file containing the lines you want to act on.

For example, the file /dps/david/temp/msg_files.txt contains a list of .msg files stored within /dps/david/temp/test:

[Image]

[Image]

If you needed to calculate the character count for each one of these files, you could run wc -c on each file individually, or you could integrate that command into a while read loop:

while read -r line; do wc -c "$line"; done < /dps/david/temp/msg_files.txt

[Image]

Every while read loop is made up of three main elements, separated by semicolons:

  1. A "while read" statement (underlined in red in the above example)
    1. "while read -r" tells the terminal to read each line in an input file (named at the end of the loop)
    2. "line" is the name of the variable that stands in for the actual value of each line in the file. This variable name is entirely arbitrary, and you can choose whatever name seems most logical to you, as long as you reference it correctly in the next part of the loop.
  2. A "do" statement (underlined in green)
    1. "do" tells the terminal to run a given command for each line in the input file
    2. "wc -c" is the command we have chosen to run in this case. Any terminal command can be incorporated into the loop, using its original options and syntax requirements. You can also build command pipelines and redirect output as part of a loop.
    3. "$line" (wrapped in double quotes) is the variable that was assigned in the first part of the loop, used here as the input for the "wc -c" command. If you changed the variable name to "abc" in the first part, you would change it to "$abc" here.
  3. A "done" statement (underlined in blue)
    1. "done" closes the loop
    2. "< /dps/david/temp/msg_files.txt" names the input file containing the lines to be read and stored as a variable in part one, then acted upon in part two. Think of < as an arrow feeding the contents of the text file into the preceding script.
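Those pieces can be sketched end to end with throwaway files, renaming the variable to show that the choice is arbitrary and redirecting the loop's output to a file:

```shell
# Create two small files and an input file listing their paths
# (standing in for the .msg files and msg_files.txt above).
printf 'hello' > /tmp/one.msg
printf 'terminal!' > /tmp/two.msg
printf '/tmp/one.msg\n/tmp/two.msg\n' > /tmp/loop_list.txt

# The variable is named "f" here instead of "line"; the loop's combined
# output is redirected into one file rather than printed.
while read -r f; do wc -c "$f"; done < /tmp/loop_list.txt > /tmp/loop_counts.txt

cat /tmp/loop_counts.txt
```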

In the above example, the terminal opens the input file named at the very end of the script, reads the first line, stores the value found there as "line", runs "wc -c" on that value, outputs the result, then moves on to the value in the next line. It repeats this until it reaches the end of the input file, at which point the loop closes.

While the specific command(s) you run using a while read loop may differ from the above example, all such loops adhere to this basic structure.