This page should serve as a reference for the many "things Linux" we use in this course. It is by no means complete – Linux is **huge** – but offers introductions to many important topics.
See also this page, which provides lists of the most common Linux commands, by category, as well as their most useful options: Some Linux commands
You need a Terminal program in order to ssh to a remote computer.
Use ssh (secure shell) to log in to a remote computer.
# General form:
ssh <user_name>@<full_host_name>

# For example:
ssh abattenh@ls6.tacc.utexas.edu
When you type something in at a bash command-line prompt, it Reads the input, Evaluates it, then Prints the results, then does this over and over in a Loop. This behavior is called a REPL – a Read, Eval, Print Loop. The shell executes the command line input when it sees a linefeed, which happens when you press Enter after entering the command.
The input to the bash REPL is a command, which consists of:
Some examples using the ls (list files) command:
ls            # example 1 - no options or arguments
ls -l         # example 2 - one "short" (single character) option only (-l)
ls --help     # example 3 - one "long" (word) option (--help)
ls .profile   # example 4 - one argument, a file name (.profile)
ls --width=20 # example 5 - a long option that has a value (--width is the option, 20 is the value)
ls -w 20      # example 6 - a short option w/a value, as above, where -w is the same as --width
ls -l -a -h   # example 7 - three short options entered separately (-l -a -h)
ls -lah       # example 8 - three short options that can be combined after a dash (-lah)
Some handy options for ls:
A good place to start learning built-in Linux commands and their options is on the Some Linux commands page.
How do you find out what options and arguments a command uses?
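Two standard ways: the man command displays a command's full manual page, and most GNU commands also accept a --help option that prints a brief usage summary.

```shell
man ls      # the full manual page for ls (press q to quit)
ls --help   # a brief usage summary (supported by most GNU tools)
```
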
Many 3rd party tools, especially bioinformatics tools, may bundle a number of different functions into one command. For these tools, just typing in the command name then Enter may provide top-level usage information. For example, the bwa tool that aligns sequencing reads to a reference genome:
bwa
Produces something like this:
Program: bwa (alignment via Burrows-Wheeler transformation)
Version: 0.7.16a-r1181
Contact: Heng Li <lh3@sanger.ac.uk>

Usage:   bwa <command> [options]

Command: index         index sequences in the FASTA format
         mem           BWA-MEM algorithm
         fastmap       identify super-maximal exact matches
         pemerge       merge overlapping paired ends (EXPERIMENTAL)
         aln           gapped/ungapped alignment
         samse         generate alignment (single ended)
         sampe         generate alignment (paired ended)
         bwasw         BWA-SW for long queries
         shm           manage indices in shared memory
         fa2pac        convert FASTA to PAC format
         pac2bwt       generate BWT from PAC
         pac2bwtgen    alternative algorithm for generating BWT
         bwtupdate     update .bwt to the new format
         bwt2sa        generate SA from BWT and Occ

Note: To use BWA, you need to first index the genome with `bwa index'.
      There are three alignment algorithms in BWA: `mem', `bwasw', and `aln/samse/sampe'.
      If you are not sure which to use, try `bwa mem' first.
      Please `man ./bwa.1' for the manual.
bwa, like many bioinformatics programs, is written as a set of sub-commands. This top-level help displays the sub-commands available. You then type bwa <command> to see help for the sub-command:
bwa index
Displays something like this:
Usage:   bwa index [options] <in.fasta>

Options: -a STR    BWT construction algorithm: bwtsw or is [auto]
         -p STR    prefix of the index [same as fasta name]
         -b INT    block size for the bwtsw algorithm (effective with -a bwtsw) [10000000]
         -6        index files named as <in.fasta>.64.* instead of <in.fasta>.*

Warning: `-a bwtsw' does not work for short genomes, while `-a is' and
Of course Google works on 3rd party tools also (e.g. search for bwa manual)
In the bash shell, and in most tools and programming environments, there are two kinds of input:
There are many metacharacters in bash: # \ $ | ~ [ ] to name a few.
Pay attention to the different metacharacters and their usages – which can depend on the context where they're used.
You know the command line is ready for input when you see the command line prompt. It can be configured differently on different systems, but on our system it shows your account name, server name, current directory, then a dollar sign ($). Note the tilde character ( ~ ) signifies your Home directory.
The shell executes command line input when it sees a linefeed character (\n, also called a newline), which happens when you press Enter after entering the command.
More than one command can be entered on a single line – just separate the commands with a semi-colon ( ; ).
cd; ls -lh
A single command can also be split across multiple lines by adding a backslash ( \ ) at the end of the line you want to continue, before pressing Enter.
ls6:~$ ls ~/.bashrc \
> ~/.profile
Notice that the shell indicates that it is not done with command-line input by displaying a greater than sign ( > ). You just enter more text, then press Enter when done.
Use Ctrl-C to exit the current command input
At any time during command input, whether on the 1st command line prompt or at a > continuation, you can press Ctrl-c (Control key and the c key at the same time) to get back to the command prompt.
Sometimes a line of text is longer than the width of your Terminal. In this case the text is wrapped. It can appear that the output is multiple lines, but it is not. For example, FASTQ files often have long lines:
head $CORENGS/misc/small.fq
Note that most Terminals let you increase/decrease the width/height of the Terminal window. But there will always be single lines too long for your Terminal width (and too many lines of text for its height).
So how many lines of output are there really? And how long is a line? The wc (word count) command can tell us.
And when you give wc -l multiple files, it reports the line count of each, then a total.
wc -l $CORENGS/misc/small.fq              # Reports the number of lines in the small.fq file
cat $CORENGS/misc/small.fq | wc -l        # Reports the number of lines on its standard input
wc -l $CORENGS/misc/*.fq                  # Reports the number of lines in all matching *.fq files
tail -1 $CORENGS/misc/small.fq | wc -c    # Reports the number of characters of the last small.fq line
You don't always type in commands, options and arguments correctly – you can misspell a command name, forget to type a space, specify an unsupported option or a non-existent file, or make all kinds of other mistakes.
What happens? The shell attempts to guess what kind of error it is and reports an appropriate error message as best it can. Some examples:
# You mis-type a command name, or the command is not installed on your system
ls6:~$ catt
catt: command not found

# You try to use an unsupported option
ls6:~$ ls -z
ls: invalid option -- 'z'
Try 'ls --help' for more information.

# You specify the name of a file that does not exist
ls6:~$ ls xxx
ls: cannot access 'xxx': No such file or directory

# You try to access a file or directory you don't have permissions for
ls6:~$ cat /etc/sudoers
cat: /etc/sudoers: Permission denied
Type as little and as accurately as possible by using keyboard shortcuts!
Sometimes you want to repeat a command you've entered before, possibly with some changes.
The command line cursor (small thick bar on the command line) marks where you are on the command line.
Once the cursor is positioned where you want it:
Hitting Tab when entering command line text invokes shell completion, instructing the shell to try to guess what you're doing and finish the typing for you. It's almost magic!
On most modern Linux shells you use Tab completion by pressing:
An absolute pathname lists all components of the full file system hierarchy that describes a file. Absolute paths always start with the forward slash ( / ), which is the root of the file system hierarchy. Directory names are separated by the forward slash ( / ) .
You can also specify a directory relative to where you are using one of the special directory names:
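For example, using the special directory names:

```shell
cd ~     # ~  is your Home directory
ls .     # .  is the current directory
ls ..    # .. is the parent of the current directory
cd -     # -  is the previous directory you visited (cd only)
```
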
Avoid special characters in filenames
While it is possible to create file and directory names that have embedded spaces, that creates problems when manipulating them.
To avoid headaches, it is best not to create file/directory names with embedded spaces, or with special characters such as + & # ( )
The shell has shorthand to refer to groups of files by allowing wildcards in file names.
Using these wildcards is sometimes called filename globbing, and the pattern a glob.
For example:
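A sketch, using some hypothetical FASTQ file names created just for the demonstration:

```shell
# create a few hypothetical files to match against
touch small1.fq small2.fq reads.fq notes.txt

ls *.fq        # * matches anything: all files ending in .fq
ls small?.fq   # ? matches exactly one character: small1.fq small2.fq
ls [a-z]*.fq   # [ ] matches one character from a set or range
```
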
Most Linux commands write their results to standard output, a built-in stream that is mapped to your Terminal, but that data can be redirected to a file instead.
In fact every Linux command and program has three standard Unix streams: standard input, standard output and standard error. Each has a number, a name, and redirection syntax:
It is easy to not notice the difference between standard output and standard error when you're in an interactive Terminal session – because both outputs are sent to the Terminal window. But they are separate streams, with different meanings. In particular, programs write error and/or diagnostic messages to standard error, not to standard output.
Here's a command that shows the difference between standard error and standard output:
ls /etc/fstab xxx.txt
Produces this output in your Terminal:
ls: cannot access 'xxx.txt': No such file or directory
/etc/fstab
What is not obvious, since both streams are displayed on the Terminal, is that:
To see this, redirect standard output and standard error to different files and look at their contents:
ls /etc/fstab xxx.txt 1> stdout.txt 2> stderr.txt
cat stdout.txt   # Displays "/etc/fstab"
cat stderr.txt   # Displays "ls: cannot access 'xxx.txt': No such file or directory"
What if you want both standard output and standard error to go to the same file? You use this somewhat odd 2>&1 redirection syntax:
# Redirect both standard output and standard error to the out.txt file
ls /etc/fstab xxx.txt > out.txt 2>&1

# Display the contents of the out.txt file
cat out.txt

# produces output like this:
ls: cannot access 'xxx.txt': No such file or directory
/etc/fstab
Two final notes.
When running batch programs and scripts you will want to manipulate standard output and standard error from programs appropriately – especially for 3rd party programs that often produce both results data and diagnostic/progress messages.
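A sketch of that pattern; my_prog here is a stand-in shell function for any batch program that writes results to standard output and diagnostics to standard error:

```shell
# "my_prog" is a hypothetical stand-in for a 3rd party program that
# writes results to standard output and progress messages to standard error
my_prog() { echo "result data"; echo "progress: step 1 done" >&2; }

my_prog 1> results.txt 2> progress.log   # results and diagnostics in separate files
my_prog > run.log 2>&1                   # everything interleaved in a single log
```
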
Most programs/commands read input data from some source, then write output to some destination. A data source can be a file, but can also be standard input. Similarly, a data destination can be a file but can also be a stream such as standard output.
The power of the Linux command line is due in no small part to the power of piping. The pipe operator ( | ) connects one program's standard output to the next program's standard input.
A simple example is piping uncompressed data "on the fly" to count its lines using wc -l (word count command with the lines option).
# zcat is like cat, except that it understands the gz compressed format,
# and uncompresses the data before writing it to standard output.
# So, like cat, you need to be sure to pipe the output to a pager if
# the file is large.
zcat big.fq.gz | wc -l
But the real power of piping comes when you stitch together a string of commands with pipes – it's incredibly flexible, and fun once you get the hang of it.
For example, here's a simple way to make a histogram of mapping quality values from a subset of BAM file records.
# create a histogram of mapping quality scores for the 1st 1000 mapped bam records
samtools view -F 0x4 small.bam | head -1000 | cut -f 5 | sort -n | uniq -c
The most basic way of viewing file data is the cat command. While the name comes from its ability to concatenate one or more files, it can be used to output the contents of a single file. For example:
cat ~/.profile

# or, to see line numbers in the output:
cat -n ~/.profile
Using cat by itself is fine for small files, but it reads/writes everything in the file without stopping. So for larger files you use a pager such as more, or less. A pager reads text and outputs only one "page" of text at a time, then waits for you to ask it to advance. And a "page" of text is the number of lines that will fit on your visible Terminal.
Using the more pager:
more ~/.bashrc
If there is additional output, you'll see the --More-- indicator again; if not, the command prompt appears again.
Using the less pager:
less ~/.bashrc

# to see line numbers in the output:
less -N ~/.bashrc

# to use case-insensitive matching:
less -I ~/.bashrc
Basic navigation in less:
Searching in less:
Another method of text searching is the grep program, whose name comes from the ed editor command g/re/p (global / regular expression / print). In Unix, the grep program performs regular-expression text searching, and displays lines where the pattern text is found.
Nearly every programming language offers grep functionality, where a pattern you specify – a regular expression or regex – describes how the search is performed.
There are many grep regular expression metacharacters that control how the search is performed (see the grep command).
Basic usage is: grep '<pattern>' <file> where
Common options:
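A few examples of common grep options, run against a small hypothetical file (sample.txt) created for the demonstration:

```shell
# create a small hypothetical file to search
printf 'export PATH\n# a comment\nalias ll="ls -l"\nAlias test\n' > sample.txt

grep 'alias' sample.txt      # lines containing "alias"
grep -i 'alias' sample.txt   # -i  case-insensitive: also matches "Alias"
grep -v '^#' sample.txt      # -v  invert: lines NOT starting with #
grep -n 'PATH' sample.txt    # -n  show line numbers with matches
grep -c 'a' sample.txt       # -c  just count the matching lines
```
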
Two other commands that are useful for viewing text are head and tail.
Examples:
head ~/.bashrc        # view the 1st 10 file lines
head -n 2 ~/.bashrc   # view the 1st 2 file lines
head -5 ~/.bashrc     # view the 1st 5 file lines
tail ~/.bashrc        # view the last 10 file lines
tail -n 3 ~/.bashrc   # view the last 3 file lines
tail -1 ~/.bashrc     # view the last line of the file

# view 7 lines of text starting at line 20
tail -n +20 ~/.bashrc | head -7
Since head and tail do not have an option to display line numbers, you can pipe in text that includes line numbers with cat -n:
cat -n ~/.bashrc | head -4   # view the 1st 4 lines w/line numbers
cat -n ~/.bashrc | tail -5   # view the last 5 lines w/line numbers

# view 6 lines of text starting at line 25
cat -n ~/.bashrc | tail -n +25 | head -6
Environment variables are just like variables in a programming language (in fact bash is a complete programming language): they are "pointers" that reference data assigned to them. In bash, you assign an environment variable as shown below:
export varname="Some value, here it's a string"
Careful – do not put spaces around the equals sign when assigning environment variable values.
Also, always surround the value with double quotes ( " " ) if it contains (or might contain) spaces.
You set environment variables using the bare name (varname above).
You then refer to or evaluate an environment variable using a dollar sign ( $ ) evaluation operator before the name:
echo $varname
The export keyword, used when you set a variable, ensures that any sub-processes that are invoked will inherit this value. Without the export, only the current shell process will have that variable set.
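You can see this by invoking a child bash process; FOO here is an arbitrary variable name chosen for the demonstration:

```shell
FOO="not exported"                     # plain assignment: current shell only
bash -c 'echo "child sees: [$FOO]"'    # child shell prints empty brackets

export FOO="exported"                  # export: sub-processes inherit the value
bash -c 'echo "child sees: [$FOO]"'    # child shell now sees the value
```
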
Use the env command to see all the environment variables you currently have set.
What the different quote marks mean in the shell, and when to use them, can be quite confusing.
When the shell processes a command line, it first parses the text into tokens ("words"), which are groups of characters separated by whitespace (one or more space characters). Quoting affects how this parsing happens, including how metacharacters are treated and how text is grouped.
There are three types of quoting in the shell:
The quote characters themselves ( ' " ` ) are metacharacters that tell the shell to "start a quoting process" then "end a quoting process" when the matching quote is found. Since they are part of the processing, the enclosing quotes are not included in the output.
If you see the greater than ( > ) character after pressing Enter, it can mean that your quotes are not paired, and the shell is waiting for more input to contain the missing quote of the pair (either single or double). Just use Ctrl-c to get back to the prompt.
The first rule of quoting is: always enclose a command argument in quotes if it contains spaces so that the command sees the quoted text as one item. In particular, always use single ( ' ) or double ( " ) quotes when you define an environment variable whose value contains spaces.
foo='Hello world'   # correct - defines variable "foo" to have value "Hello world"
foo=Hello world     # error - no command called "world"
These two expressions using double quotes or single quotes are different because the single quotes tell the shell to treat the quoted text as a literal, and not to look inside it for metacharacter processing.
# Inside double quotes, the text "$USER" is evaluated and its value substituted
echo "my account name is $USER"

# Inside single quotes, the text "$USER" is left as-is
echo 'the environment variable storing my account name is $USER'
To display a metacharacter as a literal inside double quotes, use the backslash ( \ ) character to escape the following character.
# Inside double quotes, use a backslash ( \ ) to escape the dollar sign ( $ ) metacharacter
echo "the environment variable storing my account name is \$USER"
Backtick ( ` ` ) evaluation quoting is one of the underappreciated wonders of Unix. The shell:
An example, using the date function that just writes the current date and time to standard output, which appears on your Terminal.
date          # Calling the date command just displays date/time information
echo date     # Here "date" is treated as a literal word, and written to standard output
echo `date`   # The date command is evaluated and its standard output replaces `date`
A slightly different syntax, called sub-shell evaluation, also evaluates the expression inside $( ) and replaces it with the expression's standard output.
today=$( date ); echo $today            # environment variable "today" is assigned today's date
today="Today is: `date`"; echo $today   # "today" is assigned a string including today's date
So what exactly is text? That is, what is stored in files that the shell interprets as text?
On standard Unix systems, each text character is stored as one byte – eight binary bits – in a format called ASCII (American Standard Code for Information Interchange). Eight bits can store 2^8 = 256 values, numbered 0 - 255.
In its original form values 0 - 127 were used for standard ASCII characters. Now values 128 - 255 comprise an Extended set. See https://www.asciitable.com/
However not all ASCII "characters" are printable -- in fact the "printable" characters start at ASCII 32 (space).
ASCII values 0 - 31 have special meanings. Many were designed for use in early modem protocols, such as EOT (end of transmission) and ACK (acknowledge), or for printers, such as VT (vertical tab) and FF (form feed).
The non-printable ASCII characters we care most about are:
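You can see these codes directly with the standard od command (similar in spirit to the course's hexdump alias, which may not be defined outside the class setup):

```shell
# 'a', Tab, 'b', newline -> hex byte values 61 09 62 0a
printf 'a\tb\n' | od -An -tx1
```
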
Let's use the hexdump command (really an alias, defined in your ~/.bashrc login script) to look at the actual ASCII codes stored in a file:
tail ~/.bashrc | hexdump
This will produce output something like this:
Each line here describes 16 characters, in three display areas:
Notice that spaces are ASCII 0x20 (decimal 32), and the newline characters appear as 0x0a (decimal 10).
Why hexadecimal? Programmers like hexadecimal (base 16) because it is easy to translate hex digits to binary, which is how everything is represented in computers. And it can sometimes be important to know which binary bits are 1s and which are 0s. (Read more about Decimal and Hexadecimal)
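A quick way to move between decimal and hexadecimal at the command line is bash's built-in printf:

```shell
printf '%d\n' 0x20     # hex 0x20 is decimal 32 (the space character)
printf '0x%02x\n' 10   # decimal 10 is hex 0x0a (the newline character)
```
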
There are several ways to output multi-line text. You can:
example:
echo 'My name is Anna'
example:
echo -e "My\nname is\nAnna"
Another method for writing multi-line text, one that can be useful for composing a large block of text in a script, is the heredoc syntax, where a block of text is specified between two user-supplied block delimiters, and that text block is sent to a command. The general form of a heredoc is:
COMMAND << DELIMITER
..text...
..text...
DELIMITER
The 2nd (ending) block delimiter you specify for a heredoc must appear at the start of a new line.
For example, using the (arbitrary) delimiter EOF and the cat command:
cat << EOF
This text will be output
And this USER environment variable will be evaluated: $USER
EOF
Here the block of text provided to cat is just displayed on the Terminal. To write it to a file just use the 1> or > redirection syntax in the cat command:
cat 1> out.txt << EOF
This text will be output
And this USER environment variable will be evaluated: $USER
EOF
The out.txt file will then contain this text:
This text will be output
And this USER environment variable will be evaluated: student01
Arithmetic in bash is very weird:
echo $(( 50 * 2 + 1 ))

n=0
n=$(( $n + 5 ))
echo $n
And it only returns integer values, after truncation.
echo $(( 4 / 2 ))    # 2
echo $(( 5 / 2 ))    # 2  (2.5 truncated)
echo $(( 24 / 5 ))   # 4  (4.8 truncated)
As a result, if I need to do anything other than the simplest arithmetic, I use awk:
awk 'BEGIN{print 4/2}'
echo 3 2 | awk '{print ($1+$2)/2}'
You can also use the printf function in awk to control formatting. Just remember that a linefeed ( \n ) has to be included in the format string:
echo 3.1415926 | awk '{ printf("%.2f\n", $1) }'
You can even use it to convert a decimal number to hexadecimal using the %x printf format specifier. Note that the convention is to denote hexadecimal numbers with an initial 0x.
echo 65 | awk '{ printf("0x%x\n", $1) }'
As in many programming languages, a for loop performs a series of expressions on one or more items in the for's argument list.
The bash for loop has the general structure:
for <variable_name> in <list of space-separated items>
do
    <something>
    <something else>
done
The <items> should be (or evaluate to) for's argument list: a space-separated list of items (e.g. 1 2 3 4 or `ls -1 *.gz` ).
for num in `seq 4`
do
    echo $num
done

# or, since bash lets you put multiple commands on one line
# if they are each separated by a semicolon ( ; )
for num in `seq 4`; do echo $num; done
Gory details:
One common use of for loops is to process multiple files, where the set of files to process is obtained by pathname wildcarding. For example, the code below counts the number of reads in a set of compressed FASTQ files:
for fname in *.gz; do
    echo "$fname has $((`zcat $fname | wc -l` / 4)) sequences"
done
We saw how double quotes allow the shell to evaluate certain metacharacters in the quoted text.
But more importantly when assigning multiple lines of text to a variable, quoting the evaluated variable preserves any special characters in the variable value's text such as Tab or newline characters.
Consider this case where a captured string contains newlines, as illustrated below.
txt=$( echo -e "aa\nbb\ncc" )
echo "$txt"   # inside double quotes, newlines preserved
echo $txt     # without double quotes, newlines are converted to spaces
This difference is very important!
See the difference:
nums=$( seq 5 )
echo $nums
echo "$nums"
echo $nums | wc -l     # newlines converted to spaces, so only one line
echo "$nums" | wc -l   # newlines preserved, so reports 5

# This loop prints a line for each of the numbers
for n in $nums; do
    echo "the number is: '$n'"
done

# But this loop prints only one line
for n in "$nums"; do
    echo "the number is: '$n'"
done
The general form of an if/then/else statement in bash is:
if [ <test expression> ]
then <expression> [ expression... ]
else <expression> [ expression... ]
fi
Where
A simple example:
for val in 5 0 "27" "$emptyvar" abc '0'; do
    if [ "$val" ]
    then
        echo "Value '$val' is true"
    else
        echo "Value '$val' is false"
    fi
done
A good reference on the many built-in bash conditionals: https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html
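A few of the most common test forms, sketched using files that exist on most Linux systems:

```shell
# -e tests whether a file exists; -d tests for a directory
if [ -e /etc/hosts ]; then echo "/etc/hosts exists"; fi
if [ -d /etc ]; then echo "/etc is a directory"; fi

# -gt / -lt compare integers; = / != compare strings
n=5
if [ "$n" -gt 3 ]; then echo "$n is greater than 3"; fi
if [ "$USER" != "root" ]; then echo "running as a non-root user"; fi
```
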
The read function can be used to read input one line at a time, in a bash while loop.
While the full details of the read command are complicated (see https://unix.stackexchange.com/questions/209123/understanding-ifs-read-r-line) this read-a-line-at-a-time idiom works nicely.
while IFS= read line; do
    echo "Line: '$line'"
done < ~/.bashrc
If the input data is well structured, its fields can be read directly into variables. Notice we can pipe all the output to more – or could redirect it to a file.
tail /etc/passwd | while IFS=':' read account x uid gid name shell
do
    echo $account $name
done | more
Consider a long listing of our Home directory.
There are 9 whitespace-separated columns in this long listing:
Notice I call everything a file, even directories. That's because directories are just a special kind of file – one that contains information about the directory's contents.
A file's owner is the Unix account that created the file (here abattenh, me). That account belongs to one or more Unix groups, and the group associated with a file is listed in field 4.
The owner will always be a member of the Unix group associated with a file, and other accounts may also be members of the same group. G-801021 is one of the Unix groups I belong to at TACC. To see the Unix groups you belong to, just type the groups command.
File permissions and information about the file type are encoded in that 1st 10-character field. Permissions govern who can access a file, and which actions they are allowed to perform.
Each of the 3-character sets describes whether read ( r ), write ( w ), and execute ( x or s ) actions are allowed, or not allowed ( - ).
Examples:
ls -l ~/.bash_history
ls -l /usr/bin/ls
ls -l -d ~/local (-d says to list directory information, not directory contents)
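Permissions are changed with the chmod command. A quick sketch, using a hypothetical myscript.sh file created for the demonstration:

```shell
touch myscript.sh        # create a hypothetical file to modify
chmod u+x myscript.sh    # add execute permission for the owner (u)
chmod g-w myscript.sh    # remove write permission for the group (g)
chmod o-r myscript.sh    # remove read permission for others (o)
chmod 644 myscript.sh    # octal form: -rw-r--r--
ls -l myscript.sh
```
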
Assume you want to copy the TACC file $SCRATCH/core_ngs/fastq_prep/small_fastqc.html back to your laptop/local computer. You must initiate the copy operation from your local computer rather than at TACC. Why? Because the TACC servers have host names and IP addresses that are public in the Internet's Distributed Name Service (DNS) directory. But your local computer (in nearly all cases) does not have a published name and address.
First, on the TACC server figure out what the appropriate absolute path (a.k.a. full pathname) is.
cd $SCRATCH/core_ngs/fastq_prep
pwd -P
This will display something like /scratch/01063/abattenh/core_ngs/fastq_prep
For folks with Mac or Linux laptops, or Windows 10+ users with scp available in the Command Prompt program (or Windows Subsystem for Linux):
scp abattenh@ls6.tacc.utexas.edu:/scratch/01063/abattenh/core_ngs/fastq_prep/small_fastqc.html .
Windows users can use the free WinSCP program (https://winscp.net/eng/index.php) if their Windows version does not support scp.
There are three main approaches to editing Unix files:
Knowing the basics of at least one Linux command-line text editor is useful for creating/editing small files, and we'll explore nano in this class. For editing larger files, you may find options #2 or #3 more useful.
nano is a very simple editor available on most Linux systems. If you are able to ssh into a remote system, you can use nano there.
To invoke nano to edit a new or existing file just type nano <filename>. For example:
nano newfile.txt
You'll see the name of the file (if you supplied one) on the top line of the Terminal window.
Navigation and operations in nano are similar to those we discussed in Command line editing
You can just type in text, and navigate around using arrow keys (up/down/left/right). A couple of other navigation shortcuts:
Once you've positioned the cursor where you want it, just type in your text.
Be careful with long lines – sometimes nano will split long lines into more than one line, which can cause problems in a commands file where each task must be specified on a single line.
To remove text:
Once you're satisfied with your edits:
These and other important nano operations are displayed in a menu at the bottom of the Terminal window. Note that the ^ character means Ctrl- in this menu.
emacs is a complex, full-featured editor available on most Linux systems.
To invoke emacs to edit a new or existing file just type:
emacs <filename>
Here's a reference sheet that lists many commands: https://www.gnu.org/software/emacs/refcards/pdf/refcard.pdf. The most important are:
You can just type in text, and navigate around using arrow keys. A couple of other navigation shortcuts:
Be careful when pasting text into an emacs buffer – it takes a few seconds before emacs is ready to accept pasted text.
Double-check that the 1st line of pasted text is correct – emacs can clip the 1st few characters if the paste is done too soon.
The dirty little secret of the computer world is that the three main "families" of computers – Macs, Windows and Linux/Unix – use different, mutually incompatible line endings.
And guess what? Most Linux programs don't work with files that have Windows or Mac line endings, and what's worse they give you bizarre error messages that don't give you a clue what's going on!
So whatever non-Linux text editor you use, be sure to adjust its "line endings" setting – and it better have one somewhere!
Komodo Edit is a free, full-featured text editor with syntax coloring for many programming languages and a remote file editing interface. It has versions for both Macintosh and Windows. Download the appropriate install image here.
Once installed, start Komodo Edit and follow these steps to configure it:
When you want to open an existing file at lonestar6, do the following:
To create and save a new file, do the following:
Rather than having to navigate around TACC's complex file system tree, it helps to use the symbolic links to those areas that we created in your Home directory.
Notepad++ is an open source, full-featured text editor for Windows PCs (not Macs). It has syntax coloring for many programming languages (python, perl, bash), and a remote file editing interface.
If you're on a Windows PC download the installer here.
Once it has been installed, start Notepad++ and follow these steps to configure it:
To open the connection, click the blue (Dis)connect icon then select your lonestar6 connection. It should prompt for your password. Once you've authenticated, a directory tree ending in your Home directory will be visible in the NppFTP window. You can click the (Dis)connect icon again to disconnect when you're done.
Rather than having to navigate around TACC's complex file system tree, it helps to use the symbolic links to those areas that we created in your Home directory (~/work or ~/scratch).