Linux and Lonestar refresher
Overview:
Based on your pre-class survey answers, the vast majority of you are already familiar with the basic Linux commands. This portion of the class is devoted to making sure we are all starting from the same point on lonestar. Previous versions of portions of this tutorial can be found here, here, here, here, here, and here. Collective thanks to all those that contributed to those works, which now appear in a single version. Anyone wishing to use this tutorial is welcome.
Objectives:
- Log into lonestar.
- Change your lonestar profile to the course specific format.
- Refresh understanding of basic linux commands with some course organization.
- Review use of the nano text editor program, and become familiar with several other text editor programs.
Tutorial:
Logging into lonestar
Start a new terminal window. On Macs this is done by clicking on the magnifying glass on the right-hand side of the menu bar at the top of the screen and typing "terminal". On Windows this should be done by connecting through Cygwin. Log into lonestar using your account information.
This brings us to our first "code block". There will be 3 types of code blocks used throughout this class:
- Visible
- These are code blocks that you would have no idea what to type without help.
- These will typically be associated with longer/more detailed text above the text box explaining things.
- Hinted
- These are code blocks that you can probably figure out what to type with a hint that goes beyond what the tutorial is requesting. Access the hint by clicking the triangle or hint hyperlink.
- These will always contain an additional hidden code block in case you don't find the hint as clever as we did.
- Hidden
- These code blocks represent things that either there is a good chance you know how to do already, something too straightforward to warrant a hint, or are there to give you the answer if the hint doesn't help. Access the answer by clicking "expand source" on the right hand side of the code block.
Text inside of code blocks represents "right" answers, and should either be typed EXACTLY into the terminal window as is, or copy-pasted, with one notable exception: things that exist within <> symbols represent something that you need to replace before sending the command to the terminal. We try to put informative text within the brackets so you know what to replace it with. If you are ever unsure of what to replace the <> text with, just ask.
Using what we have just taught you about code blocks, log into lonestar. Since this is your first code box, it is probably worth expanding even if you know how to log into lonestar already.
When prompted enter your password, and answer "yes" to the security question.
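If the hidden code block is collapsed in this copy, the login command is the standard ssh form (replace <username> with your TACC user name; the hostname here follows the pattern used later in this tutorial):

```shell
ssh <username>@tacc.lonestar.utexas.edu
```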
Logging into remote computers
As a matter of internet safety, the terminal window knows you are entering a password and doesn't want your neighbor to see what it is. For this reason, nothing will be displayed on the screen as you type your password. While backspace will work if you know you made a mistake, we often find it better to just hit enter and try again.
If you have never logged into lonestar from the computer you are currently using, you will be issued a security warning. The same will be true if you log into any of the other TACC resources, or any other remote computer. If you ever see a security warning when logging into somewhere that you use commonly, you should answer no and try to figure out why you were warned. Otherwise type "yes" to bypass the security check.
Setting up your lonestar profile and other variables
There are many flavors of Linux/Unix shells. The default for TACC's Linux (and most other Linuxes) is bash (bourne again shell), which we will use throughout.
Whenever you login via an interactive shell as you did above, a well-known script is executed by the shell to establish your favorite environment settings. We've set up a common profile for you to start with that will help you know where you are in the file system and make it easier to access some of our shared resources. If you already have a profile set up on lonestar that you like, we want to make sure that we don't destroy it, but it will be important to change it temporarily. Use the ls command to check if you have a .profile already set up in your home directory.
If you already have a .profile file, use the mv command to change the name to something descriptive (for example ".profile_pre_bdib_backup"). Otherwise continue to creating a new .profile file.
Copy our predefined ngsc_profile_user file from the /corral-repl/utexas/BioITeam/scripts/ folder to your home directory as .profile, then use the chmod command to change its permissions to read and write for the user only.
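Put together, the sequence might look like this (a sketch; skip the mv line if you had no existing .profile, and note that the backup filename is just an example):

```shell
ls -a ~                                   # check for an existing .profile
mv ~/.profile ~/.profile_pre_bdib_backup  # only if one already exists
cp /corral-repl/utexas/BioITeam/scripts/ngsc_profile_user ~/.profile
chmod 600 ~/.profile                      # readable/writable by you only
```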
The chmod 600 .profile command marks the file as readable/writable only by you. The .profile script file will not be executed unless it has these permissions settings. Note that the well-known filename is .profile (or .profile_user on some systems), which is specific to the bash shell.
Notice that when you do a normal ls to list the contents of your home directory, this file doesn't appear. That's because it's a hidden "dot file" – a file that has no filename, only an extension. To see these hidden files use the -a (all) switch for ls:
To see even more detail, including file permissions, add the -l (long listing) switch:
Since .profile is executed when you login, to ensure it is set up properly you should first logout:
then log back in:
If everything is working correctly you should now see a prompt like this: tacc:~$
In order to make navigating to the different file systems on lonestar a little easier ($SCRATCH and $WORK), you can set up some shortcuts with these commands that create folders that "link" to those locations. Run these commands when logged into Lonestar with a terminal, from your home directory.
cdh
ln -s $SCRATCH scratch
ln -s $WORK work
ln -s $BI BioITeam
Understanding what your .profile file actually does.
Editing files
There are a number of options for editing files at TACC. These fall into three categories:
- Linux text editors installed at TACC (nano, vi, emacs). These run in your terminal window. vi and emacs are extremely powerful but also quite complex, so nano may be the best choice as a first terminal text editor.
- Text editors or IDEs that run on your local computer but have an SFTP (secure FTP) interface that lets you connect to a remote computer (Notepad++ or Komodo Edit). Once you connect to the remote host, you can navigate its directory structure and edit files. When you open a file, its contents are brought over the network into the text editor's edit window, then saved back when you save the file.
- Software that will allow you to mount your home directory on TACC as if it were a normal disk e.g. MacFuse/MacFusion for Mac, or ExpanDrive for Windows or Mac ($$, but free trial). Then, you can use any text editor to open files and copy them to your computer with the usual drag-drop.
We'll go over nano together in class, but you may find the other options more useful for your day-to-day work, so feel free to go over those sections in your free time to familiarize yourself with their workings and see if one is better for you.
As we will be using nano throughout the class, it is a good idea to review some of the basics. nano is a very simple editor available on most Linux systems. If you are able to use ssh, you can use nano. To invoke it, just type:
nano
You'll see a short menu of operations at the bottom of the terminal window. The most important are:
- ctl-o - write out the file
- ctl-x - exit nano
You can just type in text, and navigate around using arrow keys. A couple of other navigation shortcuts:
- ctl-a - go to start of line
- ctl-e - go to end of line
Be careful with long lines – sometimes nano will split long lines into more than one line, which can cause problems in our commands files, as you will see.
Stringing commands together and controlling their output
In a Linux shell, it is often useful to take the output of one command and save it to a new file rather than having it print to the screen. Linux uses a familiar metaphor for this: "pipes". The Linux operating system expects some "standard input pipe" and gives output back through a "standard output pipe". These are called "stdin" and "stdout". There's also a special "stderr" pipe for errors; we'll ignore that for now. Usually, your shell is filling the operating system's stdin with the stuff you type: commands and their options. The shell passes responses back from those commands to stdout, which the shell usually dumps to your screen. The ability to switch stdin and stdout around is one of the key reasons Linux has existed for decades and beat out many other operating systems. Let's start making use of this. Change to the scratch directory, make a new folder called "piping", and put a listing of the full contents of the $BI folder into a new file called whatsHere.
cds
mkdir piping
cd piping
ls -1 $BI > whatsHere
cat whatsHere
When you execute the ls -1 $BI > whatsHere command, you should notice nothing happening on the screen, and when you cat the whatsHere file, you should see the output you would have expected from the ls -1 $BI command. Often it is useful to chain commands together, using the output of the first command as the input of the second command. Commands are chained together using the "|" character (shift-\, above the return key). Use redirection to put the first 2 lines of the $BI directory contents into the whatsHere file.
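A minimal answer sketch (head -2 keeps only the first two lines; $BI is set by the course profile on Lonestar, so off TACC this falls back to /usr just for practice):

```shell
# first two lines of the directory listing, redirected into whatsHere
ls -1 "${BI:-/usr}" | head -2 > whatsHere
cat whatsHere
```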
Again, you should see your answer only after the cat command. Note that by using a single > you are overwriting the existing contents, and there is no warning that this is happening. Beware of this in the future, as Linux doesn't have an "undo" feature. We will make use of the redirect-stdout character > and the "pass output along as input" character | throughout the course. Not all shells are equal: the bash shell lets you redirect stdout with either > or 1>; stderr can be redirected with 2>; you can redirect both stdout and stderr using &>. If these don't work, use Google to try to figure it out. The website Stack Overflow is a usually trustworthy and well-annotated site for OS and shell help.
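A quick local demonstration of those three redirections (the failing ls is deliberate, so there is something on stderr to capture):

```shell
echo "normal output" > out.txt          # stdout; 1> out.txt is equivalent
ls /no/such/path 2> err.txt || true     # stderr only; this ls fails on purpose
ls . /no/such/path &> both.txt || true  # stdout and stderr together (bash)
grep "No such" err.txt                  # the error message landed in the file
```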
Understanding TACC
Now that we've been using lonestar for a little bit, and have it behaving in a way that is a little more useful to us, let's get more of a functional understanding of what exactly it is and how it works.
Diagram of Lonestar directories: What connects to what, how fast, and for how long.
Lonestar is a collection of 1,888 computers connected to three file servers, each with unique characteristics. You need to understand the file servers to know how to use them effectively.
 | $HOME | $WORK | $SCRATCH |
---|---|---|---|
Purged? | No | No | Files are purged if not accessed for 10 days. |
Backed Up? | Yes | No | No |
Capacity | 1GB | 250GB | Basically infinite (1.4 PB). |
Commands to Access | cdh or cd $HOME/ | cdw or cd $WORK/ | cds or cd $SCRATCH/ |
Purpose | Store Executables | Store Files | Run Jobs |
Executables that aren't available on TACC through the "module" command should be stored in $HOME.
If you plan to be using a set of files frequently or would like to save the results of a job, they should be stored in $WORK.
If you're going to run a job, it's a good idea to keep your input files in a directory in $WORK and copy them to a directory in $SCRATCH where you plan to run your job.
cp $WORK/my_fastq_data/*fastq $SCRATCH/my_project/
Understanding "jobs" and compute nodes.
When you log into lonestar using ssh you are connected to what is known as the login node or "the head node". There are several different head nodes, but they are shared by everyone that is logged into lonestar (not just in this class, or from campus, or even from Texas, but from everywhere in the world). Anything you type onto the command line has to be executed by the head node, and the longer something takes to complete, the more it slows down you and everybody else. Get enough people running large jobs on the head node all at once (say a classroom full of Big Data in Biology summer school students) and lonestar can actually crash, leaving nobody able to execute commands or even log in for minutes, hours, or even days if something goes really wrong. To try to avoid crashes, TACC monitors things and proactively stops processes before they get too out of hand. If you guess wrong about whether something should be run on the head node, you may eventually see a message like the one pasted below. If you do, it's not the end of the world, but repeated messages will result in revoked TACC access and emails where you have to explain to TACC and your PI what you were doing, how you are going to fix it, and how you will avoid it in the future.
Message from root@login1.ls4.tacc.utexas.edu on pts/127 at 09:16 ... Please do not run scripts or programs that require more than a few minutes of CPU time on the login nodes. Your current running process below has been killed and must be submitted to the queues, for usage policy see http://www.tacc.utexas.edu/user-services/usage-policies/ If you have any questions regarding this, please submit a consulting ticket.
So you may be asking yourself what the point of using lonestar is at all if it is wrought with so many issues. The answer comes in the form of compute nodes. There are 1,888 compute nodes that can only be accessed by a single person for a specified amount of time. These compute nodes are divided into different queues called: normal, development, largemem, etc. Access to nodes (regardless of what queue they are in) is controlled by a "Queue Manager" program. You can personify the Queue Manager program as: Heimdall in Thor, a more polite version of Gandalf in The Lord of the Rings when dealing with the balrog, the troll from the "Billy Goats Gruff" tale, or any other "gatekeeper" type. Regardless of how nerdy your personification choice is, the Queue Manager has an interesting caveat: it only speaks one language, "job.sge". "job.sge" is a file that contains information on HOW/WHERE to run things (how many nodes you need, how long you need them for, how to charge your allocation, etc.). The Queue Manager doesn't care WHAT you are running, only HOW to find what you are running (which is specified by a setenv CONTROL_FILE commands line in job.sge). The WHAT is then handled by the file "commands", which contains what you would normally type into the command line to make things happen.
Further launcher.sge reading
To make things easier on all of us, there is a script called launcher_creator.py that you can use to automatically generate the "job.sge" file. This can all be summarized in the following figure:
Using launcher_creator.py
We have created a Python script called launcher_creator.py that makes creating a launcher.sge script a breeze. You will probably want to use it for the rest of the course. Run the script with the -h option to show the help message so we can see what options the script takes:
Short option | Long option | Required | Description |
---|---|---|---|
-n | name | Yes | The name of the job. |
-a | allocation | | The allocation you want to charge the run to. |
-q | queue | Optional | The queue to submit to, like 'normal' or 'largemem', etc. Default: development. |
-w | wayness | Optional | The number of jobs in a job list you want to give to each node. (Default is 12 for Lonestar, 16 for Stampede.) |
-N | number of nodes | Optional | Specifies a certain number of nodes to use. You probably don't need this option, as the launcher calculates how many nodes you need based on the job list (or Bash command string) you submit. It sometimes comes in handy when writing pipelines. |
-t | time | Yes | Time allotment for job; format must be hh:mm:ss. |
-e | | Optional | Your email address if you want to receive an email from Lonestar when your job starts and ends. |
-l | launcher | Optional | Filename of the launcher. (Default is |
-m | modules | Optional | String of module management commands. |
-b | Bash commands | Optional | String of Bash commands to execute. |
-j | Command list | Optional | Filename of list of commands to be distributed to nodes. |
-s | stdout | Optional | Setting this flag outputs the name of the launcher to stdout. |
We should mention that launcher_creator.py does some under-the-hood magic for you and automatically calculates how many cores to request on lonestar, assuming you want one core per process. You may not notice it, but this saves you from ever having to think about a confusing calculation.
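As a hypothetical example tying the options in the table together (the allocation is a placeholder; -l is passed explicitly here just so the launcher's filename is known for the qsub step):

```shell
launcher_creator.py -n my_first_job -t 00:30:00 -a <allocation> \
    -j commands -l my_first_job.sge
qsub my_first_job.sge
```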
Running a job
Now that we have an understanding of what the different parts of running a job are, let's actually run a job. Move to your scratch directory, make a new folder called "my_first_job" (remember not to use spaces in file/folder names), make a new file called "commands" inside of that directory using nano, and put 4-12 lines with 1 command on each line in that file, being sure to remember to pipe the output to 1 or more files.
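One way the setup might look, built with printf instead of nano just to make the sketch self-contained (the four commands are arbitrary examples; off TACC, $SCRATCH won't be set, so this falls back to /tmp):

```shell
cd "${SCRATCH:-/tmp}"    # on Lonestar: cds
mkdir -p my_first_job
cd my_first_job
# one command per line; each command redirects its own output to a file
printf '%s\n' \
  'date > date.out' \
  'hostname > hostname.out' \
  'ls -l "$HOME" > home_listing.out' \
  'df -h . > disk.out' > commands
wc -l commands
```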
Interrogating the launcher queue
Here are some of the common commands that you can run and what they will do or tell you:
Command | Purpose | Output(s) |
---|---|---|
qsub | Submit your job to the Queue Manager | A series of checks and, if all has gone well, eventual submission to the queue |
qstat | Check the status of your job | Shows all of your currently submitted jobs and a state for each: "qw" means it is still queued and has not run yet; "r" means it is currently running |
qdel <job-ID> | Delete a submitted job before it is finished running (note: you can only get the job-ID by using qstat) | There is no confirmation here, so be sure you are deleting the correct job. There is nothing worse than accidentally deleting a job that has sat in the queue a long time because you forgot something on a job you just submitted. |
showq | You are a nosy person and want to see everyone that has submitted a job | Typically a huge list of jobs |
showq -u | Shows only your jobs | Very similar to the qstat output |
If the queue is moving very quickly you may not see much output, but don't worry, there will be plenty of opportunity later in the course.
Evaluating your first job submission
Based on our example you may have expected 4 new files to have been created during the job submission, but instead you will find 3 extra files as follows: <job_name>.e<job-ID>, <job_name>.pe<job-ID>, and <job_name>.o<job-ID>. When things have worked well, these files are typically ignored; when your job fails, they offer insight into why, so you can fix things and resubmit.
Many times while working with NGS data you will find yourself with intermediate files. Two of the more difficult challenges of analysis can be trying to decide what files you want to keep, and remembering what each intermediate file represents. Your commands files can serve as a quick reminder of what you did so you can always go back and reproduce the data. Using arbitrary endings (.out in this case) can serve as a way to remind you what type of file you are looking at. Since we've learned that the scratch directory is not backed up and is purged, see if you can turn your 4 intermediate files into a single final file using the cat command, and copy the new final file, the .sge file you created, and the 3 extra files to work. This way you should be able to come back and regenerate all the intermediate files if needed, and also see your final product.
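Assuming the four .out files and launcher filename from the earlier examples, the consolidation might look like this (a sketch; the exact filenames depend on what you put in your commands file and what you named your launcher):

```shell
cd $SCRATCH/my_first_job
cat *.out > first_job_submission.final.output
cp first_job_submission.final.output *.sge \
   my_first_job.e* my_first_job.pe* my_first_job.o* $WORK/
```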
Moving beyond the preinstalled commands on TACC
If (or when) you looked at what our edits to the .profile file did, you would have seen that the last lines were a series of "module load XXXX
" commands, and a promise to talk more about them later. I'm sure you will be thrilled to learn that now is that time... As a "classically trained wet-lab biologist" one of the most difficult things I have experienced in computational analysis has been in installing new programs to improve my analysis. Programs and their installation instructions tend (or appear) to be written by computational biologists in what at times feels like a foreign language, particularly when a particular when things start going wrong. Luckily TACC (and the BioIteam) help get around a large number of these problems by preinstalling many programs if you know where to look.
TACC modules
Modules are programs or sets of programs that have been set up to run on TACC. They make managing your computational environment very easy. All you have to do is load the modules that you need and a lot of the advanced wizardry needed to set up the linux environment has already been done for you. New commands just appear.
To see all modules available in the current context, type:
module avail
Remember you can hit the "q" key to exit out of the "more" system, or just keep hitting return to see all of the modules available. The module avail command is not the most useful of commands if you only have some idea of what you are looking for. For example, imagine you want to align a few million next generation sequencing reads to a genome, but you don't know what your options are. You can use the following command to get a list of programs that may be useful:
module keyword alignment
Note that this may not be an inclusive list as it requires the name of the program, or its description to contain the word "alignment". Looking through the results you may notice some of the programs you already know and use for aligning 2 sequences to each other such as blast and clustalw. Try broadening your results a little by searching for "align" rather than "alignment" to see how important word choice is. When you compare the two sets of results you will see that one of the new results is:
bowtie: bowtie/1.0.0, bowtie/1.1.1, bowtie/2.1.0 Ultrafast, memory-efficient short read aligner
This may sound much better, but you still only have limited information about it. To learn more about a particular program, try the following 2 commands:
module spider bowtie module spider bowtie/2.1.0
In the first case, we see information about what versions of bowtie lonestar has available for us, but really that is just the same information as we had from our previous search. This can be particularly useful when you know what program you want to use but don't know what versions are available. In the second case we now have more detailed information about the particular version of interest including websites we can go to to learn more about the program itself.
Once you have identified the module that you want to use, you load it using the following command:
module load bowtie/2.1.0
While not strictly necessary, using the "/2.1.0" text is a very good habit to get into, as it controls which version is loaded. In this case "2.1.0" is the default version, and module load bowtie will behave identically to module load bowtie/2.1.0, but that will not always be the case, particularly if in the future TACC installs a new version of bowtie. Since the module load command doesn't give any output, it is often useful to check what modules you have loaded with either of the following commands:
module list module list bowtie
The first example will list all currently loaded modules, while the second will only list modules containing "bowtie" in the name. If you see that you have loaded the wrong version of something, that a module is conflicting with another, or you just don't feel like having it turned on anymore, use the following command:
module unload bowtie
You will notice when you type module list that you have several modules loaded already. These come from both TACC defaults (TACC, linux, etc.) and several that are used so commonly, both in this class and by biologists generally, that it becomes cumbersome to type "module load python" all the time; we therefore just have them turned on by default by putting them in our profile to load on startup. As you advance in your own data analysis you may start to find yourself constantly loading modules as well. When you become tired of doing this (or see jobs fail to run because the modules that load on the compute nodes are based on your profile plus commands given to each node), you may want to add additional modules to your profile. This can be done using the "nano .profile" command.
Transferring files to and from lonestar with a Mac/Linux machine
Lonestar is tremendously powerful and capable of doing many things, but as some of you are probably finding slightly frustrating, it doesn't have much in the way of a GUI (graphical user interface), and does not have the same scrolling capabilities we are used to on our own computers, let alone the ability to actually visualize graphs and more meaningful representations of our data. In order to do these types of things, we have to get our data off of lonestar and onto our own computers. On our diagram of lonestar we showed a boundary of what can be copied and moved within TACC, and listed the scp command as a way of moving files to other computers outside of TACC. scp works the same way as the cp command; it just includes more detailed information on the path where the file is, or where the file is going. Here we will transfer our recently created "first_job_submission.final.output" file from lonestar to the computer you are sitting at as an example. First navigate to your work directory to find your final output file, and determine the full path to that location.
Next we'll transfer the file to a new "temp" directory on the computer using the scp command. Open a new terminal window and use the following code (make sure to replace <> and everything between them):
mkdir temp
cd temp
ls
scp <username>@tacc.lonestar.utexas.edu:<pwd_from_other_window>/first_job_submission.final.output .   # remember, the . at the end signifies the current location
ls
There should be no files in the temp directory the first time you use the ls command, and after the scp command you should see your output file in that directory. Wildcards can be used to transfer files meeting specific conditions, or entire folders can be copied back and forth. Let's transfer the entire my_first_job directory to this same folder. You will need to do things on both the TACC terminal and the desktop terminal. Add the -r option to the scp command to "recursively" transfer a folder and all its contents.
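With -r, the folder transfer might look like this (run from the temp directory on your own computer; the path placeholder comes from running pwd in your TACC window):

```shell
scp -r <username>@tacc.lonestar.utexas.edu:<pwd_from_other_window>/my_first_job .
ls my_first_job
```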
You should now see the my_first_job directory and all its contents on your desktop.
Files can be moved to lonestar in the same way, just by adding the "<username>@tacc.lonestar.utexas.edu:" location information to the destination portion of the command.
Transferring files to and from lonestar with Windows
This concludes the linux and lonestar refresher tutorial.
Big Data In Biology Genome Variant Analysis Course 2015 home.