Introduction

...

Getting started with Bosco

The Tier-3 uses utatlas.its.utexas.edu as a submission host; this is where the Condor scheduler lives.

Bosco is a job submission manager designed to manage job submissions across different resources. It is needed to submit jobs from our workstations to the Tier-3.

Make sure you have an account on our local machine utatlas.its.utexas.edu, and that you have passwordless ssh set up to it from the tau* machines.

To do this, create an RSA key and copy your .ssh folder onto the tau machine using scp.
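A minimal sketch of the key setup, assuming the default key location and your own account name in place of "username" (the ssh-copy-id and verification steps are shown commented out, since they require network access to the submission host):

```shell
# Make sure the .ssh directory exists, then create an RSA key pair
# unless one is already present (the guard avoids overwriting a key;
# an empty passphrase enables passwordless login).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Install the public key on the submission host ("username" is a
# placeholder for your own account):
# ssh-copy-id username@utatlas.its.utexas.edu

# Afterwards this should log you in without a password prompt:
# ssh username@utatlas.its.utexas.edu hostname
```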

Then carry out the following instructions on any of the tau* workstations:

Code Block
bash
cd ~
curl -o bosco_quickstart.tar.gz ftp://ftp.cs.wisc.edu/condor/bosco/1.2/bosco_quickstart.tar.gz
tar xvzf ./bosco_quickstart.tar.gz
./bosco_quickstart

...

Code Block
Requirements = ( IS_RCC =?= undefined )
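This line belongs in the Condor submit description file for your job. A minimal sketch of such a file, where the executable and file names are placeholders and only the Requirements line comes from this page:

```
# Hypothetical submit description file (run.sh, job.* are placeholders)
universe     = vanilla
executable   = run.sh
output       = job.out
error        = job.err
log          = job.log
# Keep the job off RCC resources, as described above:
Requirements = ( IS_RCC =?= undefined )
queue
```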

A snapshot of ATLAS Connect status can be seen at this link. "UTexas" shows the number of outside jobs executing in our Tier-3, while "Tier3Connect UTexas" shows activity we induce on other sites.

VM configuration

Our virtual machines are CentOS 6 instances configured with CVMFS to import the ATLAS software stack from CERN. Each boots its own instance of the Condor job scheduling system. They share the Squid HTTP caching server used by our local workstations (on utatlas.its.utexas.edu), which helps reduce the network traffic required for CVMFS and for database access through the Frontier system.

...

Code Block
bash
ssh username@alamo.futuregrid.org

Then visit the list of instances to see which nodes are running, and simply

Code Block
bash
ssh root@10.XXX.X.XX

and you are now accessing a node!