Introduction
...
Getting started with Bosco
The Tier-3 uses utatlas.its.utexas.edu as its submission host; this is where the Condor scheduler lives. Bosco is a job submission manager designed to manage job submissions across different resources, and it is what we use to submit jobs from our workstations to the Tier-3.
Make sure you have an account on our local machine utatlas.its.utexas.edu, and that you have passwordless ssh set up to it from the tau* machines.
To do this, create an RSA key and copy your .ssh folder onto the tau machine using scp.
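As a sketch, the key setup can look like the following (run on a tau* workstation; `username` is a placeholder for your own account, and ssh-copy-id is an alternative to copying the .ssh folder by hand):

```
# Create an RSA key pair if you don't already have one.
ssh-keygen -t rsa
# Install the public key on the submission host
# ("username" is a placeholder for your account name).
ssh-copy-id username@utatlas.its.utexas.edu
# Verify that ssh now works without a password prompt:
ssh username@utatlas.its.utexas.edu hostname
```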
Then carry out the following instructions on any of the tau* workstations:
```
cd ~
curl -o bosco_quickstart.tar.gz ftp://ftp.cs.wisc.edu/condor/bosco/1.2/bosco_quickstart.tar.gz
tar xvzf ./bosco_quickstart.tar.gz
./bosco_quickstart
```
...
```
Requirements = ( IS_RCC =?= undefined )
```
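For context, this expression lives in a Condor submit description file. The sketch below is hypothetical apart from the Requirements line itself: the executable name and file names are placeholders, not part of this page.

```
# Hypothetical submit description file; only the Requirements line
# comes from this page, the rest is illustrative.
universe     = vanilla
executable   = myjob.sh
output       = myjob.out
error        = myjob.err
log          = myjob.log
# Restrict where the job may run:
Requirements = ( IS_RCC =?= undefined )
queue
```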
A snapshot of ATLAS Connect status can be seen at this link. "UTexas" shows the number of outside jobs executing in our Tier-3, while "Tier3Connect UTexas" shows activity we induce on other sites.
VM configuration
Our virtual machines are CentOS 6 instances configured with CVMFS for importing the ATLAS software stack from CERN. They also boot individual instances of the Condor job scheduling system. They access the same instance of the Squid HTTP caching server that our local workstations use (on utatlas.its.utexas.edu), which helps reduce the network traffic required for CVMFS and for database access via the Frontier system.
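As a minimal sketch of what this setup means in practice, a node (or workstation) can check that the CVMFS ATLAS repository is actually mounted before sourcing the ATLAS environment. The `/cvmfs/atlas.cern.ch` mount point and the ATLASLocalRootBase setup script are the standard ATLAS conventions; the helper function is ours, for illustration only:

```shell
# Succeed only if the given directory exists and is non-empty,
# which is a reasonable proxy for "CVMFS repository is mounted".
check_mounted() {
  [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]
}

if check_mounted /cvmfs/atlas.cern.ch; then
  # Standard ATLAS environment setup from CVMFS.
  export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
  . "${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh"
else
  echo "CVMFS ATLAS repository is not mounted on this machine" >&2
fi
```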
...
```
ssh username@alamo.futuregrid.org
```
Then visit the list of instances to see which nodes are running, and simply
```
ssh root@10.XXX.X.XX
```
and you are now accessing a node!