Introduction

We are developing a cluster for local ATLAS computing using the TACC Rodeo system to boot virtual machines.  If you just want to use the system, see the next section and ignore the rest (which describes the virtual machine setup and is a bit out of date as of Sep 2015).

Transferring data from external sources

The Tier-3 nodes do not directly connect to any storage space. We can access files via the xrootd protocol from the /data disk that is mounted by all the workstations and by utatlas.its.utexas.edu (see below). Files must therefore first be transferred to the tau workstations or to utatlas.its.utexas.edu. Methods include:

  • Rucio download for Grid datasets
  • xrootd copy for files on CERN EOS/ATLAS Connect FaxBox/ATLAS FAX (Federated XrootD)
  • See /wiki/spaces/utatlas/pages/50626812 for files on ATLAS Connect FaxBox, TACC, or CERN
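
For reference, the first two methods look roughly like this. This is a minimal sketch assuming the standard ATLAS software environment (setupATLAS) is available; the dataset name and file paths are placeholders to be replaced with your own:

Code Block
bash
# Rucio download of a Grid dataset (scope:name below is a placeholder)
setupATLAS
lsetup rucio
voms-proxy-init -voms atlas
rucio download user.yourname:user.yourname.mydataset

# xrootd copy of a single file from CERN EOS (both paths are placeholders)
xrdcp root://eosatlas.cern.ch//eos/atlas/user/y/yourname/file.root /data/yourname/file.root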


Getting started with Bosco

The Tier-3 uses utatlas.its.utexas.edu as its submission host - this is where the Condor scheduler lives. However, you do not log in there to submit jobs.

Bosco is a job submission manager designed to manage job submission across different resources. It is needed to submit jobs from our workstations to the Tier-3.

Make sure you have an account on our local machine utatlas.its.utexas.edu, and that you have passwordless ssh set up to it from the tau* machines.

To do this, create an RSA key pair and copy your .ssh folder onto the tau machine using scp, or install the public key directly as sketched below.
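A minimal sketch of the standard key setup, run from a tau* machine and assuming the same username on both hosts (ssh-copy-id appends your public key to ~/.ssh/authorized_keys on the remote side):

Code Block
bash
# Generate an RSA key pair (leave the passphrase empty for passwordless login)
ssh-keygen -t rsa

# Install the public key on the submission host (username is a placeholder)
ssh-copy-id username@utatlas.its.utexas.edu

# Verify that login no longer prompts for a password
ssh username@utatlas.its.utexas.edu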

Then carry out the following instructions on any of the tau* workstations:

Code Block
bash
cd ~
curl -o bosco_quickstart.tar.gz ftp://ftp.cs.wisc.edu/condor/bosco/1.2/bosco_quickstart.tar.gz
tar xvzf ./bosco_quickstart.tar.gz
./bosco_quickstart
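
Once the quickstart has connected a cluster, jobs are submitted from the workstation with ordinary Condor submit files. A minimal sketch, assuming bosco_quickstart registered utatlas.its.utexas.edu as a Condor batch resource (the username and resource string below are placeholders; use the string the quickstart prints for your setup):

Code Block
bash
# test.sub - minimal grid-universe submit file for Bosco (resource string is a placeholder)
cat > test.sub <<'EOF'
universe      = grid
grid_resource = batch condor username@utatlas.its.utexas.edu
executable    = /bin/hostname
output        = test.out
error         = test.err
log           = test.log
queue
EOF

condor_submit test.sub   # submit through the local Bosco install
condor_q                 # watch the job in the local queue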

...

Code Block
bash
ssh username@alamo.futuregrid.org

Next, visit the list of instances to see which nodes are running. Then simply run

Code Block
bash
ssh root@10.XXX.X.XX

and you are now accessing a node!