
Overview

The main point of using Lonestar is that it is a massive computer cluster. If we run a command when logged into Lonestar, we are running it on one of the two low-memory, low-power "head" (or "login") nodes at TACC. When we do serious computations that will take more than a few minutes or use a lot of RAM, we need to submit them to the other 1,888 compute nodes (22,656 cores) on Lonestar.

In this section we are going to learn how to submit a job to the Lonestar cluster.

Diagram of how a job gets run on Lonestar

...

The launcher_creator.py script simply helps you create jobs.sge easily, saving you some time editing the file by hand (and potentially messing it up).
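
For example, a run might look something like the sketch below. The flag names and values are assumptions (check launcher_creator.py -h for the options actually available on your system), and the commands file and allocation name are hypothetical:

    # put one command per line in a plain-text commands file (hypothetical command)
    echo "my_long_program input.txt > output.txt" > commands

    # build jobs.sge from that file; -n job name, -j commands file, -t run time,
    # -q queue, -a allocation are assumed flags (verify with launcher_creator.py -h)
    launcher_creator.py -n my_job -j commands -t 01:00:00 -q normal -a my_allocation

This produces a jobs.sge file containing the queue directives plus your commands, ready to submit.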

Launcher

...

In the examples we tend to say that a job can be "interactive" or should be "submitted to the TACC queue". The first means that you can type the command and run it directly; it should be short enough that it does not tie up the TACC head node. The second means that you should go through the launcher submission process described here.
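
For example (the file names and commands here are hypothetical):

    # interactive: finishes in seconds, fine to run directly on the head node
    head my_sample.fastq

    # queued: long-running or memory-hungry, so it goes into the commands file
    # and gets submitted through the launcher instead
    echo "my_long_program my_sample.fastq > results.txt" >> commands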

...

We should mention that launcher_creator.py does some under-the-hood work for you: it automatically calculates how many cores to request on Lonestar, assuming you want one core per process. This saves you from ever having to do that somewhat confusing calculation yourself.
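
As a rough illustration of the calculation it handles, assuming one core per command and the 12 cores per node implied by the figures above (22,656 cores across 1,888 nodes):

    # count how many commands are in your commands file
    wc -l commands        # suppose this reports 30 commands

    # one core per command, 12 cores per node:
    #   30 commands -> ceil(30 / 12) = 3 nodes -> request 3 * 12 = 36 cores
    # launcher_creator.py works this out and writes it into jobs.sge for you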

Lonestar Queue

The next step is to submit the job to the queue using the launcher file.
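
Since the launcher file is jobs.sge, you are talking to Lonestar's SGE scheduler; the standard SGE commands for that are qsub, qstat, and qdel. A minimal sketch:

    # submit the launcher file to the queue
    qsub jobs.sge

    # check on your jobs while they are waiting or running
    qstat

    # delete a job you no longer want, using the job ID shown by qstat
    qdel <job_id>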

...