FAQ
Also see our About R and R Studio Server and About Python and JupyterHub server help pages.
BRCF account management
TACC account no longer required for BRCF account creation
As of summer 2019, having an active TACC account is no longer required to set up a BRCF account using the BRCF Account Request page.
I forgot my BRCF account password
If you have an active UT EID, you can use the BRCF Account Management application to reset your password after EID validation, using this page: Password Reset by User. See Resetting a forgotten password for instructions if you need them.
If you do not have an active UT EID, please Contact Us to reset your password.
BRCF accounts for non-UT users
It is possible for collaborators outside of UT to obtain BRCF credentials in order to access BRCF PODs. There are two paths.
1) For full remote access (including Samba browsing), the external collaborator needs the UT VPN service. This is only available to UT students/staff, but PIs can set up "0-hour appointments" at UT, which provide access to the UT VPN and cost nothing (other than the paperwork). With this path the collaborator can use the BRCF Account Request application to set up their own account, specifying the appropriate PI under Affiliation.
2) For access to SSH and the web applications, the external collaborator would just need a BRCF account. One complication is that access to the BRCF Account Request website is only allowed from within the UT campus network, or using the UT VPN service, which they would not have. A work-around is to have the POD owner or delegate Contact Us and authorize us to create an account for the collaborator (see Available BRCF PODs for POD owner and delegate names). We will do so, and assign a password which we will share with the UT-affiliated requestor in UT's Stache (https://stache.utexas.edu/login). The requestor can then provide that password to the collaborator, who will need it for web application access. (Also note that password changes can only be made through the network-restricted BRCF Account Request application.)
A second complication is that different POD services are restricted in different ways. For example, SSH access outside the UT campus network or without the UT VPN service is only supported using public key encryption. So for SSH access, the collaborator will need to generate a public/private key pair and email us the public key to install in their home directory. They can then SSH to POD compute servers without providing a password. See Passwordless Access via SSH for details.
See POD Service Access for a full description of which POD services/resources are available under which circumstances.
Compute server access
I get "Permission denied, please try again" or other error trying to login to a compute server
This is usually due to providing the wrong user name or password.
First make sure that you are using the correct BRCF credentials by logging into the BRCF Account Request application.
If you can login to that application, check that you have access to the server you are trying to reach. Accessible PODs are listed under "PODs you have (or will have) an account on" on your home page in the BRCF Account Request application.
If you can't login to that application, try resetting your password following the steps described at Resetting a forgotten password.
If none of these steps work, please Contact Us for assistance.
I can't SSH to POD's compute servers from off campus or from DMS
Per a directive from the UT Information Security Office (ISO), SSH access using passwords from outside the UT campus network has now been blocked. Please see the POD access discussion for more information.
Networks at Dell Medical School are not part of the UT campus network, so require use of the UT VPN service.
How to set up password-less access to POD compute servers using public key encryption
First set up a public/private key pair as described in Passwordless Access via SSH. If you have access to the POD from the UT campus network, you can set up the public key yourself. Since your home directory is on shared storage visible to all compute servers, you can perform this setup on any of your POD's compute servers. Just be sure that:
- your ~/.ssh directory on the POD has permissions rwx------ (o700)
- your public key is on one line in ~/.ssh/authorized_keys
- the ~/.ssh/authorized_keys is owned by you and has permissions rw-r----- (o640)
If you do not have access to the POD from the UT campus network, email us (rctf-support@utexas.edu) the public key file along with the BRCF user name and POD, and we will set it up for you.
The private key corresponding to this public key must be in the ~/.ssh directory (permission o700) on the computer used to access the POD, and the private key file (usually id_rsa) must have permissions rw------- (o600).
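As a rough sketch of these requirements (the Passwordless Access via SSH page has the authoritative steps; this assumes the default id_rsa key name and that you have copied the .pub file to the POD yourself), the setup might look like this:

# On your laptop/desktop: generate a key pair (accept the default file name, id_rsa)
ssh-keygen -t rsa

# On a POD compute server, after copying the public key file there
# (or after we install it for you): set the required permissions
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 640 ~/.ssh/authorized_keys

# Back on your laptop/desktop: the private key and ~/.ssh must be protected
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa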
You should then be able to login to any POD compute server without being prompted for a password. Note that the need to specify non-standard port 222 was removed in June 2022.
# Login to a compute server where my public key is in ~/.ssh/authorized_keys
ssh username@xxxxcomp01.ccbb.utexas.edu

# Copy files to the compute server from off campus using SCP
scp local_file username@xxxxcomp01.ccbb.utexas.edu:~/

# Use rsync to copy files to the compute server from off campus
rsync -avrP -e 'ssh -i ~/.ssh/id_rsa' \
  local_directory/ username@xxxxcomp01.ccbb.utexas.edu:~/local_directory/
How to set up the UT VPN service
Off-campus access to BRCF PODs can be enabled using UT's VPN (Virtual Private Network) service. Once this software is installed, activating it from an off-campus computer makes that computer appear as if it is part of the UT campus network, thus enabling Samba remote file system access, compute server web applications, and SSH login.
Step-by-step instructions for setting up UT's VPN software are described here: https://utexas.atlassian.net/wiki/pages/viewpage.action?spaceKey=networking&title=Connecting+to+the+UT+VPN+Service. In addition, this remote_computing_software_download_instructions.pdf PDF provides detailed information about how to configure the UT VPN service, set up Duo 2-factor authentication, and install software for remote SSH access in Windows.
Briefly, the setup process is as follows:
- Create a Duo two-factor authentication (2FA) account at the Duo self-registration portal: https://utdirect.utexas.edu/apps/duo/register/
- Install the Cisco AnyConnect client.
- We recommend downloading the client software and installing it directly rather than connecting to the VPN service in a browser.
- When access to UT network resources is needed, connect the Cisco AnyConnect client to vpn.utexas.edu.
- Supply your UT EID and password as the 1st factor
- Acknowledge the Duo 2nd factor request
Note that you must have an active high-assurance UT EID to use the VPN service, which will be the case for all UT students, faculty and staff.
Non-UT students, faculty and staff will need to obtain a 0-hour appointment through a UT sponsor (generally the POD owner; see Available PODs) in order to use the UT VPN service. The 0-hour appointment will allow them to obtain a high-assurance UT EID through the EID upgrade process. See https://ut.service-now.com/sp?id=kb_article&number=KB0011333 for more information.
VPN issue in Windows 10 WSL
Users have reported problems accessing BRCF resources from a Windows Subsystem for Linux (WSL) Ubuntu 18.04 shell, with the UT VPN service active. The error reported is usually something like "hostname not found", because Internet access is not working at all so the initial hostname-to-IP address DNS lookup fails.
This post (https://github.com/microsoft/WSL/issues/5068#issuecomment-637880306) suggests that the problem is the interaction between WSL and the specific VPN application. Users have reported resolving the issue in one of the following ways:
- Use WSL in the older WSL1 mode rather than the newer WSL2 mode
- Uninstall the Cisco AnyConnect application, and instead download and install the Windows app store version of AnyConnect
- Use the built-in Windows VPN instead of AnyConnect
For information on installing WSL in Windows 10, see this post: https://docs.microsoft.com/en-us/windows/wsl/install-win10.
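If you want to confirm that DNS is indeed the failing piece before trying the work-arounds above, a few quick checks from inside the WSL shell can help (these are standard Linux commands, nothing BRCF-specific):

# Inside the WSL shell, with the VPN active:
cat /etc/resolv.conf        # shows which DNS server WSL is trying to use
getent hosts utexas.edu     # fails or hangs if hostname lookup is broken
ping -c 1 8.8.8.8           # tests raw connectivity, bypassing DNS entirely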
Desktop file system access via Samba
How can I configure POD storage to be accessible via Windows File Explorer or Mac's Finder?
This is done using the Samba remote file system protocol. See Samba remote file system access for how to configure this on your Windows or Macintosh computer.
Can't connect to a Samba share after POD maintenance
It sometimes happens that a Samba share that was mounted before BRCF POD maintenance no longer works afterwards, or attempting to mount a new share does not work. The first thing to try in this situation is to reboot your laptop/desktop computer, as we have seen a number of Samba-related issues that are only cleared by a clean reboot. If connection problems persist, please Contact Us for assistance.
"The network folder specified is currently mapped using a different user name and password" (Windows only)
The message can be wrong because:
- The network folder specified is not mapped at all, or
- The server is already mapped (to a different drive letter and different share) using the same userid and password.
So the error message can be wrong on two counts!
Easy fix: you can map to different shares on the same server without re-specifying the credentials. Specify the username when asked, but no password is needed.
See this Microsoft post for full details: https://answers.microsoft.com/en-us/windows/forum/windows_vista-networking/the-network-folder-specified-is-currently-mapped/928f6313-fe2c-4d2d-a247-152ec022e062?page=4
My password works for Samba on Macs but not Windows (or vice versa)
This can happen when your password contains certain special non-alphanumeric characters. Try changing your password so that it contains only numbers and letters.
If you have an active UT EID, you can use the BRCF Account Management application to change your password (after EID validation) using this page: Password Reset by User. See Resetting a forgotten password for instructions if you need them. If you do not have an active UT EID, please Contact Us to reset your password.
If you logged into the computer through the Austin domain using your EID credentials, the computer will try to connect to the storage server with those credentials (AUSTIN\<your EID>) instead of your BRCF credentials. To change the credentials, select 'Connect using a different user', type '.\<your BRCF username>', and then enter your BRCF password.
Transferring data to/from PODs
I'm having trouble transferring files to/from POD
You are probably going through a POD compute server instead of performing direct transfers to/from the POD's storage server.
BRCF storage servers are just Linux servers, but ones dedicated to running a ZFS file system, not applications. They are not available for interactive shell (ssh) access; however, they provide direct file transfer capability via scp or rsync. Using the storage server as a file transfer target is useful when you have many files and/or large files, as it provides direct access to the shared storage. Going through a compute server is also possible, but involves an extra compute-server-to-storage-server network hop – it's a fast 10g hop, but a hop nonetheless. Also, since POD compute servers are used for general computation, that usage could be interfering with the system's ability to handle network I/O.
The solution is to target your POD's storage server directly. All user accounts are implemented there, but instead of having an interactive bash shell you have a special rssh shell, which allows you to use scp or rsync. When you do this, you are going directly to where the data is physically located, so you avoid extra network hops and do not burden heavily-used compute servers.
Note that direct storage server file transfer access is only available from UT network addresses, from TACC, or using the UT VPN service.
Also, we recommend rsync over scp. While both work, rsync generally has better performance and can pick up where it left off when transferring large directories. Unfortunately, its man page is about the length of War and Peace, but 90% of the time there is an easy formula. For example, to copy a directory ~/foo at TACC to the shared Work area directory for the GSAF group on the GSAF POD:
# Target the storage server from the UT campus network or
# off campus with the UT VPN service active
rsync -avrW ~/foo/ \
  abattenh@gsafstor01.ccbb.utexas.edu:/stor/work/GSAF/foo/

# Target the storage server from off campus using SSH keys
rsync -avrW -e 'ssh -i ~/.ssh/id_rsa' ~/foo/ \
  abattenh@gsafstor01.ccbb.utexas.edu:/stor/work/GSAF/foo/
Note that the trailing slash ( / ) characters for the source and target directory names are very important.
Here are the steps to construct a <user>@<storage_server>:/stor/work/<group>/<dir> path such as the one above:
- the <user> portion is your BRCF account name (abattenh above)
- the <storage_server> portion is the full host name of your BRCF storage server (gsafstor01.ccbb.utexas.edu above) - see this table to identify your POD's storage server: Available PODs
- the <group> portion matches your Linux group name on your POD (GSAF above) - type groups when logged into one of your POD compute servers to list your group(s)
- the -avrW options say to sync the entire directory tree, and to handle large files efficiently.
Finally, it is important that large files not be transferred to your POD Home directory, as that directory has a 100 GB quota. See Home directory quotas for more information.
How can I get my GSAF sequencing data to TACC?
Since GSAF sequencing data is now delivered via Illumina BaseSpace, it can involve several steps to get that data over to TACC where initial processing steps often take place: first extracting the data from BaseSpace to some local storage, then transferring it from there to TACC.
For GSAF sequencing customers with access to the GSAF_POD, an alternative exists for transferring data. The gsafcbig01.ccbb.utexas.edu compute server has a FUSE mount of the GSAF sequencing data archive: gsafcbig01.ccbb.utexas.edu:/mnt/corral. With appropriate credentials (my_username below), and knowing your GSAF run (SAnnnnn below) and job (JAnnnnn below) numbers, data can be transferred from gsafcbig01 like this, assuming you're logged in to one of the TACC compute clusters:
rsync -avrWP my_username@gsafcbig01.ccbb.utexas.edu:/mnt/corral/SAnnnnn/Project_JAnnnnn/ $SCRATCH/JAnnnnn/
Note that recent Runs/Jobs may not show up here right away, since archiving is part of offline run post-processing.
The /mnt/corral directory can be accessed directly if you are logged in to gsafcbig01.ccbb.utexas.edu; however, because of the large number of run directories, doing ls /mnt/corral can take a very long time to return results. A faster way to query the FUSE mount contents is to specify the run, e.g. ls /mnt/corral/SAxxxxx.
Also note that the gsafcbig01 FUSE mount sometimes experiences connection issues, so if executing a command like this reports an error that appears connection related (e.g. "endpoint not connected", "stale file handle"), Contact Us to resolve the problem.
If you are a GSAF customer and have a TACC account, you can request access to the GSAF_POD. If you do not yet have a BRCF account, please use the BRCF Account Request application to set one up. If you already have a BRCF account but do not have GSAF POD access, Contact Us to request it.
FTP hangs when attempting to get/put
After successfully connecting to an external FTP/SFTP server from a POD compute node, a get or put command may hang. This is because FTP transfers through a firewall require that passive mode be enabled. Passive mode can be enabled using any of the following approaches:
- Enter the passive command after successful FTP login
If you see a message like "Passive mode on", you're fine.
ftp> passive
Passive mode on.
ftp> ls
227 Entering Passive Mode (130,14,29,35,196,253).
150 Opening BINARY mode data connection for file list
- If you see something like "Passive mode off", just enter passive again (mode is toggled)
- Pass the -p flag to the ftp program (e.g., ftp -p ftp-private.ncbi.nlm.nih.gov).
- Use the pftp command instead of ftp (e.g., pftp -p ftp-private.ncbi.nlm.nih.gov).
- Edit your configuration file to make passive the default.
Home directory quotas and snapshots
Staying under your 100 GB quota
Home directories are intended for keeping small, personal files. To enforce this, there is a 100 GB quota on all user Home directories. The shared Work or Scratch areas for your group(s) are available for larger files.
Some file transfer programs (e.g. NCBI's SRA toolkit, Globus, and others) target sub-directories of the user's Home directory by default. Such programs usually also offer command-line or configuration file options to change the default target location. If a different directory cannot be specified, the physical directory can be replaced by a symbolic link to a location in the Scratch or Work area so that transferred files go there instead.
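For example, here is a rough sketch of the symbolic-link approach, using the SRA toolkit's default ~/ncbi directory and a hypothetical group Scratch path (adjust both the group and user names for your own POD):

# Move the existing default directory aside and replace it with a symlink
# pointing to group Scratch space (paths here are illustrative only)
mkdir -p /stor/scratch/MyGroup/my_username/ncbi
mv ~/ncbi ~/ncbi.old 2>/dev/null      # preserve anything already there
ln -s /stor/scratch/MyGroup/my_username/ncbi ~/ncbi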
How much space am I using in my home directory?
When you open a terminal window on one of the compute nodes (e.g. SSH to one of your compute nodes) your current Home directory quota is displayed automatically. You can also enter the quota command on the command line at any time. In either case, a message such as this one is displayed:
Quota Report for imauser
Mount Point             Used       Total   Last Checked
/stor/home/imauser      7.5G (7%)  100G    Fri 16 Nov 2018 08:02:02 PM CST
The above shows that the user imauser has a home directory mounted at /stor/home/imauser and is using 7.5G (about 7%) of their 100G quota. The quota information is updated about every 2 minutes normally – the last field tells you when the information was last updated. If you are deleting files, you may need to wait a few minutes to see the change reflected in the quota report.
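If you are near the limit and want to see what is taking up the space, a quick scan of your home directory with standard Linux commands can help (the dot-file glob below is just one common way to include hidden directories):

# Summarize the size of each top-level item in your home directory, largest last
du -sh ~/* ~/.[!.]* 2>/dev/null | sort -h | tail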
How do I tell how much space my snapshots are taking?
When a snapshot is created, its disk space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, disk space that was previously shared becomes unique to the snapshot, and thus is counted in the snapshot's used property. Additionally, deleting snapshots can increase the amount of disk space unique to (and thus used by) other snapshots. Because of this complex relationship between a filesystem, its snapshots, and between multiple snapshots, it is fairly difficult to determine not only how much unique space any given snapshot is using, but also how much space would be freed up by deleting multiple snapshots.
To help, we provide reports, generated daily at /stor/system/opt/zfs-snapshot-disk-usage/<my-username>, that give useful (though perhaps hard to understand at first) space accounting for your home directory snapshots, showing how much space would be freed by deleting any number of consecutive snapshots.
The report is a CSV-formatted matrix with row and column headers, covering all pairs of snapshots in your home filesystem, and shows how much space would be freed by deleting the corresponding sequence of snapshots. It can be opened in Excel, LibreOffice, or other spreadsheet software to see exactly which snapshots are using the most space in your home directory filesystem.
Once you open the file in a spreadsheet, you can find the intersection of any row and any column to determine how much space (in bytes) would be freed by deleting all snapshots between and including those two snapshots.
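If you would rather take a quick look from the command line first, something like the following works on most Linux systems; this assumes your report is a plain CSV file at the path above (substitute your own BRCF user name):

# Display the CSV report as aligned columns, scrollable sideways with the arrow keys
column -s, -t < /stor/system/opt/zfs-snapshot-disk-usage/my-username | less -S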
How can I delete snapshots?
At this time, please contact us for help deleting your snapshots. In the future, we hope to have a way for you to manage your own snapshots.
How can I delete files when I'm over my quota?
If you are at or over your quota, you may find it difficult to delete files. ZFS is a copy-on-write filesystem, so deleting a file transiently takes slightly more space on disk: ZFS writes the metadata involved in the deletion before it removes the allocation for the file being deleted. This is how ZFS is able to remain consistent on disk, even in the event of a crash.
However, all is not lost. You can truncate the file, then delete it. For example, if you want to delete a file called junk.txt, you would use the following commands:
cat /dev/null > junk.txt    # OR: cp /dev/null junk.txt
rm junk.txt
Specific software-related issues
Error running FastQC
Running fastqc on POD compute servers can generate a Java version exception.
This happens because the installed version of fastqc requires a newer version of Java than the global default. One solution is to define a fqc alias as shown below, then call it instead of fastqc.
alias fqc="/usr/bin/fastqc -java=/usr/lib/jvm/java-11-openjdk-amd64/bin/java"
fqc my.fastq
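If you use fastqc regularly, you can make the alias permanent by adding it to your shell startup file, for example:

# Append the alias to ~/.bashrc so it is defined in every new login session
echo 'alias fqc="/usr/bin/fastqc -java=/usr/lib/jvm/java-11-openjdk-amd64/bin/java"' >> ~/.bashrc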
JupyterHub "try restarting it from the hub" message
One issue specific to the JupyterHub web application is getting a "503: service unavailable" error with the message "try restarting it from the hub" after logging in with BRCF credentials. Sometimes just clicking on the Restart button will work; if not, try logging in again.
Running GUI programs
What are X-Windows and X11?
X-Windows is the display rendering (windowing) system used on Linux/Unix computers. X11 is the protocol used by X-Windows; it specifies how an application window is to be rendered (e.g. as a series of lines, boxes, polygons, colors, etc.). The application generates a series of such rendering steps according to the X11 protocol specification. The receiver (the X display software running on your local computer, e.g. XQuartz or Xming) reads and follows those steps to render the window in its local environment.
The terms X11 and X-Windows are sometimes used interchangeably, but technically X11 is the protocol and X-Windows is a term for applications that use that protocol to render their windows, and the entirety of the components involved.
The important thing about X-Windows is that it is a portable system. The application that uses X-Windows to render its screens can run on any operating system, not just Linux/Unix, and the display can be on a different machine. As long as your local computer runs X11-capable software (e.g. the Mac Terminal with XQuartz, or an appropriate Windows program), the visualizations can be rendered locally. Contrast this with Windows-only or Mac-only applications, which use operating-system-specific API calls.
The bottom line is that to run a GUI program installed on a BRCF POD, you'll need an X11-enabled terminal (see below).
How do I invoke a GUI program on a POD compute server?
A terminal enabled for X-Windows (X11) is needed in order to invoke Linux GUI (graphical user interface) programs and see the GUI on your laptop or desktop running Windows, Mac OS, or even Linux. This involves installing a third-party X-Windows package (Xming on Windows; XQuartz on Mac) and then using an X-Windows-capable terminal (PuTTY on Windows, the built-in Terminal on Mac) to SSH to the server where the X-Windows GUI program resides. This link describes the needed configuration: https://kb.iu.edu/d/bdnt (obviously specifying BRCF credentials and servers rather than Indiana University ones).
If you invoke ssh explicitly from a Mac Terminal, use ssh -Y to connect to the POD compute server (the -Y enables forwarding of the X11 commands to the X-terminal). On Windows, you configure PuTTY to use X11 forwarding automatically when you open a session.
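For example, from a Mac Terminal with XQuartz installed (the server name below is a placeholder for one of your POD's compute servers):

# -Y enables (trusted) X11 forwarding so the GUI displays on your local screen
ssh -Y my_username@xxxxcomp01.ccbb.utexas.edu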
How do I use MATLAB on the POD?
MATLAB is installed on all PODs and is available in the /stor/system/opt/MATLAB/R2015b directory.
To use the MATLAB GUI (graphical user interface), you need to login from an X11-enabled terminal. See this link for obtaining and configuring one: https://uisapp2.iu.edu/confluence-prd/pages/viewpage.action?pageId=280461906.
From your X11-enabled terminal, use ssh -Y to connect to the POD compute server (the -Y enables forwarding of the X11 commands to the X-terminal). Once logged in, type matlab. This will (slowly) open a graphical window to run matlab in.
Here's how to create a script in matlab.
- In the "Command Window" in the middle of the matlab window, type "1+1" and hit return, it should say "2".
- Click the "New Script" button at the upper left (or the "New" Button, then select "Script" if you don't see "New Script").
- This will open an editing window for a script.
- Type "1+1" in the window, then click "Save" from the upper menu.
- Name it anything with a ".m" extension (such as untitled.m, the default).
- You can then use the "Open" menu, or the "Current Folder" pane, to open that file in the future.
- Once open in the Editor, you can use the "Run" command from the Editor menu to run it.
- Exit matlab (using either the "exit" or "quit" command)
To open matlab without the graphical interface, type the not-so-short or intuitive command: matlab -nodisplay -nosplash. This should give an interactive command prompt. To exit, type quit or exit. Other sometimes-useful options for the non-GUI matlab include -nojvm (might speed things up a bit) and -wait (wait until your jobs finish before exiting).
To run the "script" we created above (called untitled.m in your home directory) and exit, you can do something like:
matlab -nodisplay -nosplash -r "run('~/untitled.m');quit"
To add some error checking, you can use:
matlab -nodisplay -nosplash -r "try, run('~/untitled.m'), catch, exit, end, exit"
Another simple example script could be created and executed from the command line as shown below. (It should tell you the answer is "7.3529".)
echo "5^3/(2^4+1)" > ~/untitled2.m matlab -nodisplay -nosplash -nojvm -r "run('~/untitled2.m');quit"
General
Open files maximum and other limits
The number of open files (and other resource limits) is controlled by the ulimit command. The default can be increased, up to a point, by the user as shown below. Note that any ulimit change only applies to the current login session, although this can be automated by adding it to your ~/.bashrc or ~/.profile file.
# View all current limits
ulimit -a

# View current open files limit
ulimit -n

# Increase number of open files from 1024 to 2048 for this login session
ulimit -n 2048
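To make the higher limit the default for future login sessions, as mentioned above, you could append the command to your shell startup file, for example:

# Raise the open-files limit automatically in every new login session
echo 'ulimit -n 2048' >> ~/.bashrc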
"Cannot create temp file" error
While all compute servers share the large storage server disk array, each has its own small local disk where the OS is installed. This is also where the default /tmp directory on Linux is located.
Since OS disks are generally 250-500GB, having multiple programs write temporary files to /tmp at the same time can fill up the OS disk and lead to errors (e.g. "Cannot create temp file") and other symptoms, such as inhibiting tab completion of filenames.
The solution is to tell the program to write its temporary files somewhere else, such as a directory in your Scratch area. Most programs have an option for this if you look at their documentation.
Note that Home directories should not be used for temporary files, since they have 100 GB per-user quotas and can also fill up.
If the program you're using doesn't seem to have an option to change the location of temporary files, you may be able to achieve the same effect by setting several commonly used environment variables. The example below creates a directory in the GSAF group's Scratch area.
mkdir -p /stor/scratch/GSAF/tmp
chmod 777 /stor/scratch/GSAF/tmp
TMPDIR=/stor/scratch/GSAF/tmp
TMP=$TMPDIR
TEMP=$TMPDIR
export TMPDIR TMP TEMP
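Many tools also accept an explicit temporary-directory option. For example, GNU sort's -T flag (shown here with a hypothetical input file) directs its scratch files to the same Scratch-area directory:

# Tell sort to put its temporary spill files in the Scratch-area tmp directory
sort -T /stor/scratch/GSAF/tmp big_file.txt > big_file.sorted.txt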
You may also need to Contact Us to delete /tmp directory contents if you are not able to do so yourself (users can delete anything they themselves write to /tmp, but not files owned by others).
Tab completion not working (or is very slow)
If tab completion is not working, or is very slow, that is often a result of too much data in the system's /tmp directory, which causes the OS disk to run out of storage. See the "Cannot create temp file" error FAQ directly above for more information.
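To check whether a full /tmp is indeed the culprit, a couple of standard commands give a quick answer:

# How full is the file system holding /tmp?
df -h /tmp

# Which items in /tmp are largest? (errors for files you can't read are suppressed)
du -sh /tmp/* 2>/dev/null | sort -h | tail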
How do I manage being a member of multiple groups on a POD?
See this discussion: Multiple POD group memberships