...
Austin's own Advanced Micro Devices (AMD) has most generously donated a number of GPU-enabled servers to UT.
While AMD GPUs still do not support as many 3rd-party applications as NVIDIA GPUs, they do support many popular Machine Learning (ML) applications such as TensorFlow, PyTorch, and AlphaFold, as well as Molecular Dynamics (MD) applications such as GROMACS, all of which are installed and ready for use.
Two BRCF research pods have AMD GPU servers available: the Hopefog and Livestrong pods. Their use is restricted to the groups who own those pods. See Livestrong and Hopefog pod AMD servers for specific information.
The BRCF's AMD GPU pod is available for instructional use, and for research use by qualifying UT-Austin affiliated PIs. Allocations are granted only to groups that will perform GPU-enabled workflows, not for general computation. To request an allocation, contact us at rctf-support@utexas.edu and provide the UT EIDs of those who should be granted access.
...
The /stor/scratch/AlphaFold directory has the large required database under the data.4 sub-directory. There is also an AMD example script, /stor/scratch/AlphaFold/alphafold_example_amd.sh, and an alphafold_example_nvidia.sh script for pods that also have NVIDIA GPUs (e.g. the Hopefog pod).
On AMD GPU servers, AlphaFold is implemented by a run_alphafold.py Python script inside a Docker image. See the run_alphafold_rocm.sh and run_multimer_rocm.sh scripts under /stor/scratch/AlphaFold for a complete list of options to that script.
AlphaFold requires a number of databases in order to run and several versions of these databases can be found under /stor/scratch/AlphaFold/:
- data.1, data.2, data.3, data.4 – data.4 is the default, but this can be changed in the run_*rocm.sh scripts
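The database selection above can be sketched as a few lines of shell. This is a minimal illustration, not a pod-specific recipe: the variable names are hypothetical, and the actual option names accepted by the run scripts should be taken from the run_*rocm.sh scripts themselves.

```shell
# Pick which installed AlphaFold database version to use.
# data.4 is the default; data.1 - data.3 are older versions.
ALPHAFOLD_ROOT=/stor/scratch/AlphaFold
DATA_DIR="${ALPHAFOLD_ROOT}/data.4"   # change to data.1/data.2/data.3 if needed

# A run would then be timed and logged along these lines (hypothetical
# invocation; consult the example script for the real one):
#   time bash "${ALPHAFOLD_ROOT}/alphafold_example_amd.sh" > alphafold.log 2>&1
echo "AlphaFold databases: ${DATA_DIR}"
```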
GROMACS
An AMD GPU-enabled version of the GROMACS Molecular Dynamics (MD) program is available on all AMD GPU servers; a CPU-only version is also installed.
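As a sketch of how the GPU-enabled build would typically be invoked: `-nb gpu` is the standard GROMACS flag for offloading nonbonded interactions to the GPU. The run name below is a hypothetical placeholder; the `.tpr` input would be produced by `gmx grompp` beforehand.

```shell
# Hedged sketch, not a pod-specific recipe. Builds (but does not run) a
# typical GPU-offloaded mdrun command line for a given run name.
build_mdrun_cmd() {
  local deffnm="$1"   # e.g. "md" -> reads md.tpr, writes md.log, md.trr, ...
  # -nb gpu offloads nonbonded interactions to the GPU
  echo "gmx mdrun -deffnm ${deffnm} -nb gpu"
}

build_mdrun_cmd md
```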
...
Two Python scripts in /stor/scratch/GPU_info can be used to verify that you have access to the server's GPUs from TensorFlow or PyTorch. You can run them from the command line under time to see the run times, as shown below:
- TensorFlow – AMD GPU pod servers (amdgcomp01/02/03)
- time (python3 /stor/scratch/GPU_info/tensorflow_example.py )
- TensorFlow – Livestrong and Hopefog pod servers (livecomp02/03, hfogcomp02/03)
- time (bash /stor/scratch/GPU_info/tensorflow_example.amd-mi50.sh )
- PyTorch – all compute servers
- time (python3 /stor/scratch/GPU_info/pytorch_example.py )
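Before timing the example scripts, it can help to confirm which frameworks the Python environment can import at all. The following stdlib-only sketch (the function name is ours, not part of the pod scripts) checks installability only; it does not prove the GPUs are visible, which is what the scripts above test.

```python
import importlib.util

def frameworks_installed(names=("tensorflow", "torch")):
    """Return a dict mapping framework name -> whether it can be imported.

    Only checks that the package is installed; run the example scripts
    to confirm the GPUs themselves are accessible.
    """
    return {name: importlib.util.find_spec(name) is not None for name in names}

print(frameworks_installed())
```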
...
- ROCm Video series
- https://community.amd.com/t5/instinct-accelerators-blog/rocm-open-software-ecosystem-for-accelerated-compute/ba-p/418720
- Especially the Introduction to AMD GPU Hardware: Link
- Provides hardware background and terminology used throughout other guides
- Also
- AMD ROCm Learning Center: https://developer.amd.com/resources/rocm-resources/rocm-learning-center/
...