
High Performance Computing

Attention SCI Cluster Users:

We have received a report of macOS Sonoma failing to open the hpc.umassmed.edu and ood.umassmed web pages in Safari. If you encounter this, please try another browser, such as Chrome, and let us know at hpc@umassmed.edu.

High Performance Computing (HPC) distributes computational work across many processors to reduce the time a single job would take. Typical HPC workloads include large-scale searches and long-running, compute-intensive jobs. Distributed string searching over genomic data, for example, greatly speeds up “needle in a haystack” comparisons and analysis.

Researchers use custom and open-source software to analyze, distribute, and run calculations on large data sets. By following best practices and using distributed-computing protocols such as MPI, users can reduce job run times severalfold.
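
As a hedged illustration of MPI-based distribution, the sketch below submits a hypothetical MPI program to the cluster scheduler; the module name, program path, and resource values are placeholder assumptions, not a confirmed SCI Cluster recipe.

    # Load an MPI implementation (module name is an assumption; check `module avail`).
    module load openmpi

    # Ask LSF for 64 cores in the large queue and launch the (placeholder) MPI program.
    bsub -q large -n 64 -W 8:00 -o mpi_job.%J.out \
         mpirun -np 64 ./my_mpi_app input.dat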

Examples of workloads that distribute well on HPC include Monte Carlo computations, time- and space-intensive computations, and string-matching algorithms (over DNA sequences, for example). Users can write programs and scripts on the cluster here at UMass Chan for pattern matching and other search-based needs; for example, against the HG18 reference genome you can write a simple shell script that uses the Perl scripting language for effective pattern matching.
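
As a concrete, minimal sketch of the kind of script described above (the FASTA path and motif below are placeholders, not real cluster paths):

    #!/bin/bash
    # Count occurrences of a DNA motif in a reference FASTA file.
    REFERENCE=/path/to/hg18/chr1.fa   # placeholder path
    MOTIF="GATTACA"                   # placeholder motif

    # Perl one-liner: skip FASTA header lines, count motif matches, print the total.
    # Note: matches that span wrapped FASTA lines are not counted by this simple version.
    perl -ne 'next if /^>/; $n += () = /'"$MOTIF"'/g; END { $n //= 0; print "$n matches\n" }' "$REFERENCE"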

Resources

AlphaFold
Free for staff and faculty
AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment. AlphaFold is available on the SCI cluster. If you don't have an account on the SCI Cluster, you can request one here: https://hpcportal.umassmed.edu/ 
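
A rough sketch of running AlphaFold as a batch job is shown below; the module name, queue name, wrapper script, and paths are assumptions and may differ on the SCI Cluster, so check `module avail` and the HPC documentation before use.

    # Assumed module name -- verify with `module avail` on the cluster.
    module load alphafold

    # Request one GPU and submit the prediction (queue, script, and paths are placeholders).
    bsub -q gpu -gpu "num=1" -n 4 -W 24:00 -o alphafold.%J.out \
         run_alphafold.sh --fasta_paths=/path/to/protein.fasta --output_dir=$HOME/alphafold_out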


Priority Queues for faculty/labs.

  • PI or Campus purchases equipment under an agreement with a 5-year life span.
  • PI retains access to the 5,000+ core shared cluster.
  • PI has priority queue on purchased equipment.
  • Jobs from the short queue (<= 4 hrs) are allowed to backfill when priority queues are idle; backfilled jobs will not be preempted (comparable to the large queue). See the submission sketch after this list.
  • Electricity, Rent, Software and Hardware maintenance, and cluster administration are all included in the capital purchase cost.
  • All software modules available on the Shared Cluster will be available to your Priority Queue cluster nodes.
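
A minimal sketch of how these queues are used from the command line (the priority queue name below is a placeholder chosen when your Memorandum of Understanding is set up):

    # Submit to your lab's priority queue (queue name is a placeholder).
    bsub -q mylab_priority -n 32 -W 48:00 -o prio.%J.out ./my_analysis.sh

    # Submit a short job (<= 4 hours) that may backfill idle priority-queue nodes.
    bsub -q short -n 4 -W 4:00 -o short.%J.out ./quick_task.sh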

How to procure a Priority Queue.

  1. Contact the HPC team at hpc@umassmed.edu.
  2. Provide your hardware requirements (cores, memory, storage).
    • The HPC team will prepare a quote and order the hardware for you, using UMass Chan's deep discounts with Dell and Lenovo.
    • The HPC team will rack and administer your hardware for you.
  3. Sign a Memorandum of Understanding. The key conditions are:
    • Your hardware will be attached to the SCI Cluster.
    • A new queue will be created giving your team priority access to those cores.
    • Your hardware will be removed in five years.

The SCI Cluster consists of the following hardware:

Networking:

  • EDR/FDR-based InfiniBand (IB) network
  • 100 Gigabit Ethernet network for the storage environment

Storage:

  • 2+ petabytes of Panasas parallel high-performance storage

Data storage and pricing details

Computing:

  • 62 nodes: 40 cores per node (2 x 20-core Intel Xeon Gold 6230, 2.1 GHz, 20C/40T, 10.4 GT/s, 27.5 MB cache, Turbo, HT, 125 W, DDR4-2933), 384 GB memory per node, no GPU. Total: 2460 cores.
  • 20 nodes: 128 cores per node (2 x 64-core AMD EPYC 7702, 2.0 GHz, 64C/128T, 256 MB cache, 200 W, DDR4-3200), 512 GB memory per node, no GPU. Total: 2560 cores.
  • 10 nodes: 40 cores per node (2 x 20-core Intel Xeon Gold 6230, 2.1 GHz, 20C/40T, 10.4 GT/s, 27.5 MB cache, Turbo, HT, 125 W, DDR4-2933), 512 GB memory per node, 4x NVIDIA Tesla V100 SXM2 32 GB GPU accelerators for NVLink (compute capability 7.0) per node. Total: 400 cores.
  • 1 node: 128 cores per node (2 x 64-core AMD EPYC 7763, 2.45 GHz, 64C/128T, 256 MB cache, 280 W, DDR4-3200), 512 GB memory per node, 4x NVIDIA HGX A100 SXM4 40 GB 400 W GPUs (compute capability 8.0). Total: 128 cores.



Scheduling Software:

  • The HPC environment runs the IBM LSF scheduling software for job management
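
For reference, a minimal LSF job script sketch is shown below; the module and program names are placeholders, and the resource values should be adjusted to your job.

    #!/bin/bash
    # example_job.lsf -- minimal LSF job script (placeholder names and values)
    #BSUB -J example_job              # job name
    #BSUB -q short                    # queue (short, large, or your priority queue)
    #BSUB -n 4                        # number of cores
    #BSUB -W 4:00                     # wall-clock limit (hh:mm)
    #BSUB -R "rusage[mem=4096]"       # memory request (units/scope depend on cluster configuration)
    #BSUB -o example_job.%J.out       # standard output (%J = job ID)
    #BSUB -e example_job.%J.err       # standard error

    module load python                # assumed module name
    python analyze.py input.csv       # placeholder program

    # Submit with:  bsub < example_job.lsf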

HPC Accounts and Training:

If you are interested in an HPC account, please request access through the HPC portal at https://hpcportal.umassmed.edu/ (VPN required). Individual support for cluster usage is available. For more information, please contact hpc@umassmed.edu.