# Nodes Overview

Apocrita runs a variety of different job types, and the cluster comprises different node types and queues to accommodate this. Unless you have a specific technical reason, you should avoid requesting a specific node type for your job. Some nodes are restricted depending on the source of funding.

The following nodes are open access, allowing jobs from any user.
| Open Access | Required variables | Count | Cores | RAM    | Arch  |
|-------------|--------------------|------:|------:|-------:|-------|
| ddy         |                    | 56    | 48    | 384GB  | Intel |
| emf         | `highmem`          | 14    | 24    | 768GB  | Intel |
| nxv         |                    | 34    | 32    | 256GB  | Intel |
| nxg         | `gpu`              | 4     | 32    | 256GB  | Intel |
| sbg         | `gpu`              | 3     | 32    | 384GB  | Intel |
| sdv         | `infiniband=sdv-i` | 32    | 24    | 192GB  | Intel |
| sdx         |                    | 52    | 36    | 384GB  | Intel |
| srm         | `highmem`          | 2     | 36    | 768GB  | Intel |
| **Total**   |                    | 197   | 7048  | 70.8TB |       |

The following nodes are restricted to specific groups controlled by the purchasers of the nodes. Requests for access to these nodes need to be confirmed by the owner of the node before access is granted. We additionally have a node called burst1, used for lab and training sessions and as a burst node to extend the capacity of the cluster outside term-time.

| Restricted Access | Required variables  | Count | Cores | RAM    | Arch  |
|-------------------|---------------------|------:|------:|-------:|-------|
| ddy               |                     | 58    | 48    | 384GB  | Intel |
| nxn               | `infiniband=nxn`    | 32    | 16    | 64GB   | Intel |
| nxn               |                     | 6     | 24    | 192GB  | Intel |
| nxv               |                     | 8     | 32    | 256GB  | Intel |
| sdv serial        |                     | 10    | 24    | 192GB  | Intel |
| sdv parallel      | `infiniband=sdv-ii` | 15    | 24    | 192GB  | Intel |
| panos1            |                     | 1     | 32    | 512GB  | Intel |
| burst1            |                     | 1     | 40    | 768GB  | Intel |
| **Total**         |                     | 131   | 4368  | 33.6TB |       |

!!! note "Owned nodes complex"
    Users with access to owned nodes can ensure jobs run on an owned node by
    requesting `-l owned` in the job script or on the command line, e.g.
    `qlogin -l owned`.

The following table lists the CPU instruction sets supported by Apocrita nodes. Note that AVX-512 is a family of extensions, so an entry in this table does not necessarily mean that all AVX-512 extensions are supported on the node. To see whether a particular AVX-512 extension is supported, you can check the CPU flags on a specific node; a quick way to do this is shown below.

*(Table: per-node-type support for SSE, SSE2, SSSE3, SSE4, AVX, AVX2 and AVX-512 across the ddy, emf, nxn parallel, nxn serial, nxv, nxg, sdv, srm, sbg and sdx node types.)*

Hyperthreading is not enabled on compute nodes. Jobs are therefore allocated real cores.
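A quick way to inspect those CPU flags is to read `/proc/cpuinfo` from a session on the node in question. This is a generic Linux check rather than an Apocrita-specific tool, so treat it as a sketch:

```bash
# List the distinct AVX-512 feature flags advertised by this node's CPUs.
grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u

# Test for one specific extension (avx512_vnni is just an example flag name).
grep -qw avx512_vnni /proc/cpuinfo && echo "AVX-512 VNNI supported"
```

On a node without AVX-512 the first command prints nothing and the second prints no message.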
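The "Required variables" column in the tables above lists the scheduler complexes a job must request in order to be placed on those nodes. The sketch below shows how a high-memory job might request the `highmem` complex; the core count, runtime and memory values and the `./my_analysis` executable are illustrative placeholders, and the exact directives for your job should follow the Submitting jobs documentation.

```bash
#!/bin/bash
#$ -cwd            # run from the submission directory
#$ -pe smp 4       # placeholder core count
#$ -l h_rt=1:0:0   # placeholder runtime request
#$ -l h_vmem=64G   # placeholder per-core memory request
#$ -l highmem      # required variable for the emf/srm high-memory nodes

./my_analysis      # placeholder executable
```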
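The same `-l` syntax shown with `qlogin` in the owned nodes note also works as a job script directive; a minimal sketch, with everything except the `-l owned` line being a placeholder:

```bash
#!/bin/bash
#$ -cwd            # run from the submission directory
#$ -l owned        # schedule only on nodes owned by your group

./my_program       # placeholder executable
```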