OpenMPI

For information about OpenMPI, please visit http://www.open-mpi.org. More information will be added here as it is written.

MPI Versions Available via Module

Software Name    Version
OpenMPI          1.4.4
openmpi          1.4.4-intel
OpenMPI          1.4.5
openmpi          1.4.5-intel
OpenMPI          1.6.4
openmpi          1.6.4-intel
OpenMPI          1.6.5
openmpi          1.6.5-intel
OpenMPI          1.8.3
openmpi          1.8.3-intel
OpenMPI          1.8.8-gcc6
PhyloBayes-MPI   20161021
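
To use one of these versions in a job script, add the matching module and check which mpiexec it provides. A minimal check, using the openmpi/1.8.3 module from the table above:

# Load the OpenMPI module and confirm which mpiexec is now on the PATH
module add openmpi/1.8.3
which mpiexec
mpiexec --version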

Mothur MPI Job Script Examples

If you want to run Mothur and use more memory than is available on an individual compute node, or split your job into more parts so that it finishes sooner, you can use the command mothur-mpi to run your job over multiple compute nodes at the same time. This uses OpenMPI to split the job over several nodes and pass information back and forth as required.

Here is an example using 24 cores spread across 4 compute nodes, with 20GB of memory requested for each CPU core.

Note: If you run a job using OpenMPI, the walltime and memory requirements may be very different from what you would expect, so some experimentation may be required.

#!/bin/bash
 
#PBS -N mothur
#PBS -l nodes=4:ppn=6
#PBS -l pvmem=20gb
#PBS -l walltime=10:00:00
#PBS -j oe
#PBS -M fred.bloggs@unsw.edu.au
#PBS -m bae
 
module add mothur/1.36.0
module add openmpi/1.8.3
 
cd /srv/scratch/z1234567/mothur/
 
mpiexec mothur-mpi \
    "#chimera.uchime(fasta=sample.trim.contigs.good.unique.good.filter.unique.precluster.fasta, \
    count=sample.trim.contigs.good.unique.good.filter.unique.precluster.count_table, dereplicate=T);"
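
After saving the script (the file name mothur_mpi.pbs below is only illustrative), submit it with qsub. Once the job has finished, comparing the requested walltime and memory with what was actually used is one way to do the experimentation mentioned in the note above; on Torque/PBS systems qstat -f reports this for recently completed jobs.

qsub mothur_mpi.pbs
# After the job finishes, replace 123456 with the job ID that qsub printed:
qstat -f 123456 | grep resources_used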

If you only need 10GB of memory for each core, then the following job script will use 36 cores on 3 compute nodes that have at least 120GB of memory installed (12 cores per node x 10GB per core = 120GB). This gives us the example below.

#!/bin/bash
 
#PBS -N mothur
#PBS -l nodes=3:ppn=12
#PBS -l pvmem=10gb
#PBS -l walltime=10:00:00
#PBS -j oe
#PBS -M fred.bloggs@unsw.edu.au
#PBS -m bae
 
module add mothur/1.36.0
module add openmpi/1.8.3
 
cd /srv/scratch/z1234567/mothur/
 
mpiexec mothur-mpi \
    "#chimera.uchime(fasta=sample.trim.contigs.good.unique.good.filter.unique.precluster.fasta, \
    count=sample.trim.contigs.good.unique.good.filter.unique.precluster.count_table, dereplicate=T);"
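
If you want to confirm how many cores and nodes a job was actually given, you can inspect $PBS_NODEFILE from inside the job script, for example just before the mpiexec line. A short sketch, assuming OpenMPI is built with Torque support so mpiexec starts one rank per allocated core by default:

# $PBS_NODEFILE lists one line per allocated core, so for this script it
# should contain 36 lines spread over 3 hostnames.
wc -l < $PBS_NODEFILE
sort $PBS_NODEFILE | uniq -c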

Trinity MPI Job Script Examples

If you want to run Trinity and use more memory than is available on an individual compute node, or split your job into more parts so that it finishes sooner, there is a way to run your job over multiple compute nodes at the same time. This uses OpenMPI to split the job over several nodes and pass information back and forth as required.

Here is an example using 24 cores spread across 4 compute nodes, with 20GB of memory requested for each CPU core.

Note: If you run a job using OpenMPI, the walltime and memory requirements may be very different from what you would expect, so some experimentation may be required.

Note: You may need to experiment with the --CPU and --max_memory options to determine the best mix; see the sketch after the job script below.

#!/bin/bash
 
#PBS -N Trinity_MPI
#PBS -l nodes=4:ppn=6
#PBS -l pvmem=20gb
#PBS -l walltime=12:00:00
#PBS -j oe
#PBS -M joe.bloggs@unsw.edu.au
#PBS -m ae
 
cd /srv/scratch/z1234567
 
module add bowtie/1.0.0
module add samtools/0.1.19
module add trinity/2.0.6
module add openmpi/1.8.3
 
mpiexec Trinity --seqType fq --normalize_reads --no_version_check --CPU 1 --max_memory 60G --SS_lib_type RF \
 --left /srv/scratch/z1234567/L4_fwd.fastq.gz,/srv/scratch/z1234567/L3_fwd.fastq.gz\
 --right /srv/scratch/z1234567/L4_rev.fastq.gz,/srv/scratch/z1234567/L3_rev.fastq.gz\
 --output /srv/scratch/z1234567/trinity
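
One way to experiment, as the note above suggests, is to change how many MPI ranks mpiexec starts without touching the PBS resource requests. By default OpenMPI starts one rank per allocated core; -np overrides this and --map-by node spreads the ranks across nodes. The values below (4 ranks, --CPU 6) are only an illustration chosen to match the 4-node, 6-cores-per-node request above, not a recommendation:

# Start one Trinity rank per node (4 ranks) and let each rank use 6 CPUs:
mpiexec -np 4 --map-by node Trinity --seqType fq --normalize_reads --no_version_check \
 --CPU 6 --max_memory 60G --SS_lib_type RF \
 --left /srv/scratch/z1234567/L4_fwd.fastq.gz,/srv/scratch/z1234567/L3_fwd.fastq.gz \
 --right /srv/scratch/z1234567/L4_rev.fastq.gz,/srv/scratch/z1234567/L3_rev.fastq.gz \
 --output /srv/scratch/z1234567/trinity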