Trinity MPI Job Script Examples
If you want to run Trinity with more memory than is available on a single compute node, or split the job into more parts so that it finishes sooner, you can run it across multiple compute nodes at the same time. This uses OpenMPI to distribute the work over several nodes and pass information back and forth as required.
Here is an example using 24 cores spread across 4 compute nodes, with 20GB of memory requested per CPU core.
Note: When running a job with OpenMPI, the walltime and memory requirements may differ considerably from what you would expect, so some experimentation with these settings may be required.
#!/bin/bash

#PBS -N Trinity_MPI
#PBS -l nodes=4:ppn=6
#PBS -l pvmem=20gb
#PBS -l walltime=12:00:00
#PBS -j oe
#PBS -M firstname.lastname@example.org
#PBS -m ae

cd /srv/scratch/z1234567

module load bowtie/1.0.0
module add samtools/0.1.19
module load trinity/2.0.6
module load openmpi/1.8.3

mpiexec Trinity --seqType fq --normalize_reads --no_version_check --CPU 1 --max_memory 60G --SS_lib_type RF \
  --left /srv/scratch/z1234567/L4_fwd.fastq.gz,/srv/scratch/z1234567/L3_fwd.fastq.gz \
  --right /srv/scratch/z1234567/L4_rev.fastq.gz,/srv/scratch/z1234567/L3_rev.fastq.gz \
  --output /srv/scratch/z1234567/trinity
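As a quick sanity check before submitting a request like this, you can work out the job's total footprint from the PBS directives. The sketch below simply multiplies the numbers from the example above; the variable names are our own and not part of PBS:

```shell
# Resource arithmetic for the example request:
#   #PBS -l nodes=4:ppn=6   and   #PBS -l pvmem=20gb
nodes=4
ppn=6
pvmem_gb=20

total_cores=$((nodes * ppn))          # cores allocated across the whole job
per_node_gb=$((ppn * pvmem_gb))       # memory each node must be able to supply
total_gb=$((total_cores * pvmem_gb))  # memory requested across all nodes

echo "${total_cores} cores, ${per_node_gb}GB per node, ${total_gb}GB total"
# prints: 24 cores, 120GB per node, 480GB total
```

Checking the per-node figure against the actual RAM on the cluster's compute nodes helps avoid requests that can never be scheduled.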