OpenFOAM
Revision as of 19:08, 11 February 2020
There are two flavors of OpenFOAM:
- OpenFOAM.com variant by OpenCFD Ltd, an affiliate of ESI Group.
- OpenFOAM.org variant released by the OpenFOAM Foundation Ltd. It is this version that we have installed upon request.
- Homepage: OpenFOAM.org: OpenFOAM is free, open-source software for CFD from the OpenFOAM Foundation. According to Wikipedia, OpenFOAM (for "Open-source Field Operation And Manipulation") is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities, for the solution of continuum mechanics problems, most prominently including computational fluid dynamics (CFD).
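Once the modules below are loaded, a quick way to confirm the installation works is to run one of the tutorial cases that ship with OpenFOAM. This is a hedged sketch: it assumes an OpenFOAM environment is active so that `$FOAM_TUTORIALS` is set and the solvers are on your `PATH`; the working directory is just an example.

```shell
# Sketch: copy a bundled tutorial case and run it (assumes an OpenFOAM
# environment is already loaded, so $FOAM_TUTORIALS and solvers are available).
mkdir -p ~/openfoam-test && cd ~/openfoam-test
cp -r "$FOAM_TUTORIALS/incompressible/simpleFoam/pitzDaily" .
cd pitzDaily
blockMesh      # generate the mesh from system/blockMeshDict
simpleFoam     # run the steady-state incompressible solver
```

For anything beyond a smoke test, run the solver inside a batch job rather than on a login node.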
$ module spider openfoam
-----------------------------------------------------------
  openfoam-org: openfoam-org/6
-----------------------------------------------------------
    You will need to load all module(s) on any one of the lines below
    before the "openfoam-org/6" module is available to load.

      swset/2018.05  gcc/7.3.0  openmpi/3.1.0

    Help:
      OpenFOAM is a GPL-opensource C++ CFD-toolbox. The openfoam.org
      release is managed by the OpenFOAM Foundation Ltd as a licensee
      of the OPENFOAM trademark. This offering is not approved or
      endorsed by OpenCFD Ltd, producer and distributor of the OpenFOAM
      software via www.openfoam.com, and owner of the OPENFOAM trademark.
$ module load swset/2018.05 gcc/7.3.0 openmpi/3.1.0
$ module load openfoam-org/6

$ qiime
Usage: qiime [OPTIONS] COMMAND [ARGS]...

  QIIME 2 command-line interface (q2cli)
  --------------------------------------

  To get help with QIIME 2, visit https://qiime2.org.

  To enable tab completion in Bash, run the following command or add it to
  your .bashrc/.bash_profile:

      source tab-qiime

  To enable tab completion in ZSH, run the following commands or add them to
  your .zshrc:

      autoload bashcompinit && bashcompinit && source tab-qiime

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  info                Display information about current deployment.
  tools               Tools for working with QIIME 2 files.
  dev                 Utilities for developers and advanced users.
  alignment           Plugin for generating and manipulating alignments.
  composition         Plugin for compositional data analysis.
  cutadapt            Plugin for removing adapter sequences, primers, and
                      other unwanted sequence from sequence data.
  dada2               Plugin for sequence quality control with DADA2.
  deblur              Plugin for sequence quality control with Deblur.
  demux               Plugin for demultiplexing & viewing sequence quality.
  diversity           Plugin for exploring community diversity.
  emperor             Plugin for ordination plotting with Emperor.
  feature-classifier  Plugin for taxonomic classification.
  feature-table       Plugin for working with sample by feature tables.
  fragment-insertion  Plugin for extending phylogenies.
  gneiss              Plugin for building compositional models.
  longitudinal        Plugin for paired sample and time series analyses.
  metadata            Plugin for working with Metadata.
  phylogeny           Plugin for generating and manipulating phylogenies.
  quality-control     Plugin for quality control of feature and sequence
                      data.
  quality-filter      Plugin for PHRED-based filtering and trimming.
  sample-classifier   Plugin for machine learning prediction of sample
                      metadata.
  taxa                Plugin for working with feature taxonomy annotations.
  vsearch             Plugin for clustering and dereplicating with vsearch.
Batch / Interactive Session Example:
After logging onto teton either:
1) Create an interactive session: In the example below, change arcc to your project name and modify the time to what you think you need; this example requests 60 minutes.
[...@tlog1 qiime2]$ salloc --account=arcc --time=60:00
salloc: Granted job allocation 3489587
[...@m067 qiime2]$ module load qiime2/2019.1
[...@m067 qiime2]$ qiime feature-table filter-samples \
 --i-table data/R1-5_table_forwards.qza \
 --m-metadata-file data/metadata_R1-5.txt \
 --p-where "GroupStatus='JLinfected' OR GroupStatus='JLcontrol'" \
 --o-filtered-table data/output_results.qza
2) Submit a job: Below is an example of a batch file:
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=00:30:00
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH --output=qiime_%A.out
#SBATCH --chdir=/project/arcc/salexan5/qiime2

module load qiime2/2019.10

srun qiime feature-table filter-samples \
 --i-table data/R1-5_table_forwards.qza \
 --m-metadata-file data/metadata_R1-5.txt \
 --p-where "GroupStatus='JLinfected' OR GroupStatus='JLcontrol'" \
 --o-filtered-table data/output_results.qza

wait
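A batch file like the one above is submitted with sbatch and can then be monitored while it waits in the queue or runs. This is a sketch: the script name qiime_filter.sh is a hypothetical example, and the job ID in the output filename comes from sbatch.

```shell
# Hypothetical example: save the batch file as qiime_filter.sh, then submit it.
sbatch qiime_filter.sh        # prints: Submitted batch job <jobid>
squeue -u "$USER"             # check the job state (PD = pending, R = running)
# Once the job finishes, its output is in the file named by --output,
# with %A replaced by the job ID:
cat qiime_<jobid>.out
```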
Out of the box, qiime2 does not automatically run in parallel, but some of the plugins/commands can be configured to use multiple cores.
One example is classify-sklearn, which classifies reads using a pre-fitted scikit-learn taxonomy classifier. This command has a --p-n-jobs option that allows multiple cores to be used. A skeleton batch script is shown below (remember to add the account/time and other SBATCH parameters):
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=0
#SBATCH --partition=teton-hugemem

module load qiime2/2019.10

srun qiime feature-classifier classify-sklearn \
 --i-classifier input_file.qza \
 --i-reads rep-seqs-single.qza \
 --o-classification output_file.qza \
 --p-n-jobs -1
- There are no hard-and-fast rules on how to configure your batch files; in most cases it will depend on the size of your data and the extent of your analysis.
- You will need to read and understand how to use the plugin/command as they can vary.
- Memory is still probably going to be the major factor in how many cpus-per-task you choose.
- In the example above we were only able to use 32 cores because we ran the job on one of the teton-hugemem partition nodes. On a standard teton node we were only able to use 2 cores. The latter still gave an improvement: the job ran in 9 hours 45 minutes, compared to 17 hours with a single core. Using 32 cores on a hugemem node, the job ran in 30 minutes!
- Remember, hugemem nodes can be popular, so you might end up queuing for days to run a job in half an hour, when you could have jumped on a teton node immediately and already have had the longer-running job finish.
- Depending on the size of data/analysis you might be able to use more cores on a teton node.
- You will need to perform and track your own analysis to understand what works for your data. Do not just default to a hugemem node!
- If you have any questions or need assistance, don't hesitate to contact ARCC; we're happy to help with this type of analysis.
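The tracking suggested above can be done with Slurm's accounting tools: after a job completes, compare elapsed time and peak memory across runs with different cpus-per-task values. This is a hedged sketch; the job ID 3489587 is just the example allocation number from earlier, and the seff utility may not be installed on every cluster (an assumption here).

```shell
# Sketch: query accounting records for a finished job (example job ID from above)
# to see elapsed time, peak memory, and how many CPUs were allocated.
sacct -j 3489587 --format=JobID,Elapsed,MaxRSS,AllocCPUS,State

# If the seff utility is available (an assumption), it summarizes
# CPU and memory efficiency for a completed job:
seff 3489587
```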
Back to HPC Installed Software