

This doc needs ❤️

Using a "regular" Nextflow workflow:

module load nextflow slurm-drmaa graphviz

# Or let nf-core client download the workflow
srun nextflow run ... -profile ifb_core.config ...

# To launch in background
sbatch --wrap "nextflow run ... -profile ifb_core.config ..."
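For a longer run, the `--wrap` one-liner above can be expanded into a standalone sbatch script. This is a minimal sketch assuming the same modules as above; the job name, partition, memory, and the workflow placeholder are values to adapt to your cluster and pipeline:

```shell
#!/bin/bash
#SBATCH --job-name=nextflow-run   # placeholder job name
#SBATCH -p long                   # partition: adapt to your cluster
#SBATCH --mem=8G                  # memory for the Nextflow head job only

module load nextflow slurm-drmaa graphviz

# The head job stays light: Nextflow submits each task as its own Slurm job
nextflow run <workflow> -profile ifb_core.config -resume
```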

Functional annotation with Orson pipeline

Orson is a functional annotation pipeline developed in Nextflow by the SeBiMER team and available at this address:

# Clone repository 
git clone

You may first run the pipeline with the test dataset:

module load nextflow graphviz

# Get the config file for your cluster
wget -O orson/conf/ifb_core.config

# Run Orson with test dataset
cd orson/
srun nextflow run -profile test,singularity --downloadDB_enable false \
--hit_tool=diamond --blast_db "/path/to/indexed/db" \
-c conf/ifb_core.config -resume
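While the test run executes, you can follow its progress with standard Slurm and Nextflow tooling (nothing here is Orson-specific):

```shell
squeue -u "$USER"       # jobs Nextflow submitted to Slurm on your behalf
tail -f .nextflow.log   # Nextflow's own log file in the launch directory
```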

The test dataset will be imported and then multiple analyses will be run:

  • EggNOG-Mapper
  • InterProScan
  • BeeDeem
  • Busco
  • Diamond

Once you know it works well, you can run the analysis on your own dataset:

Here is an example sbatch file:

#!/bin/bash
#SBATCH -p long

module load nextflow graphviz

nextflow run --fasta "query.fa" --query_type p -profile custom,singularity --downloadDB_enable false \
--blast_db "/path/to/indexed/db" -c conf/ifb_core.config -resume

By default, the previous tools will be launched, but it is possible to disable some of them.
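Nextflow pipelines typically expose per-tool boolean parameters that can be overridden on the command line. The `*_enable` switch names below are assumptions for illustration only; verify the real parameter names in Orson's documentation or its `nextflow.config`:

```shell
# Hypothetical example: skip BUSCO and InterProScan on this run.
# The --busco_enable / --interproscan_enable names are assumptions.
nextflow run --fasta "query.fa" --query_type p -profile custom,singularity \
    --downloadDB_enable false --busco_enable false --interproscan_enable false \
    --blast_db "/path/to/indexed/db" -c conf/ifb_core.config -resume
```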

There are some useful arguments:

--query_type [n,p]

Set to "n" for nucleotide sequence input or to "p" for protein sequences.

--hit_tool [PLAST, BLAST, diamond]

Indicates the tool of your choice for comparing your sequences to the reference database.


The output directory where the results will be published.


The temporary directory where intermediate data will be written (this can be a /scratch/ directory).

Please refer to Orson's documentation for more details:

Note that Orson checks for the presence of Singularity containers in orson/container/ and, if they are missing, imports them.

Using nf-core:

All nf-core pipelines have been successfully configured for use on the ABiMS cluster.

Check this page: