UPPMAX Configuration
All nf-core pipelines have been successfully configured for use on the Swedish UPPMAX clusters.
Getting help
We have a Slack channel dedicated to UPPMAX users on the nf-core Slack: https://nfcore.slack.com/channels/uppmax
Using the UPPMAX config profile
The recommended way to activate Nextflow, nf-core, and any pipeline available in nf-core on UPPMAX is to use the module system:
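For example (module names and available versions may differ between clusters; check with `module avail`):

```bash
# Load the shared bioinformatics module tree, then Nextflow and the nf-core tools
module load bioinfo-tools Nextflow nf-core
```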
To use, run the pipeline with `-profile uppmax` (one hyphen). This will download and launch the `uppmax.config`, which has been pre-configured with a setup suitable for the UPPMAX servers.
It will enable Nextflow to manage the pipeline jobs via the Slurm job scheduler. Using this profile, Docker images containing the required software will be downloaded and converted to Singularity images if needed before execution of the pipeline.
Recent versions of Nextflow also support the environment variable `NXF_SINGULARITY_CACHEDIR`, which can be used to supply images. Images for some nf-core pipelines are available under `/sw/data/ToolBox/nf-core/` and can be used by pointing `NXF_SINGULARITY_CACHEDIR` at that directory.
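For example, to point Nextflow at that shared image cache:

```bash
# Use the pre-downloaded images instead of pulling them yourself
export NXF_SINGULARITY_CACHEDIR=/sw/data/ToolBox/nf-core/
```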
In addition to this config profile, you will also need to specify an UPPMAX project ID. You can do this with the `--project` flag (two hyphens) when launching Nextflow. For example:
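A sketch of a launch command, where `<PIPELINE>` and `<PROJECT_ID>` are placeholders for your pipeline and UPPMAX project ID:

```bash
# Replace <PIPELINE> and <PROJECT_ID> with your own values
nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT_ID>
```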
NB: If you're not sure what your UPPMAX project ID is, try running `groups` or checking SUPR.
Just run Nextflow on a login node and it will handle everything else. Remember to use `-bg` to launch Nextflow in the background, so that the pipeline doesn't exit if you leave your terminal session. Alternatively, you can also launch Nextflow in a `screen` or a `tmux` session.
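As a sketch (same placeholders as above):

```bash
# -bg detaches the Nextflow run from your terminal session
nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT_ID> -bg
```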
Using AWS iGenomes references
A local copy of the AWS iGenomes resource has been made available on all UPPMAX clusters, so you should be able to run the pipeline against any reference available in `conf/igenomes.config`. You can do this by simply using the `--genome <GENOME_ID>` parameter.
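For example, assuming `GRCh37` is one of the IDs defined in `conf/igenomes.config`:

```bash
# Use the local iGenomes copy of the GRCh37 reference
nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT_ID> --genome GRCh37
```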
Getting more memory
If your nf-core pipeline run is running out of memory, you can run on a fat node with more memory using the following Nextflow flags:
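A sketch, assuming the pipeline exposes the common nf-core `--max_memory` parameter and a `--clusterOptions` parameter for extra Slurm options (exact flags may differ per pipeline and cluster):

```bash
# Illustrative only: ask Slurm for a 256 GB fat node and raise the nf-core memory cap
nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT_ID> \
  --clusterOptions "-C mem256GB -p node" --max_memory '256.GB'
```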
This raises the ceiling of available memory from the default of `128.GB` to `256.GB`. rackham has nodes with 128 GB, 256 GB, and 1 TB of memory available. Note that each job will still start with the same resource request as normal, but restarted attempts will be able to request greater amounts of memory. All jobs will be submitted to fat nodes using this method, so it's only for use in extreme circumstances.
Different UPPMAX clusters
The UPPMAX nf-core configuration profile uses the `hostname` of the active environment to automatically apply the following resource limits:
- `rackham`: 20 cpus, 125 GB memory available
- `bianca`: 16 cpus, 109 GB memory available
- `miarka`: 48 cpus, 357 GB memory available
Development config
If doing pipeline development work on UPPMAX, the `devel` profile allows for faster testing. Applied after the main UPPMAX config, it overwrites certain parts of the config and submits jobs to the `devcore` queue, which has much faster queue times. To be eligible for this queue, jobs are limited to 1 hour and only one job is allowed at a time, so it is not suitable for use with real data. To use it, submit with `-profile uppmax,devel`.
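For example (placeholders as above):

```bash
# Jobs go to the devcore queue; remember the 1 hour limit per job
nextflow run nf-core/<PIPELINE> -profile uppmax,devel --project <PROJECT_ID>
```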
Running on bianca
⚠️ For more information, please refer to the following guides:
For security reasons, there is no internet access on bianca, so you can't download files from or upload files to the cluster directly. Before running an nf-core pipeline on bianca, you will first have to download the pipeline and the Singularity images needed elsewhere and transfer them via the `wharf` area to your own bianca project.

In this guide, we use rackham to download and transfer files to the `wharf` area, but it can also be done on your own computer. If you use rackham to download the pipeline and the Singularity containers, we recommend using an interactive session (cf. the interactive guide), which is what we do in the following guide.
It is recommended to activate Nextflow, nf-core, and your nf-core pipeline through the module system (see Using the UPPMAX config profile above). In case you need a specific version of any of these tools, you can follow the guide below.
Download and install Nextflow
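The original commands are not reproduced here; a minimal sketch, assuming you work on a machine with internet access such as rackham (the version number is only illustrative):

```bash
# Pick the Nextflow version you need; 23.10.1 is an example
NXF_VER=23.10.1

# Download the self-contained "-all" build, which bundles its dependencies
wget https://github.com/nextflow-io/nextflow/releases/download/v${NXF_VER}/nextflow-${NXF_VER}-all

# Make it executable; this single file can later be transferred to bianca
chmod a+x nextflow-${NXF_VER}-all
```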
Install nf-core tools
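A minimal sketch, assuming `pip` is available on the machine you use for downloads (a conda install of `nf-core` works equally well):

```bash
# Install the nf-core command-line tools into your user environment
pip install --user nf-core
```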
Download and transfer an nf-core pipeline
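A sketch of the download-and-transfer step; the `nf-core download` options shown may differ between nf-core/tools versions, and the addresses and placeholders are illustrative:

```bash
# On rackham: fetch a pipeline release together with its Singularity images
nf-core download <PIPELINE> -r <RELEASE> --container-system singularity

# Transfer the downloaded folder to your bianca wharf area, e.g. via sftp
sftp <USER>-<BIANCA_PROJECT>@bianca-sftp.uppmax.uu.se:<USER>-<BIANCA_PROJECT>
```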
The principle is that every member of your project should be able to use the same `nf-core/<PIPELINE>` version at the same time. So every member of the project who wants to use `nf-core/<PIPELINE>` will need to do:
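The exact commands are not shown here; a sketch, assuming the downloaded pipeline was unpacked into a shared project folder on bianca (the path below is hypothetical):

```bash
# Log in to bianca first, then link the shared copy into your home directory.
# The shared path is illustrative; use wherever your project stores the pipeline.
cd ~
ln -s /castor/project/<BIANCA_PROJECT>/nf-core/nf-core-<PIPELINE>-<VERSION> nf-core-<PIPELINE>
```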
And then `nf-core/<PIPELINE>` can be used with:
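A sketch using the link created above:

```bash
# Run the locally linked copy; no internet access is required at this point
nextflow run ~/nf-core-<PIPELINE> -profile uppmax --project <BIANCA_PROJECT>
```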
Update a pipeline
To update, repeat the same steps as for installing and update the link. You can, for example, keep an `nf-core-<PIPELINE>-default` version that you are sure is working, and make a link for an `nf-core-<PIPELINE>-testing` or `nf-core-<PIPELINE>-development` version.
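For example, with two hypothetical links side by side (paths illustrative):

```bash
# Keep a known-good "default" link and point a "testing" link at the new download
ln -s /castor/project/<BIANCA_PROJECT>/nf-core/nf-core-<PIPELINE>-<OLD_VERSION> nf-core-<PIPELINE>-default
ln -s /castor/project/<BIANCA_PROJECT>/nf-core/nf-core-<PIPELINE>-<NEW_VERSION> nf-core-<PIPELINE>-testing
```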