diff --git a/.vscode/settings.json b/.vscode/settings.json
index 90896b6..4c1cb0a 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -3,7 +3,10 @@
     "restructuredtext.updateDelay": 300,
     "restructuredtext.languageServer.disabled": true,
     "cSpell.words": [
+        "Apptainer",
         "bioconda",
+        "bioconductor",
+        "biocontainers",
         "BIOINFO",
         "bioinformatics",
         "bunop",
@@ -13,15 +16,20 @@
         "Cozzi",
         "cozzip",
         "cpus",
+        "dockerhub",
         "engelbart",
         "fasta",
         "fastq",
         "fastqc",
         "freebayes",
+        "galaxyproject",
+        "genindex",
         "IBBA",
         "Iscr",
         "lprod",
+        "maxdepth",
         "methylseq",
+        "Miniconda",
         "mirdeep",
         "mkdir",
         "Nextflow",
@@ -32,19 +40,28 @@
         "println",
         "pypi",
         "pytest",
+        "quickstart",
         "resequencing",
         "rnaseq",
+        "rstudio",
         "SAMPLESHEET",
         "samtools",
         "SBATCH",
+        "scancel",
         "sessionid",
         "slurm",
+        "spyder",
+        "sratoolkit",
         "sshfs",
         "subfolders",
         "subworkflow",
         "Subworkflows",
+        "summarizedexperiment",
+        "Sylabs",
         "testdata",
+        "toctree",
         "TRIMGALORE",
+        "uninstallation",
         "whitespaces",
         "workdir"
     ],
diff --git a/docs/conf.py b/docs/conf.py
index c722604..6382e7c 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -22,7 +22,7 @@ author = 'Paolo Cozzi'
 
 # The full version, including alpha/beta/rc tags
-release = 'v0.2.3'
+release = 'v0.2.4'
 
 
 # -- General configuration ---------------------------------------------------
diff --git a/docs/general/conda.rst b/docs/general/conda.rst
index c2c791c..671228e 100644
--- a/docs/general/conda.rst
+++ b/docs/general/conda.rst
@@ -12,10 +12,10 @@ environments. It was born to support the python ecosystem, however most software
 has been supported by conda, for example `R`_ and its packages, and there are channels
 like `bioconda`_, which collect and maintain a lot of useful softwares. The main
 advantage in using conda environments is that packages could be installed
-directly with their dependencies, whitout the needing to compile everything. Moreover
+directly with their dependencies, without the need to compile everything. Moreover
 conda and its environments can be installed by an user without administrative
 privileges. Packages and dependencies are installed inside user directories, and a complete
-unistallation could be done by erasing the conda installation folder.
+uninstallation can be done by erasing the conda installation folder.
 From the `conda`_ official documentation:
 
 .. _R: https://docs.anaconda.com/anaconda/user-guide/tasks/using-r-language/
@@ -38,7 +38,7 @@ Is Conda already installed?
 
 Conda isn't installed by default on your system. However on a shared resource or
 a remote machine could be already installed by the system administrator. Try to
-undertstand if conda is installed using ``which``, for example::
+understand if conda is installed using ``which``, for example::
 
     (base) cozzip@cloud1:~$ which conda
     /usr/local/Miniconda3-py38_4.8.3-Linux-x86_64/bin/conda
@@ -62,9 +62,9 @@ Conda is installed with a a lot of dependencies, like spyder editor, jupyter not
 and many other packages. Miniconda is a lighter version of anaconda, which installs
 only the minimal packages required to work correctly with conda. In general, you could
 decide to install the whole Conda in a local installation, since in your personal computer
-you could exploit the benefit of the editors and the grafical user interfaces.
+you could exploit the benefit of the editors and the graphical user interfaces.
 When working on a remote server, using Miniconda is recommended since you have the
-full control on what is installed and generally you don't need starting grafical
+full control on what is installed and generally you don't need starting graphical
 interfaces on a remote servers. If you are in doubt, please see the `Anaconda or
 Miniconda`_ section of conda installation guide.
@@ -104,7 +104,7 @@ You could enable a conda environment using ``conda activate``, for example::
 
     $ conda activate R-4.3
 
-You should see that the environment name near the bash prompt changed to the desidered
+You should see that the environment name near the bash prompt changed to the desired
 environment. In order to exit the current environment (and return to your previous
 environment), you have to deactivate with::
@@ -165,7 +165,7 @@ that you want to import, for example::
 
 .. hint::
 
-    When you export an environment with conda, yon don't simply export infomations
+    When you export an environment with conda, you don't simply export information
     to re-build your environment relying on package version, but you also track
     information about the **package build version**, in order to be able to download
     the same file required to install a particular library.
@@ -302,7 +302,7 @@ could contains useful information.
 
     It's a bad idea to set the ``$PATH`` environment variable using the *config API*,
     since when disabling the conda environment, the ``$PATH`` will be unset, causing
     your terminal not working correctly. If you need to add a path to ``$PATH``, you
-    need to manually edit the ``env_vars.sh`` files. Ensure to activate your desidered
+    need to manually edit the ``env_vars.sh`` files. Ensure to activate your desired
     environment (in order to resolve the ``$CONDA_PREFIX`` environment variable)
     and then:
@@ -334,5 +334,5 @@ could contains useful information.
         # see: https://unix.stackexchange.com/a/496050
         export PATH=$(echo $PATH | tr ":" "\n" | grep -v '/home/core/software/sratoolkit/bin' | xargs | tr ' ' ':')
 
-    See conda `Manaing environments `__
+    See conda `Managing environments `__
     for more information.
diff --git a/docs/general/singularity.rst b/docs/general/singularity.rst
index 4b2a849..52c9056 100644
--- a/docs/general/singularity.rst
+++ b/docs/general/singularity.rst
@@ -4,9 +4,283 @@ Singularity
 
 .. contents:: Table of Contents
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
-incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
-exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute
-irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat
-nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa
-qui officia deserunt mollit anim id est laborum.
+About Singularity
+-----------------
+
+**Singularity** is a free, cross-platform, and open-source computer program for
+virtualization. It is used to create reproducible and portable software containers
+for scientific computing and high-performance computing (HPC).
+Reproducibility and portability imply the ability to move containers from system
+to system (e.g., a new machine). With Singularity containers, developers can work
+in customized, reproducible environments that can be copied and executed on other platforms.
+You can refer to the `Singularity`_ official documentation.
+
+.. _Singularity: https://docs.sylabs.io/guides/latest/user-guide/
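+
+As a quick first contact with the tool (a sketch: the version string will differ
+on your system), you can verify that Singularity is available from your shell:
+
+.. code-block:: bash
+
+    # print the installed version
+    singularity --version
+
+    # show general help and the list of available subcommands
+    singularity help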
+
+Containers
+^^^^^^^^^^
+
+Containers are single files that allow the transfer of computing environments,
+without worrying about installing all needed software and dependencies on each
+different OS or machine. Containers are very useful for **reproducible science**:
+Singularity containers include all programs, libraries, data, and scripts for a
+specific scientific problem, and can then be archived or distributed for replication,
+regardless of the hardware architecture or OS used.
+
+Singularity containers are similar to Docker containers, but they are designed for
+HPC environments: since Singularity containers do not require root access to run,
+they are more secure and easier to use in such environments.
+
+Singularity containers can be used to run applications, workflows, and entire
+operating systems. They can be used to run software that is not available on the
+host system, or to run software that requires a specific version of a library or
+tool. Singularity containers can also be used to run software that requires a
+specific version of an operating system: for example, you can have a container
+based on a specific version of Ubuntu, CentOS, or Debian, which could be required
+in order to install and run a specific software.
+
+.. note::
+
+    Singularity is already installed in our IBBA infrastructure, and it's available
+    on our *core* machine and on every cluster *node*.
+
+Apptainer and SingularityCE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Apptainer and SingularityCE are two branches that originated from the original
+Singularity project. Apptainer is the community-driven continuation of Singularity,
+maintained under the Linux Foundation. It aims to provide a secure, stable, and
+performant container runtime for scientific and high-performance computing.
+SingularityCE (Community Edition) is maintained by Sylabs and focuses on
+delivering enterprise-grade features and support. Both versions retain the core
+principles of Singularity, such as ease of use, security, and compatibility with
+HPC environments, but they may offer different features and updates based on their
+respective development goals.
+
+Differences between Apptainer and SingularityCE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+While both Apptainer and SingularityCE originated from the same Singularity project
+and share many core principles, there are some differences between them:
+
+- **Governance and Maintenance**: Apptainer, managed by the Linux Foundation, is
+  community-driven and focuses on a secure, stable, and performant container runtime
+  for scientific computing and HPC. SingularityCE, maintained by Sylabs, aims to
+  deliver enterprise-grade features and support.
+- **Development Goals**: Apptainer emphasizes community contributions and open
+  development, focusing on stability and security for HPC. SingularityCE focuses
+  on enterprise features, commercial support, and may include proprietary enhancements.
+- **Features and Updates**: Apptainer prioritizes features and updates benefiting
+  the scientific and HPC community, driven by community needs. SingularityCE offers
+  features and updates tailored for enterprise users, focusing on commercial use cases.
+- **Support and Documentation**: Apptainer relies on community support and contributions,
+  with resources provided by the community and the Linux Foundation. SingularityCE
+  provides enterprise-level support and documentation, with resources offered by Sylabs.
+- **Licensing**: Apptainer is licensed under the Apache License 2.0, allowing free
+  use, modification, and distribution. SingularityCE's licensing may vary, with
+  some components potentially proprietary.
+- **Compatibility**: Apptainer is designed to be compatible with Singularity containers
+  and workflows, maintaining compatibility with existing features. SingularityCE
+  may introduce new features that are not backward-compatible with older versions.
+
+Please see `this discussion `_
+for more information regarding Apptainer and SingularityCE.
+
+Singularity compatibility
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The community behind Apptainer development wants to minimize the differences
+between Apptainer and Singularity: for example, ``*.sif`` images should work in
+both environments. Environment variables should also work with both tools, since
+Apptainer can read Singularity environment variables when its own are not defined.
+Moreover, Apptainer provides a symlink to the ``apptainer`` executable named
+``singularity``: this means that all ``singularity`` commands will be executed
+with the proper executable. See the `Singularity Compatibility `_
+documentation for more information.
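+
+For instance, you can check which runtime actually provides the ``singularity``
+command on your system (a sketch: the resolved path will differ between installations):
+
+.. code-block:: bash
+
+    # on an Apptainer installation, singularity is usually a symlink
+    # pointing to the apptainer executable
+    readlink -f $(which singularity)
+
+    # the two commands below are therefore equivalent
+    singularity --version
+    apptainer --version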
+
+Docker and Singularity
+^^^^^^^^^^^^^^^^^^^^^^
+
+Docker and Singularity are both popular containerization technologies, but they
+serve different purposes and environments. Docker is widely used in software development
+for creating, deploying, and managing containers in a variety of environments,
+including cloud and local development setups. It requires root privileges to run,
+which can pose security risks in multi-user environments. Singularity, on the
+other hand, is designed specifically for high-performance computing (HPC) and
+scientific workloads. It does not require root access to run containers, making
+it more secure for shared computing environments. Both Docker and Singularity
+allow for the creation of portable and reproducible environments, but Singularity's
+focus on security and compatibility with HPC systems sets it apart from Docker's
+broader application scope.
+
+Singularity and Docker Integration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Singularity provides seamless integration with Docker, allowing users to leverage
+Docker images without needing Docker installed on their systems. This integration
+offers several benefits:
+
+- **No Docker Installation Required**: You can use Docker images with Singularity
+  without having Docker installed on your system.
+- **Shell Access**: Singularity allows you to shell into a Docker image that has
+  been converted into a Singularity container.
+- **Instant Execution**: You can run a Docker image instantly as a Singularity
+  container, providing quick access to the software environment.
+- **Pulling Docker Images**: Singularity can pull Docker images directly from Docker
+  Hub without requiring sudo privileges.
+- **Building from Docker Layers**: You can build Singularity images using bases
+  from assembled Docker layers, which include the environment, guts, and labels
+  defined in the Docker image.
+
+These features make it easy to use Docker images in high-performance computing (HPC)
+environments where Singularity is preferred for its security and compatibility.
+For more information, please see the
+`Singularity and Docker `_
+documentation.
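+
+As a minimal example of this integration (using the public ``ubuntu`` image from
+Docker Hub for illustration), you can run a Docker image directly:
+
+.. code-block:: bash
+
+    # convert a Docker image into a local SIF file
+    singularity pull ubuntu.sif docker://ubuntu:22.04
+
+    # or execute a command in the container without an explicit pull step
+    singularity exec docker://ubuntu:22.04 cat /etc/os-release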
+
+Searching for a container
+-------------------------
+
+Singularity Hub (SHub) was previously a platform where users could store and share
+Singularity containers. However, Singularity Hub is no longer actively maintained
+as of April 2021. Instead, Singularity users now commonly use container registries
+like `Docker Hub `_ or `Sylabs Cloud`_
+to host and search for Singularity containers.
+
+To search for a Singularity container, follow these steps depending on the platform:
+
+Using Docker Hub with Singularity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Singularity can pull containers directly from Docker Hub.
+You can search for containers on `Docker Hub `_.
+Once you find a suitable container, use Singularity to pull it:
+
+.. code-block:: bash
+
+    singularity pull [container_name.sif] docker://<user>/<image>:<tag>
+
+Where ``container_name.sif`` is an optional parameter which sets the output file
+name of the downloaded container.
+
+Using Sylabs Cloud
+^^^^^^^^^^^^^^^^^^
+
+Sylabs provides a cloud platform for Singularity containers, and it's a common
+replacement for Singularity Hub. Visit `Sylabs Cloud`_
+to search for containers. To pull a container from Sylabs Cloud:
+
+.. code-block:: bash
+
+    singularity pull [container_name.sif] library://<user>/<collection>/<image>:<tag>
+
+Where ``container_name.sif`` is an optional parameter which sets the output file
+name of the downloaded container.
+
+Using Biocontainers
+^^^^^^^^^^^^^^^^^^^
+
+Biocontainers is a community-driven project that provides bioinformatics software
+in containers. You can search for bioinformatics containers on the
+`Biocontainers `_ website. To pull a container from
+Biocontainers:
+
+.. code-block:: bash
+
+    singularity pull [container_name.sif] docker://quay.io/biocontainers/<tool>:<tag>
+
+Where ``container_name.sif`` is an optional parameter which sets the output file
+name of the downloaded container.
+
+Using docker-daemon
+^^^^^^^^^^^^^^^^^^^
+
+Sometimes you may have a Docker image already pulled on your system, or you may
+have just created a Docker image that you want to convert to a Singularity container:
+you can use the ``docker-daemon`` URI to pull the image from the local Docker daemon:
+
+.. code-block:: bash
+
+    singularity pull [container_name.sif] docker-daemon:<image>:<tag>
+
+Where ``container_name.sif`` is an optional parameter which sets the output file name.
+
+.. warning::
+
+    Please note that when using the ``docker-daemon`` URI, you don't need to specify
+    ``docker-daemon://`` but just ``docker-daemon:`` followed by the image id.
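+
+For instance, assuming you have built a local image tagged ``myimage:latest``
+(a hypothetical name, used here only for illustration):
+
+.. code-block:: bash
+
+    # list the images known to the local Docker daemon
+    docker images
+
+    # convert the local image into a SIF file
+    singularity pull myimage.sif docker-daemon:myimage:latest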
+
+Using mulled-search
+^^^^^^^^^^^^^^^^^^^
+
+mulled-search is part of the `galaxy-tool-util `_ package
+and allows you to search for bioinformatics software containers in the Bioconda
+and Biocontainers repositories. To search for a container using mulled-search,
+you should specify the destination (e.g., quay) and the software you are looking
+for, for example:
+
+.. code-block:: bash
+
+    mulled-search --destination quay singularity -s bwa samtools
+
+When searching for more than one software at the same time, mulled-search will
+also return mulled containers, which are containers that have multiple software
+packages installed in the same container. Since it is not trivial to understand
+software versions in mulled containers, there's another tool in the ``galaxy-tool-util``
+package to determine the container *hash* of the desired software:
+
+.. code-block:: bash
+
+    mulled-hash bwa=0.7.17,samtools=1.19.2
+
+This will return the hash of the container that contains the specified software
+versions. You can use this hash to filter out the desired url from the mulled-search
+output:
+
+.. code-block:: bash
+
+    mulled-search --destination singularity -s bwa samtools | \
+        grep $(mulled-hash bwa=0.7.17,samtools=1.19.2)
+
+The returned url can be used to pull the container with singularity:
+
+.. code-block:: bash
+
+    singularity pull bwa_samtools.sif \
+        https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:a34558545ae1413d94bde4578787ebef08027945-0
+
+.. note::
+
+    The ``mulled-search`` tool is already installed in our shared infrastructure at IBBA.
+
+Using a Local Singularity Image
+-------------------------------
+
+If you have a ``.sif`` container locally, you can run it directly with Singularity:
+
+.. code-block:: bash
+
+    singularity run <container_name>.sif
+
+This also means that you can copy a pulled container to a different machine and
+still be able to run it there with singularity.
+
+.. hint::
+
+    In our shared infrastructure at IBBA, we have a shared directory in which we
+    put the singularity containers managed with nextflow: those containers are
+    downloaded by nextflow but can be used like any other pulled singularity container.
+    See :ref:`Setting NXF_SINGULARITY_CACHEDIR ` for more information.
+
+.. _Sylabs Cloud: https://cloud.sylabs.io/library
+
+Create a container
+------------------
+
+Build a container
+^^^^^^^^^^^^^^^^^
+
+Build a container without root access
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Create a mulled container
+^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/index.rst b/docs/index.rst
index dfa4d9f..f2923f1 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -41,7 +41,7 @@ section of our documentation.
    nextflow/getting-started
    nextflow/running
    nextflow/customize
-   nextflow/trubleshooting
+   nextflow/troubleshooting
 
 Indices and tables
 ==================
diff --git a/docs/nextflow/customize.rst b/docs/nextflow/customize.rst
index 6e98b62..e91998d 100644
--- a/docs/nextflow/customize.rst
+++ b/docs/nextflow/customize.rst
@@ -74,7 +74,9 @@ for a process by name, the second one let you to specify the requirements for ev
 process having the same label. To lower resources requirements, it's better to
 start by redefining the most used labels, like ``process_high`` and ``process_medium``,
 and then redefine single process by names. Start with an empty configuration
-file and add a ``process`` scope like this::
+file and add a ``process`` scope like this:
+
+.. code-block:: groovy
 
     process {
         withLabel:process_single {
@@ -107,7 +109,18 @@ or ``-config`` option:
 
 You can also declare resources dynamically. For example, you can make use of the
 ``check_max`` function, but you will require to define the ``check_max`` function
-in your custom configuration file::
+in your custom configuration file. Remember also to specify ``max_memory``, ``max_cpus``
+and ``max_time`` in your *custom configuration file*:
+
+.. code-block:: groovy
+
+    params {
+        // Max resource options
+        // Defaults only, expecting to be overwritten
+        // need to be specified in order for the check_max function to work
+        max_memory = '64.GB'
+        max_cpus = 32
+        max_time = '240.h'
+    }
 
     process {
         withLabel:process_medium {
@@ -166,7 +179,9 @@ Provide custom parameters to a process
 
 Some modules may require additional parameters to be provided in order to work
 correctly. This parameters can be specified with the ``ext.args`` variable within
-the process scope in the custom configuration file, for example::
+the process scope in the custom configuration file, for example:
+
+.. code-block:: groovy
 
     process {
         withName:process_fastqc {
@@ -192,7 +207,9 @@ test data.
 A profile is defined in a configuration file, which is specified using
 the ``-profile`` option when running nextflow. A profile require a name which is
 used to identify the profile and a set of parameters. For example, you
-can define a profile like this in your ``custom.config`` file::
+can define a profile like this in your ``custom.config`` file:
+
+.. code-block:: groovy
 
     profiles {
         cineca {
@@ -229,7 +246,9 @@ You should have also a ``modules`` directory inside your project::
 
     cd my-new-pipeline
     touch main.nf nextflow.config modules.json README.md .nf-core.yml
 
-Next you have to edit ``modules.json`` in order to have minimal information::
+Next you have to edit ``modules.json`` in order to have minimal information:
+
+.. code-block:: json
 
     {
         "name": "",
@@ -304,7 +323,9 @@ In order to have a minimal pipeline, you need to add at least an unnamed workflo
 to your pipeline. Moreover, you should declare the input channels and the modules
 or the processes you plan to use. Suppose to create a minimal pipeline to do a *fastqc*
 analysis on a set of reads. You can install the ``fastqc`` module as described
-above and then add a workflow like this in your ``main.nf``::
+above and then add a workflow like this in your ``main.nf``:
+
+.. code-block:: groovy
 
     // Declare syntax version
     nextflow.enable.dsl=2
@@ -326,7 +347,9 @@ this is why we create an input channel and then we add *meta* relying on file na
 Please refer to the module ``main.nf`` file to understand how to call
 a module and how to pass parameters to it.
 Next you will need also a minimal ``nextflow.config`` configuration file to run your pipeline, in order
-to define where *softwares* could be found, and other useful options::
+to define where *softwares* could be found, and other useful options:
+
+.. code-block:: groovy
 
     params {
         input = null
@@ -516,7 +539,9 @@ script, you can re-use the same parameters within different scripts
 and keep your main file unmodified: this keeps the stuff simple and let
 you to focus only on important changes with your *CVS*.
 For example, you could define a custom ``params.json`` *JSON* config file in which specify your
-specific requirements::
+specific requirements:
+
+.. code-block:: json
 
     {
         "readPaths": "$baseDir/fastq/*.fastq.gz",
@@ -526,7 +551,9 @@ All the other parameters which cannot be specified using the command line interface
 need to be provided in a *custom configuration* file using the standard nextflow
-syntax::
+syntax:
+
+.. code-block:: groovy
 
     profiles {
         slurm {
@@ -570,7 +597,9 @@ your pipeline like this::
 
     nextflow run . -profile test,singularity
 
 Where the ``test`` profile is specified in ``nextflow.config`` and refers to
-the *test dataset* you provide with your pipeline::
+the *test dataset* you provide with your pipeline:
+
+.. code-block:: groovy
 
     profiles {
         ...
diff --git a/docs/nextflow/trubleshooting.rst b/docs/nextflow/troubleshooting.rst
similarity index 83%
rename from docs/nextflow/trubleshooting.rst
rename to docs/nextflow/troubleshooting.rst
index 709ec8b..a095bef 100644
--- a/docs/nextflow/trubleshooting.rst
+++ b/docs/nextflow/troubleshooting.rst
@@ -1,6 +1,6 @@
-Trubleshooting
-==============
+Troubleshooting
+===============
 
 .. contents:: Table of Contents
 
@@ -38,7 +38,9 @@ You can get the same information by getting logs from the nextflow row. For exam
 supposing that our last run is named ``sharp_feynman`` (you can get information
 about run name using ``nextflow log`` or ``nextflow log -quiet``), you can get
 information about steps working dir by printing specific *fields* with ``nextflow log``, for
-example::
+example:
+
+.. code-block:: bash
 
     $ nextflow log sharp_feynman -f 'process,status,exit,hash,duration,workdir'
     remove_whitespaces  COMPLETED  0  bd/2ebe9a  551ms  /home/cozzip/nf-mirna/work/bd/2ebe9a9f2e1703a18059fbdf1191e7
@@ -64,7 +66,9 @@ same folder we get from nextflow error report.
    an error in nextflow configuration.
 
 Now is time to understand what happened. Enter in the failed job work directory an
-list all files (including hidden ones) with `ls -a`::
+list all files (including hidden ones) with ``ls -a``:
+
+.. code-block:: bash
 
     $ ls -a .command*
     .command.begin  .command.err  .command.log  .command.out  .command.run  .command.sh
@@ -77,23 +81,29 @@ parameter in the pipeline configuration files.
 
 In order to have information on errors, we can manually execute the nextflow steps:
 first of all, we need to export an environment variable in order to increase
-nextflow verbosity::
-
-    $ export NXF_DEBUG=2
+nextflow verbosity:
+
+.. code-block:: bash
+
+    export NXF_DEBUG=2
 
 Next we can execute the ``.command.run`` scripts, which is executed by nextflow and
-that call ``.command.sh``::
+that calls ``.command.sh``:
+
+.. code-block:: bash
 
-    $ bash .command.run
+    bash .command.run
 
 Command is expected to fail (since nextflow returned an error previously). However
 by setting ``NXF_DEBUG=2``, we can see all commands launched by nextflow and in
 particular the ``singularity`` command launched by nextflow. Next we can take such
 command, simplify it and launch a singularity session in order to test our command
 using a terminal inside the same singularity container used by our pipeline
-step, for example with::
+step, for example with:
+
+.. code-block:: bash
 
-    $ singularity exec -B $HOME -B /home/ -B $PWD/ /home/core/nxf_singularity_cache/bunop-mirdeep2.img /bin/bash
+    singularity exec -B $HOME -B /home/ -B $PWD/ /home/core/nxf_singularity_cache/bunop-mirdeep2.img /bin/bash
 
 Where all ``-B`` parameters indicate all folders that will be mounted inside our
 container (such as our ``$HOME`` directory, the ``/home`` directory, which is the
@@ -139,11 +149,13 @@ into ``$NXF_SINGULARITY_CACHEDIR`` cache directory.
 Track the failed ``command`` in nextflow output, then move in ``$NXF_SINGULARITY_CACHEDIR``
 directory and call such command manually. After downloading the image, rename the
 file and remove the ``.pulling.[0-9]*`` from the image name (nextflow images should end with ``.img``
-extension). For example in the previous case::
-
-    $ cd $NXF_SINGULARITY_CACHEDIR
-    $ singularity pull --name quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img.pulling.1610634041691 docker://quay.io/biocontainers/bioconductor-summarizedexperiment:1.18.1--r40_0 > /dev/null
-    $ mv quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img.pulling.1610634041691 quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img
+extension). For example in the previous case:
+
+.. code-block:: bash
+
+    cd $NXF_SINGULARITY_CACHEDIR
+    singularity pull --name quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img.pulling.1610634041691 docker://quay.io/biocontainers/bioconductor-summarizedexperiment:1.18.1--r40_0 > /dev/null
+    mv quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img.pulling.1610634041691 quay.io-biocontainers-bioconductor-summarizedexperiment-1.18.1--r40_0.img
 
 After that, you could resume your nextflow pipeline by adding the ``-resume`` option
 in your command line in order using the cached results of the previous calculations
@@ -168,22 +180,51 @@ Is such case, you have two options. The first is to execute a previous version o
 the pipeline that is compatible with your nextflow version. You can have information
 on version on `nf-core pipeline `__
 or directly from the GitHub project of `nf-core `__ organization.
-Once you find your desidered version, you have to declare it with the parameter
-``-r`` when calling nextflow, for example::
+Once you find your desired version, you have to declare it with the parameter
+``-r`` when calling nextflow, for example:
+
+.. code-block:: bash
 
-    $ nextflow run nf-core/rnaseq -r 2.0 -profile test,singularity -resume
+    nextflow run nf-core/rnaseq -r 2.0 -profile test,singularity -resume
 
 The second option is to upgrade your nextflow version. You can install a specific
 version of nextflow from the `nextflow release page `__
 Copy the nextflow asset link present in every release, and then install nextflow like
-this::
+this:
 
-    $ wget -qO- https://github.com/nextflow-io/nextflow/releases/download/v20.12.0-edge/nextflow-20.12.0-edge-all | bash
+.. code-block:: bash
+
+    wget -qO- https://github.com/nextflow-io/nextflow/releases/download/v20.12.0-edge/nextflow-20.12.0-edge-all | bash
 
 This will download all the requirements and will put nextflow in your current
 directory. Change the nextflow default permissions to ``755`` and move such executable
 in a directory with a higher position in your ``$PATH`` environment, for example
 ``$HOME/bin``
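+
+A minimal sketch of these final steps (assuming the installer left a ``nextflow``
+launcher in your current directory and that ``$HOME/bin`` is in your ``$PATH``):
+
+.. code-block:: bash
+
+    chmod 755 nextflow
+    mv nextflow $HOME/bin/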
+
+Cannot find pipeline version
+----------------------------
+
+Sometimes it is possible that you cannot find a specific version of a pipeline that
+you know is present in the remote repository, with an error like this::
+
+    Cannot find revision `x.x.x` -- Make sure that it exists in the remote repository
+
+This could happen if your local version of the pipeline (in your ``$HOME/.nextflow/assets/``)
+is not up to date with the remote repository. In this case, you need to synchronize your
+local version with the remote repository, for example:
+
+.. code-block:: bash
+
+    nextflow pull nf-core/methylseq
+
+You can also specify a specific version of the pipeline to pull, for example:
+
+.. code-block:: bash
+
+    nextflow pull nf-core/methylseq -r 2.7.1
+
+This will update your local version of the pipeline, and you will be able to call
+the desired version of the pipeline.
+
 Cannot execute nextflow interactively
 -------------------------------------