https://nf-co.re logo
Join Slack
Powered by
# help
  • n

    Nick Eckersley

    07/09/2025, 8:12 AM
    Hello, I am trying to run nfcore/mag. I cleared the work directory from a previous run and am trying to start from scratch again with a different set of samples. It is run on SLURM using the departmental config. The job will run for about 15 seconds and then fail every time. Here is a portion of the .out:
    Copy code
    executor >  slurm (9)
    [25/e87748] NFC…FASTQC_RAW (N073_run0_raw) | 0 of 4 ✘
    [8d/869603] NFC…OCESSING:FASTP (N073_run0) | 0 of 4 ✘
    [dc/ec7b0e] NFC…SM259684v1_genomic.fna.gz) | 0 of 1
    [-        ] NFC…BOWTIE2_PHIX_REMOVAL_ALIGN -
    [-        ] NFC…EPROCESSING:FASTQC_TRIMMED -
    [-        ] NFC…AD_PREPROCESSING:CAT_FASTQ -
    [-        ] NFC…PREPROCESSING:NANOPLOT_RAW -
    [-        ] NFC…PREPROCESSING:PORECHOP_ABI -
    [-        ] NFC…EAD_PREPROCESSING:NANOLYSE -
    [-        ] NFC…EAD_PREPROCESSING:FILTLONG -
    [-        ] NFC…OCESSING:NANOPLOT_FILTERED -
    [-        ] NFC…:MAG:CENTRIFUGE_CENTRIFUGE -
    [-        ] NFC…MAG:MAG:CENTRIFUGE_KREPORT -
    [-        ] NFCORE_MAG:MAG:KRAKEN2         -
    [-        ] NFCORE_MAG:MAG:POOL_LONG_READS -
    [-        ] NFCORE_MAG:MAG:METASPADES      -
    [-        ] NFC…E_MAG:MAG:METASPADESHYBRID -
    [-        ] NFCORE_MAG:MAG:MEGAHIT         -
    [-        ] NFC…_MAG:MAG:GUNZIP_ASSEMBLIES -
    [-        ] NFCORE_MAG:MAG:QUAST           -
    Plus 31 more processes waiting for tasks…
    Pulling Singularity image <https://depot.galaxyproject.org/singularity/fastp:0.23.4--h5f740d0_0> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-fastp-0.23.4--h5f740d0_0.img]
    Pulling Singularity image <https://depot.galaxyproject.org/singularity/bowtie2:2.4.2--py38h1c8e9b9_1> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-bowtie2-2.4.2--py38h1c8e9b9_1.img]
    Pulling Singularity image <https://depot.galaxyproject.org/singularity/fastqc:0.12.1--hdfd78af_0> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-fastqc-0.12.1--hdfd78af_0.img]
    WARN: Singularity cache directory has not been defined -- Remote image will be stored in the path: /home/neckersl/scratch/private/nfcore_mag/work/singularity -- Use the environment variable NXF_SINGULARITY_CACHEDIR to specify a different location
    [nf-core/mag] ERROR: no bins passed the bin size filter specified between --bin_min_size 0 and --bin_max_size null. Please adjust parameters.
    ERROR ~ Error executing process > 'NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD (GCA_002596845.1_ASM259684v1_genomic.fna.gz)'
    
    Caused by:
      Process `NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD (GCA_002596845.1_ASM259684v1_genomic.fna.gz)` terminated with an error exit status (1)
    
    
    Command executed:
    
      mkdir bowtie
      bowtie2-build --threads 1 GCA_002596845.1_ASM259684v1_genomic.fna.gz GCA_002596845
    
      cat <<-END_VERSIONS > versions.yml
      "NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD":
          bowtie2: $(echo $(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*$//')
      END_VERSIONS
    
    Command exit status:
      1
    
    Command output:
      (empty)
    
    Command error:
      INFO:    Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
      INFO:    Environment variable SINGULARITYENV_NXF_TASK_WORKDIR is set, but APPTAINERENV_NXF_TASK_WORKDIR is preferred
      INFO:    Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
      WARNING: Skipping mount /var/lib/apptainer/mnt/session/etc/resolv.conf [files]: /etc/resolv.conf doesn't exist in container
      bash: .command.run: No such file or directory
    
    Work dir:
      /home/neckersl/scratch/private/nfcore_mag/work/dc/ec7b0eb2e8becb57f9f20e81f5f4eb
    
    Container:
      /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-bowtie2-2.4.2--py38h1c8e9b9_1.img
    
    Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
    
     -- Check '.nextflow.log' file for details
    -[nf-core/mag] Pipeline completed with errors-
    The script I used was the same as one I have used previously that worked fine, just the raw reads are different:
    Copy code
    #!/bin/bash
    #SBATCH --job-name=nfcore_mag
    #SBATCH --output=../logs/%x_%j.out
    #SBATCH --error=../logs/%x_%j.err
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --partition=long
    
    
    # Activate Conda
    source /mnt/apps/users/neckersl/conda/etc/profile.d/conda.sh
    conda activate nfcore
    
    # Run the pipeline
    nextflow run nf-core/mag -r 4.0.0 \
      -profile cropdiversityhpc \
      --input "$HOME/scratch/private/nfcore_mag/data/N072-75_fb_samplesheet.csv" \
      --outdir "$HOME/scratch/private/nfcore_mag/output/spades_fb" \
      --gtdb_db /mnt/shared/datasets/databases/gtdb/GTDB_280324 \
      -work-dir "$HOME/scratch/private/nfcore_mag/work"
    Any help would be greatly appreciated. Thanks.
    j
    p
    • 3
    • 35
  • s

    Suhan Cho

    07/09/2025, 10:45 AM
    Hello everyone, I'm trying to exclude some chromosomes before aligning / processing bulk RNA-seq, is there any function that one could add customized script in nf-core rnaseq or do i just have to remove some chromosomes in the gtf-level? Thanks in advance 🙂
    #️⃣ 1
    c
    n
    • 3
    • 3
  • j

    Jimmy Lail

    07/09/2025, 1:52 PM
    Hello, I am stuck troubleshooting a subworkflow nf-test for “Diamond”. When I run the nf-test
    nf-test test subworkflows/local/diamond/tests/main.nf.tests
    , I get an output of
    No tests to execute
    . This subworkflow is to execute four modules:
    NCBIREFSEQDOWNLOAD
    ,
    DIAMONDPREPARETAXA
    ,
    DIAMOND_MAKEDB
    , and
    DIAMOND_BLASTP
    . The first two modules are local modules and nf-testing succeeds while the later two are nf-core installed modules. Here is a link to the github PR: https://github.com/nf-core/proteinannotator/pull/50 Here is the subworkflow main.nf
    Copy code
    include { NCBIREFSEQDOWNLOAD } from '../../../modules/local/ncbirefseqdownload/main'
    include { DIAMONDPREPARETAXA } from '../../../modules/local/diamondpreparetaxa/main'
    include { DIAMOND_MAKEDB } from '../../../modules/nf-core/diamond/makedb/main'
    include { DIAMOND_BLASTP  } from '../../../modules/nf-core/diamond/blastp/main'
    
    /*
    * Pipeline parameters
    */
    // params.refseq_release = 'complete'
    // params.taxondmp_zip = '<<ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz>>'
    // params.taxonmap = '<<ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz>>'
    // params.diamond_outfmt = 6
    // params.diamond_blast_columns = qseqid
    
    workflow DIAMOND {
        take:
        ch_fasta // channel: [ val(meta), [ fasta ] ]
    
        main:
    
        ch_versions = Channel.empty()
    
        // TODO nf-core: substitute modules here for the modules of your subworkflow
        NCBIREFSEQDOWNLOAD(
            params.refseq_release
        )
        ch_diamond_reference_fasta = NCBIREFSEQDOWNLOAD.out.refseq_fasta
        ch_versions = ch_versions.mix(NCBIREFSEQDOWNLOAD.out.versions.first())
    
        DIAMONDPREPARETAXA (
            params.taxondmp_zip
        )
        ch_taxonnodes = DIAMONDPREPARETAXA.out.taxonnodes
        ch_taxonnames = DIAMONDPREPARETAXA.out.taxonnames
        ch_versions = ch_versions.mix(DIAMONDPREPARETAXA.out.versions.first())
    
        DIAMOND_MAKEDB (
            ch_diamond_reference_fasta,
            params.taxonmap, // make default <<ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz>>
            ch_taxonnodes,
            ch_taxonnames
        )
        ch_diamond_db = DIAMOND_MAKEDB.out.db
        ch_versions = ch_versions.mix(DIAMOND_MAKEDB.out.versions.first())
    
        //ch_diamond_db = Channel.of( [ [id:"diamond_db"], file(params.diamond_db, checkIfExists: true) ] )
    
        DIAMOND_BLASTP (
            ch_fasta,
            ch_diamond_db,
            params.diamond_outfmt,
            params.diamond_blast_columns,
        )
        emit:
        ch_versions = ch_versions.mix(DIAMOND_BLASTP.out.versions.first())
        ch_diamond_tsv = DIAMOND_BLASTP.out.tsv
    Here is the main.nf.test:
    Copy code
    nextflow_workflow {
    
        name "Test Subworkflow DIAMOND"
        script "../main.nf"
        workflow "DIAMOND"
    
        tag "subworkflows"
        tag "subworkflows_"
        tag "subworkflows/diamond"
        // TODO nf-core: Add tags for all modules used within this subworkflow. Example:
        tag "ncbirefseqdownload"
        tag "diamondpreparetaxa"
        tag "diamond/makedb"
        tag "diamond/blastp"
    
        // TODO nf-core: Change the test name preferably indicating the test-data and file-format used
        setup {
            run("NCBIREFSEQDOWNLOAD") {
                script "../../../../modules/local/ncbirefseqdownload/main.nf"
                process {
                    """
                    input[0] = 'other'
                    """
                }
            }
            run("DIAMONDPREPARETAXA") {
                script "../../../../modules/local/diamondpreparetaxa/main.nf"
                process {
                    """
                    input[0] = '<<ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz>>'
                    """
                }
            }
            run("DIAMOND_MAKEDB") {
                script "../../../../modules/nf-core/diamond/makedb/main.nf"
                process {
                    """
                    input[0] = [ [id:'test2'], [ NCBIREFSEQDOWNLOAD.out.refseq_fasta ] ]
                    input[1] = '<<ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz>>'
                    input[2] = DIAMONDPREPARETAXA.out.taxonnodes
                    input[3] = DIAMONDPREPARETAXA.out.taxonnames
                    """
                }
            }
            run("DIAMOND_BLASTP") {
                script "../../../../modules/nf-core/diamond/makedb/main.nf"
                process {
                    """
                    input[0] = [ [id:'test'], file(params.modules_testdata_base_path + 'genomics/sarscov2/genome/proteome.fasta', checkIfExists: true) ]
                    input[1] = DIAMOND_MAKEDB.out.db
                    input[2] = 6
                    input[3] = 'qseqid qlen'
                    """
                }
            }
        }
        test("Test Diamond subworkflow succeeds") {
    
            when {
                params {
                    params.refseq_release = 'complete'
                    params.taxondmp_zip = '<<ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz>>'
                    params.taxonmap = '<<ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz>>'
                    params.diamond_outfmt = 6
                    params.diamond_blast_columns = 'qseqid'
                }
                workflow {
                    """
                    input[0] = file("test1.fasta", checkIfExists: true)
                    """
                }
            }
    
            then {
                assertAll(
                    { assert workflow.success},
                    { assert snapshot(workflow.out).match()}
                    //TODO nf-core: Add all required assertions to verify the test output.
                )
            }
        }
    }
    Thank you to any and all help.
  • g

    Grigorii Nos

    07/09/2025, 2:08 PM
    Hi! I was using rnadnavar pipeline, and had error in process NFCORE_RNADNAVARRNADNAVARBAM_PROCESSINGBAM GATK PREPROCESSINGBAM_SPLITNCIGARREADSCRAM MERGE INDEX SAMTOOLSMERGE_CRAM related to pulling sing image, then ended up pulling it manually and hardcoding path to it in main.nf. Now get this error that makes no sense, does anyone hace any idea what to do? ERROR ~ Error executing process > 'NFCORE_RNADNAVARRNADNAVARBAM_PROCESSINGBAM GATK PREPROCESSINGBAM_SPLITNCIGARREADSCRAM MERGE INDEX SAMTOOLSMERGE_CRAM (1)' Caused by: Not a valid path value type: java.util.LinkedHashMap ([id:[GCA_000001405.15_GRCh38_full_analysis_set]]) Container: /home/projects/..../singularity-cache/biocontainers-samtools\:v1.9-4-deb_cv1 Tip: view the complete command output by changing to the process work dir and entering the command
    cat .command.out
    -- Check '.nextflow.log' file for details ERROR ~ Pipeline failed. Please refer to troubleshooting docs: https://nf-co.re/docs/usage/troubleshooting -- Check '.nextflow.log' file for details
    #️⃣ 1
    c
    n
    • 3
    • 2
  • k

    Krista Pipho

    07/09/2025, 5:59 PM
    Hello again! I am trying to use the Multi-QC module that comes with the NF-core template. I want to display custom content as described here: https://docs.seqera.io/multiqc/custom_content I have used this strategy: https://github.com/MultiQC/test-data/blob/main/data/custom_content/embedded_config/bargraph_multiple_samples_no_sort_mqc.csv I think I am having a problem similar to the one discussed here that was marked as resolved: https://github.com/MultiQC/MultiQC/issues/2666 There is no error generated, but the content is not shown. Do you have any advice for me?
  • m

    Michael Beavitt

    07/09/2025, 7:33 PM
    Hello, I'm trying to use 'wave' in the latest template build (3.3.2) and it doesn't seem to be defaulting to containers after checking for a dockerfile. I've set the appropriate line in my nextflow.config (going by this documentation: https://www.nextflow.io/docs/latest/wave.html) to
    Copy code
    wave.strategy           = ['dockerfile','container']
    And the only thing in the pipeline is multiqc. If I run the pipeline using:
    Copy code
    nextflow run main.nf -profile wave,test --outdir test
    Then I get an error that the multiqc executable was not found.
    Copy code
    (nextflow) mbeavitt@ORIGIN-LT-27:~/Code/Nextflow/test$ ./run.sh
    
     N E X T F L O W   ~  version 25.04.6
    
    Launching `main.nf` [sad_gilbert] DSL2 - revision: 3b35870dc7
    
    Input/output options
      input                     : <https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/samplesheet/samplesheet_test_illumina_amplicon.csv>
      outdir                    : test
    
    Institutional config options
      config_profile_name       : Test profile
      config_profile_description: Minimal test dataset to check pipeline function
    
    Generic options
      trace_report_suffix       : 2025-07-09_20-31-46
    
    Core Nextflow options
      runName                   : sad_gilbert
      launchDir                 : /home/mbeavitt/Code/Nextflow/test
      workDir                   : /home/mbeavitt/Code/Nextflow/test/work
      projectDir                : /home/mbeavitt/Code/Nextflow/test
      userName                  : mbeavitt
      profile                   : wave,test
      configFiles               : /home/mbeavitt/Code/Nextflow/test/nextflow.config
    
    !! Only displaying parameters that differ from the pipeline defaults !!
    ------------------------------------------------------
    executor >  local (1)
    [6b/5aa8b1] ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC [  0%] 0 of 1
    ERROR ~ Error executing process > 'ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC'
    
    executor >  local (1)
    [6b/5aa8b1] ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC [  0%] 0 of 1 ✘
    Execution cancelled -- Finishing pending tasks before exit
    ERROR ~ Error executing process > 'ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC'
    
    Caused by:
      Process `ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC` terminated with an error exit status (127)
    
    executor >  local (1)
    [6b/5aa8b1] ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC [  0%] 0 of 1 ✘
    Execution cancelled -- Finishing pending tasks before exit
    -[originsciences/repaq2fastq] Pipeline completed with errors-
    ERROR ~ Error executing process > 'ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC'
    
    Caused by:
      Process `ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC` terminated with an error exit status (127)
    
    executor >  local (1)
    [6b/5aa8b1] ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC [  0%] 0 of 1 ✘
    Execution cancelled -- Finishing pending tasks before exit
    -[originsciences/repaq2fastq] Pipeline completed with errors-
    ERROR ~ Error executing process > 'ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC'
    
    Caused by:
      Process `ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC` terminated with an error exit status (127)
    
    
    Command executed:
    
      multiqc \
          --force \
           \
          --config multiqc_config.yml \
           \
           \
           \
           \
           \
          .
    
      cat <<-END_VERSIONS > versions.yml
      "ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC":
          multiqc: $( multiqc --version | sed -e "s/multiqc, version //g" )
      END_VERSIONS
    
    Command exit status:
      127
    
    Command output:
      (empty)
    
    Command error:
      .command.sh: line 3: multiqc: command not found
    
    Work dir:
      /home/mbeavitt/Code/Nextflow/test/work/6b/5aa8b1d054a0a65276379388988010
    
    Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
    
     -- Check '.nextflow.log' file for details
    ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
    
     -- Check '.nextflow.log' file for details
    Any ideas? What am I doing wrong? It seems from the .nextflow.log file that the wave container is being requested and returned successfully:
    Copy code
    Jul-09 20:31:55.966 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave config: WaveConfig(enabled:true, endpoint:<https://wave.seqera.io>, containerConfigUrl:[], tokensCacheMaxDuration:30m, condaOpts:CondaOpts(mambaImage=mambaorg/micromamba:1.5.10-noble; basePackages=conda-forge::procps-ng, commands=null), strategy:[dockerfile, container], bundleProjectResources:null, buildRepository:null, cacheRepository:null, retryOpts:RetryOpts(delay:450ms, maxDelay:1m 30s, maxAttempts:10, jitter:0.25), httpClientOpts:HttpOpts(), freezeMode:false, preserveFileTimestamp:null, buildMaxDuration:40m, mirrorMode:null, scanMode:null, scanAllowedLevels:null)
    Jul-09 20:31:56.011 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Request limiter blocked PT0.001S
    Jul-09 20:31:56.012 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave request: <https://wave.seqera.io/v1alpha2/container>; attempt=1 - request: SubmitContainerTokenRequest(towerAccessToken:<redacted>, towerRefreshToken:null, towerWorkspaceId:null, towerEndpoint:<https://api.cloud.seqera.io>, containerImage:quay.io/biocontainers/multiqc:1.29--pyhdfd78af_0, containerFile:null, containerConfig:ContainerConfig(), condaFile:null, containerPlatform:linux/amd64, buildRepository:null, cacheRepository:null, timestamp:2025-07-09T20:31:56.010626154+01:00, fingerprint:68285522bb7722ef8aea452fcae38e1d, freeze:false, format:null, dryRun:false, workflowId:null, containerIncludes:null, packages:null, nameStrategy:null, mirror:false, scanMode:null, scanLevels:null)
    Jul-09 20:31:56.653 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave response: statusCode=200; body={"requestId":"8f9ffe7cc83e","containerToken":"8f9ffe7cc83e","targetImage":"wave.seqera.io/wt/8f9ffe7cc83e/biocontainers/multiqc:1.29--pyhdfd78af_0","expiration":"2025-07-11T07:31:56.732752494Z","containerImage":"quay.io/biocontainers/multiqc:1.29--pyhdfd78af_0","freeze":false,"mirror":false,"succeeded":true}
    Please find my code at this repo with a run.sh script if you'd like to try and reproduce this: https://github.com/mbeavitt/example_wave_pipeline
    t
    • 2
    • 2
  • n

    Nour El Houda Barhoumi

    07/09/2025, 9:27 PM
    Hello I Hope you are doing well, I am analyzing bulk RNAseq of bacterial genome.and I Have biological replicates,2 for each condition. shall I analyze them seperatly before downstrem analysis? or merging fastq files before alignement? which method and strategy is more appropriate! thank you
    t
    • 2
    • 2
  • j

    Jaykishan Solanki

    07/10/2025, 7:33 AM
    hello guys is there any efficient way to combine gvcf using multi threading i have 128 cpu core HPC i have tired GenomeInfoDB but that too took lot of time and in between shutting down the engine
    t
    • 2
    • 1
  • a

    Anna Norén

    07/10/2025, 8:08 AM
    Hello, I've tried updating the
    blastn
    module by adding
    taxid
    as input in this pr. I would like to add taxid testdata to properly test the update, in which folder should I place this testdata?
    j
    • 2
    • 5
  • s

    Shravan Lingampally

    07/13/2025, 6:59 PM
    Hey, just wondering is anyone hiring or open to taking on an intern?
    n
    m
    • 3
    • 3
  • s

    Samuel Lampa

    07/14/2025, 3:38 PM
    I have a problem, after updating our pipeline to nf-core 3.2.0 template (work started before the 3.3.0), that now when running the main pipeline with just the
    --help
    flag (
    nextflow run <http://main.nf|main.nf> --help
    ), does no longer print the available options etc, but rather I get errors from missing parameters etc:
    Copy code
    ERROR ~ Validation of pipeline parameters failed!
    
     -- Check '.nextflow.log' file for details
    The following invalid input values have been detected:
    
    * Missing required parameter(s): input, outdir
    * Missing required parameter(s): db
    So, somehow I have managed to mess up the pipeline initialization code I assume. Are there any well known caveats related to this? I have been trying to carefully compare our code with the one in the template branch, but haven't found any obvious differences as of yet.
    t
    • 2
    • 3
  • n

    Nour El Houda Barhoumi

    07/14/2025, 4:51 PM
    Hello,I hope you’re doing well. I’m working on a bulk RNA-seq dataset aligned to a bacterial genome using Bowtie2. When inspecting my BAM file with
    samtools view
    , I noticed that the mapping quality (MAPQ) scores range from 0 to 42. I would like to ask: • How should I filter out low-quality reads based on MAPQ values? • What MAPQ threshold would you recommend for bacterial RNA-seq to ensure reliable alignments? Thank you in advance for your help.
    t
    • 2
    • 4
  • k

    Kanishka Manna

    07/14/2025, 11:04 PM
    Hello everyone, I am facing a problem .. Essentially, I have four
    longdownload
    labeled processes that take a while to download reference databases. I think these are taking a while to download because all 4 processes are running at the same time, potentially bogging down our internet connection. I would like to force each of these downloads to run sequentially, rather than parallel. I tried changing the label from medium to
    longdownload
    with a set
    maxForks = 1
    . However, after testing, since these processes are all named differently, these 4 jobs are submitted to the SLURM scheduler in parallel. Any suggestions on how to force these jobs to run sequentially? I don't even need true sequential (in a specific order), I just want to ensure only one of these jobs are run at a time. Thank you 🙏🏼 Below is a snippet of the config file ..
    Copy code
    executor {
        name = 'slurm'
        queueSize = 10
    }
    
    // Default resource allocations to run the pipeline locally
    process {
    
        executor = 'slurm'
    
        // Set global SLURM options
        clusterOptions = '--parsable'
    
        // Common SLURM directives
        queue = 'cpu-s1-0'  // Override per-label with `clusterOptions` below
        cluster = { 
            "-A cpu-s1-0"  // Override per-label if needed
        }   
    
        errorStrategy = 'retry'
        maxRetries = 1 
    
        // Resource specifications by label
        withLabel: 'low' {
            cpus = 2 
            memory = 8.GB
            time = '8h'
            clusterOptions = '--partition=cpu-s1-0 --account=cpu-s1-0'
        }   
    
        withLabel: 'longdownload' {
            cpus = 2 
            memory = 8.GB
            time = '168h'
            maxForks = 1
            clusterOptions = '--partition=cpu-s1-0 --account=cpu-s1-0'
        }
  • j

    Jared Andrews

    07/15/2025, 2:24 PM
    Quick, potentially stupid question. If
    -r
    is not specified, is the latest tagged release used by default?
    p
    • 2
    • 5
  • m

    Michael Heskett

    07/15/2025, 5:46 PM
    hi I wasnt sure if this is best posted in configs or help, but can anyone tell me if this does what I am expecting which is, giving ALL tasks a maximum resource limit of what is listed here? my core/sarek pipeline failed because tumor/normal calling took longer than the default 8h process { resourceLimits = [memory: 245.GB, cpus: 32, time: 100.h] }
    p
    • 2
    • 8
  • m

    Michael Heskett

    07/15/2025, 5:47 PM
    also, has anyone used core/sarek with large WGS files? how does it break up the large files for parallelization? it could take a week to run WGS tumor/normal calling with mutect2 on a single file without parallelization
    t
    • 2
    • 1
  • m

    Michael Heskett

    07/15/2025, 7:01 PM
    does anyone know why core/rnaseq would produce a whole directory with genome and genome index even when you provide your own genome and --star-index directory?
  • m

    Michael Heskett

    07/15/2025, 7:57 PM
    has anyone had experience where using --resume leads to stalling indefinitely? as if nextflow is waiting for files that dont exist?
  • a

    Aleksandra Vitkovac

    07/17/2025, 9:07 AM
    I’ve tried to run nf-core/raredisease pipeline.. I got error : “vcfanno.go116 found 7 sources from 3 files vcfanno.go157 falling back to non-bgzip” And after that pipeline didn’t make index file Has anyone experienced a similar error?
    #️⃣ 1
    c
    • 2
    • 2
  • n

    Nour El Houda Barhoumi

    07/17/2025, 11:10 AM
    hello, I Hope you are doing well. Has anyone idea how converting bam files to tagalign for pairedend reads? thank you
    t
    • 2
    • 5
  • h

    Hannah

    07/17/2025, 3:59 PM
    Hi everyone, have anyone encountered this issue before? It seems that version number is rounded, i.e.
    1.10
    to
    1.1
    . The cause of this might be due to
    org.yaml.snakeyaml.Yaml
    , introduced in 3.2.0, treated version number as float. https://github.com/nf-core/modules/blob/master/subworkflows/nf-core/utils_nfcore_pipeline/main.nf#L81. The previous code using
    yaml
    work fine as version values remain string (still in quote i.e.
    gunzip: '1.10'
    ). I have locally modified
    utils_nf-core_pipeline/main.nf
    but wondering if is there a sub-workflow patch command to avoid nf-core linting in failing or is there a fix for this already Thank you for your support!
    😬 1
    😱 1
  • e

    eparisis

    07/18/2025, 10:35 AM
    Hey! I’ve set the
    errorstrategy
    for maxbin to ignore as it stops with an error if it cant create bins. My question now is how can I disable re-executing this module when using
    -resume
    ? It messes up all the cashing downstream and tries to re-run the process every time I resume. Whats the best workaround to that?
    t
    • 2
    • 5
  • d

    Daniel Lundin

    07/18/2025, 1:36 PM
    I was asked why we don't support unzipped input fastq files in #C02FC6VFQG1. Personally, I can't see why one would want not to keep them gzipped, but is this a general principle in nf-core that I can refer to when replying?
    a
    t
    • 3
    • 6
  • j

    Joshua Williams

    07/19/2025, 4:44 PM
    Hi all. I’m working on a HPC where the software node has internet access but the submission node does not. I’m trying to run the nf-core/ampliseq pipeline. I’ve been successful in running the demo pipeline, but I am unable to run the whole pipeline, which gets stuck at Process NFCORE_AMPLISEQAMPLISEQQIIME2_TABLEFILTERTAXA, saying “Invalid value for ‘--o-filtered-table’: ‘’ is not a writable directory, cannot write output to it”. If I skip that step by providing option --exclude_taxa “none”, it just fails later with the same error message on Process NFCORE_AMPLISEQAMPLISEQQIIME2_EXPORT:QIIME2_EXPORT_RELASV which cannot be skipped as easily. • On software node, build a mamba environment containing openjdk 17 (needed for running nextflow) mamba create -n nextflow_env openjdk -y The installed version is 17.0.15-internal • On software node, download nextflow executable curl -s https://get.nextflow.io | bash The installed version is 25.04.6 • On software node, download and extract pipeline and singularity images nf-core pipelines download ampliseq –container-system singularity –compress tar.gz tar -zxvf nf-core-ampliseq_2.14.0.tar.gz • On software node, download nf-schema plugin ./nextflow plugin install nf-schema@2.2.0 • On software node, build test dataset with metadata, fastq files, and sample sheet. mkdir -p ampliseq_test_data wget https://raw.githubusercontent.com/nf-core/test-datasets/ampliseq/samplesheets/Metadata.tsv -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/1a_S103_L001_R1_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/1a_S103_L001_R2_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/1_S103_L001_R1_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/1_S103_L001_R2_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/2a_S115_L001_R1_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/2a_S115_L001_R2_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/2_S115_L001_R1_001.fastq.gz -P ampliseq_test_data wget https://github.com/nf-core/test-datasets/raw/ampliseq/testdata/2_S115_L001_R2_001.fastq.gz -P ampliseq_test_data Make SampleSheet.tsv containing: sampleID forwardReads reverseReads sampleID_1a ampliseq_test_data/1a_S103_L001_R1_001.fastq.gz ampliseq_test_data/1a_S103_L001_R2_001.fastq.gz sampleID_1 ampliseq_test_data/1_S103_L001_R1_001.fastq.gz ampliseq_test_data/1_S103_L001_R2_001.fastq.gz sampleID_2a ampliseq_test_data/2a_S115_L001_R1_001.fastq.gz ampliseq_test_data/2a_S115_L001_R2_001.fastq.gz sampleID_2 ampliseq_test_data/2_S115_L001_R1_001.fastq.gz ampliseq_test_data/2_S115_L001_R2_001.fastq.gz • On software node, download silva reference databases mkdir -p nf-ampliseq-taxonomy wget -O nf-ampliseq-taxonomy/silva_nr99_v138.2_toSpecies_trainset.fa.gz \ https://zenodo.org/records/14169026/files/silva_nr99_v138.2_toSpecies_trainset.fa.gz wget -O nf-ampliseq-taxonomy/silva_v138.2_assignSpecies.fa.gz \ https://zenodo.org/records/14169026/files/silva_v138.2_assignSpecies.fa.gz • On submission node, create hpc_custom.config // hpc_custom.config process { executor = ‘slurm’ queue = ‘medium’ } • Finally, build SLURM submission script #!/bin/bash #SBATCH --job-name=nf-core-ampliseq #SBATCH --partition=medium #SBATCH --mem=4G #SBATCH --output=nfcore_ampliseq.out #SBATCH --error=nfcore_ampliseq.err export PATH=$PWD:$PATH # run “nextflow” instead of “./nextflow” export NXF_OFFLINE=true # HPC submission node is not internet connected export NXF_SINGULARITY_CACHEDIR=~/nextflow/ampliseq_pipeline/nf-core-ampliseq_2.14.0/singularity-images # Tell nextflow where to find ampliseq images export SINGULARITY_CACHEDIR=~/nextflow/ampliseq_pipeline/nf-core-ampliseq_2.14.0/singularity-images # Tell singularity where to find ampliseq images (NXF_SINGULARITY_CACHEDIR didn’t seem to do anything) # Activate nextflow_env containing openjdk 17.0.15 eval “$(mamba shell hook --shell bash)” mamba activate nextflow_env # NXF_OPTS here tries to get QIIME2 to inject environmental variables for QIIME2 directly into the singularity container NXF_OPTS=“-Denv.MPLCONFIGDIR=$PWD/mplconfigdir -Denv.NUMBA_CACHE_DIR=$PWD/numbaccache -Denv.XDG_CONFIG_HOME=$PWD/xdgconfig -Denv.TMPDIR=$PWD/tmp” \ nextflow run ./nf-core-ampliseq_2.14.0/2_14_0/ \ -profile singularity \ -c hpc_custom.config \ --input Samplesheet.tsv \ --metadata ./ampliseq_test_data/Metadata.tsv \ --FW_primer GTGYCAGCMGCCGCGGTAA \ --RV_primer GGACTACNVGGGTWTCTAAT \ --dada_ref_tax_custom ./nf-ampliseq-taxonomy/silva_nr99_v138.2_toSpecies_trainset.fa.gz \ --dada_ref_tax_custom_sp ./nf-ampliseq-taxonomy/silva_v138.2_assignSpecies.fa.gz \ --exclude_taxa “none” \ --outdir results/ • nf-core output error message: executor > slurm (39) [32/8e0e33] NFC…AW_DATA_FILES (sampleID_2) | 4 of 4 ✔️ [b7/9e8def] NFC…LISEQ:FASTQC (sampleID_2a) | 4 of 4 ✔️ [30/e63868] NFC…UTADAPT_BASIC (sampleID_2) | 4 of 4 ✔️ [a4/40c9af] NFC…RY_STD (cutadapt_standard) | 1 of 1 ✔️ [a4/dd34f0] NFC…dapt_standard_summary.tsv) | 1 of 1 ✔️ [88/869759] NFC…ESSING:DADA2_QUALITY1 (FW) | 2 of 2 ✔️ [ef/e689fc] NFC…REPROCESSING:TRUNCLEN (FW) | 2 of 2 ✔️ [9a/9ea6aa] NFC…DA2_FILTNTRIM (sampleID_1) | 4 of 4 ✔️ [43/7de63d] NFC…ESSING:DADA2_QUALITY2 (RV) | 2 of 2 ✔️ [f2/6e6452] NFC…SEQAMPLISEQDADA2_ERR (1) | 1 of 1 ✔️ [61/66437a] NFC…PLISEQ:DADA2_DENOISING (1) | 1 of 1 ✔️ [16/5b977c] NFC…PLISEQ:DADA2_RMCHIMERA (1) | 1 of 1 ✔️ [d7/b9eb61] NFC…QAMPLISEQDADA2_STATS (1) | 1 of 1 ✔️ [f2/5c36c2] NFC…LISEQAMPLISEQDADA2_MERGE | 1 of 1 ✔️ [07/532b57] NFC…PLISEQ:MERGE_STATS_STD (1) | 1 of 1 ✔️ [c6/3fe79a] NFC…Q:BARRNAP (ASV_seqs.fasta) | 1 of 1 ✔️ [c2/13a145] NFC…EQAMPLISEQBARRNAPSUMMARY | 1 of 1 ✔️ [31/b25b9a] NFC…_toSpecies_trainset.fa.gz) | 0 of 1 [- ] NFC…XONOMY_WF:DADA2_ADDSPECIES - [f4/880012] NFC…IME2_INASV (ASV_table.tsv) | 1 of 1 ✔️ [e6/b2f557] NFC…ME2_INSEQ (ASV_seqs.fasta) | 1 of 1 ✔️ [e1/a7386c] NFC…XPORT:QIIME2_EXPORT_RELASV | 0 of 1 [1c/fdde37] NFC…ETADATA_ALL (Metadata.tsv) | 1 of 1 ✔️ [c4/589fa8] NFC…TA_PAIRWISE (Metadata.tsv) | 1 of 1 ✔️ [63/793d2e] NFC…IME2_DIVERSITY:QIIME2_TREE | 0 of 1 ✘ Plus 17 more processes waiting for tasks… ERROR ~ Error executing process > ‘NFCORE_AMPLISEQAMPLISEQQIIME2_EXPORT:QIIME2_EXPORT_RELASV’ Caused by: Process
    NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV
    terminated with an error exit status (1) Command executed: export XDG_CONFIG_HOME=“./xdgconfig” export MPLCONFIGDIR=“./mplconfigdir” export NUMBA_CACHE_DIR=“./numbacache” #convert to relative abundances qiime feature-table relative-frequency \ --i-table table.qza \ --o-relative-frequency-table relative-table-ASV.qza #export to biom qiime tools export \ --input-path relative-table-ASV.qza \ --output-path relative-table-ASV #convert to tab separated text file “rel-table-ASV.tsv” biom convert \ -i relative-table-ASV/feature-table.biom \ -o rel-table-ASV.tsv --to-tsv cat <<-END_VERSIONS > versions.yml “NFCORE_AMPLISEQAMPLISEQQIIME2_EXPORTQIIME2 EXPORT RELASV” qiime2: $( qiime --version | sed ‘1!d;s/.* //’ ) END_VERSIONS Command exit status: 1 Command output: (empty) Command error: QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment. Matplotlib created a temporary cache directory at /tmp/matplotlib-tnkujqlh because the default path (mplconfigdir) is not a writable directory; it is highly recommended > Usage: qiime feature-table relative-frequency [OPTIONS] Convert frequencies to relative frequencies by dividing each frequency in a sample by the sum of frequencies in that sample. Inputs: --i-table ARTIFACT FeatureTable[Frequency] The feature table to be converted into relative frequencies. [required] Outputs: --o-relative-frequency-table ARTIFACT FeatureTable[RelativeFrequency] The resulting relative frequency feature table. [required] Miscellaneous: --output-dir PATH Output unspecified results to a directory --verbose / --quiet Display verbose output to stdout and/or stderr during execution of this action. Or silence output if execution is successful (silence is golden). --example-data PATH Write example data and exit. --citations Show citations and exit. --use-cache DIRECTORY Specify the cache to be used for the intermediate work of this action. If not provided, the default cache under $TMP/qiime2/<uname> will be used. IMPORTANT FOR HPC USERS: If you are on an HPC system and are using parallel execution it is important to set this to a location that is globally accessible to all nodes in the cluster. --help Show this message and exit. There was a problem with the command: (1/1) Invalid value for ‘--o-relative-frequency-table’: ‘’ is not a writable directory, cannot write output to it. Work dir: /hpc-home/jowillia/nextflow/ampliseq_pipeline/work/e1/a7386c0c95e8ade59979cd258a0e82 Container: /hpc-home/jowillia/nextflow/ampliseq_pipeline/nf-core-ampliseq_2.14.0/singularity-images/quay.io-qiime2-amplicon-2024.10.img Tip: view the complete command output by changing to the process work dir and entering the command
    cat .command.out
    -- Check ‘.nextflow.log’ file for details • nextflow.log details Jul-19 173837.373 [TaskFinalizer-7] DEBUG nextflow.processor.TaskProcessor - Handling unexpected condition for task: name=NFCORE_AMPLISEQAMPLISEQQIIME2_EXPORT:QIIME2_EXPORT_RELASV; work-dir=/hpc-home/jowillia/nextflow/ampliseq_pipeline/work/e1/a7386c0c95e8ade59979cd258a0e82 error [nextflow.exception.ProcessFailedException]: Process
    NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV
    terminated with an error exit status (1) Jul-19 173837.377 [TaskFinalizer-8] DEBUG nextflow.processor.TaskProcessor - Handling unexpected condition for task: name=NFCORE_AMPLISEQAMPLISEQQIIME2_DIVERSITY:QIIME2_TREE; work-dir=/hpc-home/jowillia/nextflow/ampliseq_pipeline/work/63/793d2edf0a5bef30b22d590957a0e4 error [nextflow.exception.ProcessFailedException]: Process
    NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_DIVERSITY:QIIME2_TREE
    terminated with an error exit status (1) Jul-19 173837.441 [TaskFinalizer-7] ERROR nextflow.processor.TaskProcessor - Error executing process > ‘NFCORE_AMPLISEQAMPLISEQQIIME2_EXPORT:QIIME2_EXPORT_RELASV’ Tip: you can try to figure out what’s wrong by changing to the process work dir and showing the script file named
    .command.sh
    Then just the details from above in the .out file. Apologies for the long message. I’ve been troubleshooting for 3 days and I’m close to giving up. I would be incredibly grateful if anyone had experienced these issues before and could shed some light on it. Many thanks, Josh
    p
    s
    • 3
    • 7
  • e

    Eric Samorodnitsky

    07/20/2025, 8:23 PM
    Hi all, I am a research scientist at The Ohio State University working on cancer genomics. I'd like to join the nf-core community. Can someone help me get in? 🙂
    p
    t
    • 3
    • 20
  • h

    Haidong Yi

    07/21/2025, 2:44 AM
    Hi everyone, can anyone help with this PR (https://github.com/nf-core/modules/pull/8718)? The error encountered in the ci/cd is very strange as its working well in
    conda
    and
    singularity
    environment but having errors in the docker environment. The testing is working well without errors in my macbook pro (m-chip) in the docker environment as well. Only in the
    x86_64
    environment, it gives a
    95
    exitcode even though the outputs from the program are normal. I also tried compiling the C program using the source code. It didn't give any errors like the docker environment. So, I think this error is related to docker exec environment for this program. I don't know how to fix it.
  • n

    Nour El Houda Barhoumi

    07/21/2025, 9:22 AM
    hello , I hope you are doing well, I’m currently performing a combined analysis of RNA-seq and ChIP-seq data in a bacterial genome. I have a few questions regarding alignment quality: 1. For quality control, is it recommended to keep only uniquely mapped reads based on a certain MAPQ threshold when working with bacterial genomes? 2. If I use
    samtools view -q XX
    to filter my BAM files, does it break the pairing information in paired-end data? Thank you
    t
    • 2
    • 3
  • w

    Willram Scholz

    07/21/2025, 9:44 AM
    Hi everyone, I'm a student research assistant working at the German Cancer Research Center in Heidelberg. I'm currently trying to get the nf-core/methylseq pipeline to work on our cluster, but I'm running into a memory limit error (TERM_MEMLIMIT: job killed after reaching LSF memory usage limit. Exited with exit code 130.) and I hope to find help here. I’m using nextflow release 24.10.2 and running the methylseq pipeline version -r 3.0.0. Apart from the params my config file looks like this:
    Copy code
    singularity {
       enabled = true
       autoMounts = true
    }
     
    process {
       resourceLimits = [
           memory: 500.GB,
           cpus: 30,
           time: 128.h
       ]
       executor = 'lsf'
       scratch = '[path_to_my_scratch_folder]/$LSB_JOBID'
    }
     
    executor {
       name = 'lsf'
       perTaskReserve = false
       perJobMemLimit = true
       queueSize = 10
       submitRateLimit = '3 sec'
    }
    However, when I try to run the pipeline, the session is aborted because the process ‘NFCORE_METHYLSEQMETHYLSEQFASTQ_ALIGN_DEDUP_BISMARK:BISMARK_ALIGN’ gets terminated with an error exit status (130). When looking at the individual command log files I find the following:
    Copy code
    TERM_MEMLIMIT: job killed after reaching LSF memory usage limit.
    Exited with exit code 130.
     
    Resource usage summary:
     
       CPU time :                                  1288250.00 sec.
       Max Memory :                                147456 MB
       Average Memory :                            92024.89 MB
       Total Requested Memory :                    147456.00 MB
       Delta Memory :                              0.00 MB
       Max Swap :                                  -
       Max Processes :                             77
       Max Threads :                               150
       Run time :                                  59302 sec.
       Turnaround time :                           59305 sec.
    However, I can’t find any mention of a memory setting in the individual .command.sh files either:
    Copy code
    #!/usr/bin/env bash
     
    set -e # Exit if a tool returns a non-zero status/exit code
    set -u # Treat unset variables and parameters as an error
    set -o pipefail # Returns the status of the last command to exit with a non-zero status or zero if all successfully execute
    set -C # No clobber - prevent output redirection from overwriting files.
     
    bismark \
       -1 [filename_1].fq.gz -2 [filename_2].fq.gz \
       --genome BismarkIndex \
       --bam \
       --bowtie2    --unmapped --multicore 8
     
    cat <<-END_VERSIONS > versions.yml
    "NFCORE_METHYLSEQ:METHYLSEQ:FASTQ_ALIGN_DEDUP_BISMARK:BISMARK_ALIGN":
       bismark: $(echo $(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*$//')
    END_VERSIONS
    So, TLTR, I set the memory limit in my config file to 500 GB, but the Bismark alignment terminates after reaching 147456 MB, and I don’t know how to fix this issue. I previously increased the memory limit in the config file from 250 GB to 500 GB but that didn’t change anything. Can anybody help me with this problem? Many thanks, Will
    n
    t
    • 3
    • 6
  • l

    Louis Le Nézet

    07/21/2025, 1:04 PM
    Does anyone know why I'm getting the following error with dragen for
    NARFMAP_ALIGN
    on github action ? https://github.com/nf-core/modules/actions/runs/16416474545/job/46383642317?pr=8724 I've updated the container from:
    Copy code
    - '<oras://community.wave.seqera.io/library/narfmap_align:8bad41386eab9997>':
     - '<http://community.wave.seqera.io/library/narfmap_align:517a1fed8e4e84c1|community.wave.seqera.io/library/narfmap_align:517a1fed8e4e84c1>' }"
     + '<https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/a9/a9634de8646d72c54319cc5683949929af4b38e2245b4bc8d28a6666a1f702d6/data>':
     + '<http://community.wave.seqera.io/library/narfmap_samtools_pigz:77d0682b7dae0251|community.wave.seqera.io/library/narfmap_samtools_pigz:77d0682b7dae0251>' }"
    And I get the following error
    Copy code
    >   Reading reference input file GRCh38_chr21.fa: 47377349 bytes...
        >   -07-21 12:08:45 	[7f9cbfa44740]	Version: 1.4.2
        >   2025-07-21 12:08:45 	[7f9cbfa44740]	argc: 9 argv: dragen-os --build-hash-table true --ht-reference GRCh38_chr21.fa --output-directory narfmap --ht-num-threads 2
        >   Supressing decoys
        >   
        >   Total: 1 sequence, 46709983 bases (42018816 after trimming/padding)
        >   
        >   Spawning 1 threads build STR table...
        >   Encoding binary reference sequence...
        >     1 sequence, 46709983 bases (42018816 after trimming/padding)
        >   .command.sh: line 8:    33 Killed                  dragen-os --build-hash-table true --ht-reference GRCh38_chr21.fa --output-directory narfmap --ht-num-threads 2
    m
    • 2
    • 1
  • n

    Nick Eckersley

    07/21/2025, 6:53 PM
    Hi, this might be a dumb question but how can I check if this is doing anything?:
    Copy code
    executor >  slurm (429)
    [-        ] NFC…PREPROCESSING:NANOPLOT_RAW -
    [-        ] NFC…:MAG:CENTRIFUGE_CENTRIFUGE -
    [-        ] NFC…MAG:MAG:CENTRIFUGE_KREPORT -
    [-        ] NFCORE_MAG:MAG:KRAKEN2         -
    [bf/7560b6] NFC…(ERZ24813857-contig.fa.gz) | 4 of 4, cached: 4 ✔
    [5d/8e6266] NFC…AG:MAG:QUAST (SPAdes-N075) | 4 of 4, cached: 4 ✔
    [fa/97da1d] NFCORE_MAG:MAG:PRODIGAL (N072) | 4 of 4, cached: 4 ✔
    [c1/5de39e] NFC…SEMBLY_BUILD (SPAdes-N072) | 4 of 4, cached: 4 ✔
    [0b/876070] NFC…Y_ALIGN (SPAdes-N072-N074) | 16 of 16, cached: 16 ✔
    [d0/6423a9] NFC…RIZEBAMCONTIGDEPTHS (N072) | 4 of 4, cached: 4 ✔
    [e9/bc7d2b] NFC…NING:CONVERT_DEPTHS (N072) | 4 of 4, cached: 4 ✔
    [8a/083b3f] NFC…G:METABAT2_METABAT2 (N073) | 4 of 4, cached: 4 ✔
    [64/a9a862] NFC…MAG:BINNING:MAXBIN2 (N072) | 4 of 4, cached: 4 ✔
    [93/af97fd] NFC…_MAXBIN2_EXT (SPAdes-N072) | 4 of 4, cached: 4 ✔
    [19/8566af] NFC…INNING:SEQKIT_STATS (N072) | 4 of 4, cached: 4 ✔
    [56/187e2b] NFC…ASTA (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
    [56/cfe9b7] NFC…es-MaxBin2-N072.082.fa.gz) | 428 of 428, cached: 428 ✔
    [-        ] NFC…:MAG:BINNING:GUNZIP_UNBINS -
    [46/c9275a] NFC…PTHS (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
    [31/929d7b] NFC…PLOT (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
    [fe/e4fac3] NFC…:DEPTHS:MAG_DEPTHS_SUMMARY | 1 of 1, cached: 1 ✔
    [a0/1c3e67] NFC…:BIN_QC:BUSCO_BUSCO (N072) | 8 of 8, cached: 8 ✔
    [8c/4d6059] NFC…C:CONCAT_BINQC_TSV (busco) | 1 of 1, cached: 1 ✔
    [c9/d1325d] NFC…classified-unrefined-N072) | 8 of 8, cached: 8 ✔
    [f6/d2e3a2] NFC…MAG:MAG:QUAST_BINS_SUMMARY | 1 of 1, cached: 1 ✔
    [b4/9836a4] NFC… (SPAdes-MetaBAT2-N072.26) | 428 of 428, retries: 1 ✔
    Plus 7 more processes waiting for tasks…
    
    (base) neckersl@gruffalo:~/scratch/private/nfcore_mag/logs$ squeue -u $USER
                 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               2837979      long nfcore_m neckersl  R 1-10:13:03      1 n24-64-384-giles
    Thanks
    n
    t
    • 3
    • 3