# help

    François-Xavier Stubbe

    08/21/2025, 12:54 PM
    Hey! I'm trying to dynamically set a scale_factor for deeptools_bamcoverage. Failing so far. Has anyone achieved this?

    James Fellows Yates

    08/22/2025, 12:13 PM
Is it still recommended/necessary to run `.first()` after mixing module versions into `ch_versions`? Given `.unique` is run prior to passing to MultiQC, is there any overhead benefit to taking just the `versions.yml` from the first module invocation vs passing all `versions.yml` files and running unique?
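For context, the two patterns under discussion might look like this (a sketch; `FASTQC` is just an example module, and the final `.unique()` mirrors what the dedupe before MultiQC does):

```nextflow
// Pattern 1: take only the versions.yml from the first task of each process
ch_versions = ch_versions.mix( FASTQC.out.versions.first() )

// Pattern 2: mix in every task's versions.yml and deduplicate once at the end
ch_versions = ch_versions.mix( FASTQC.out.versions )
ch_versions
    .unique()
    .collectFile(name: 'collated_versions.yml')
```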

    Sylvia Li

    08/22/2025, 6:26 PM
My workflow gets hung up on a subworkflow's .view() call. view() works fine outside of it. Here is how I am calling the subworkflow:
    def longpac_longpolish = SAMPLESHEETFILTERING.out.list_longpac_longPolish
    def flattened_result = longpac_longpolish
        .filter { value -> value instanceof List && !value.isEmpty() }
        .flatMap()
    flattened_result.view()
    PACBIO_SUBWORKFLOW(flattened_result)
It views() fine, emitting [[id:Sample1, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA] [[id:Sample2, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA], but when I pass it to the subworkflow:
    workflow PACBIO_SUBWORKFLOW {
    
        take:
        ch_input_full // channel: [ val(meta), files/data, files/data, files/data..etc ]
        // bam_file
        // polish
        // gambitdb
        // krakendb
    
        main:
        def ch_output = Channel.empty()
        def ch_versions = Channel.empty()
        println("hello")
        ch_input_full.view()
It just prints hello and gets hung up; it never seems to print the channel values, it just sits there. I don't understand why. My nextflow.log also says all processes finished, all barriers passed:
    Aug-22 13:23:17.907 [main] DEBUG nextflow.script.ScriptRunner - > Awaiting termination 
    Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await
    Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await > all processes finished
    Aug-22 13:23:17.908 [main] DEBUG nextflow.Session - Session await > all barriers passed
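A minimal sanity check may help isolate where things stall (a sketch with made-up values): a bare subworkflow that only views its input should print values immediately; if this also hangs, the problem is in how the input channel is produced rather than in the subworkflow itself:

```nextflow
workflow DEBUG_SUBWORKFLOW {
    take:
    ch_in

    main:
    // Print each element as it arrives in the subworkflow
    ch_in.view { "DEBUG_SUBWORKFLOW received: ${it}" }
}

workflow {
    DEBUG_SUBWORKFLOW( Channel.of(1, 2, 3) )
}
```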

    Juan E. Arango Ossa

    08/22/2025, 6:33 PM
As we know, process names get truncated with ANSI output. I know I can get full names if I use `-ansi-log false`, but I do want the ANSI output so I have the latest colored output. I saw in this issue @Phil Ewels was suggesting something with full names, as in the pic. Was this implemented? Can I get something like that with ANSI logs and the full process name, or at least a longer one? As it is, it's still very challenging to read.

    Sylvia Li

    08/22/2025, 10:38 PM
If I have 2 channels from channel factories, `ch_1 = Channel.of(1,2,3)` and `ch_2 = Channel.of(4,5,6)`, and I input them into a subworkflow together as `subworkflow(ch_1, ch_2)`, will they always emit in order? That is, will the first value of ch_1 always be paired with the first value of ch_2, and so on?
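For reference, positional pairing of two independent queue channels is generally discouraged, because emission order across channels is no longer guaranteed once operators or processes sit in between. The usual recommendation is to pair items on an explicit key with `join`. A sketch (sample IDs are made up):

```nextflow
ch_a = Channel.of( [ 'sample1', 'A1' ], [ 'sample2', 'A2' ] )
ch_b = Channel.of( [ 'sample2', 'B2' ], [ 'sample1', 'B1' ] )

// join matches on the first element, regardless of emission order,
// yielding [sample1, A1, B1] and [sample2, A2, B2]
ch_a.join( ch_b ).view()
```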

    Nour El Houda Barhoumi

    08/23/2025, 3:02 PM
    Hello, I hope you are doing well. In my BAM file, I noticed that some reads with the same QNAME appear multiple times but have different MAPQ values (for example, one is 42 and the other is 0), and they also have different flags. If I remove the reads with MAPQ = 0, will this risk producing an unbalanced BAM file? Thank you.

    yanzi L.

    08/26/2025, 10:59 PM
I need help running nf-core/phaseimpute. I ran the test profile and got this error; this used to work for me.

    Fredrick

    08/28/2025, 2:07 AM
I need help with GATK4. With this workflow setup, GATK4_HAPLOTYPECALLER only runs on a single sample. Am I doing something wrong? I'm attempting a little gymnastics here, since some of the GATK4 subtools take references with metadata while others don't. I realise there is a lot of joining and splitting involved as-is, but bear with me...
    FASTQ_ALIGN_BWA (
        ch_samplesheet,                                  // channel input reads: [ val(meta), [ path(reads) ] ]
            PREPARE_REFERENCE_INDEXES.out.bwa_index,         // channel BWA index: [ val(meta2), path(index) ]
            true,                                            // boolean value: true/false for sorting BAM files
            fasta,                                           // channel reference fasta: [ val(meta3), path(fasta) ]
        )
        ch_versions = ch_versions.mix( FASTQ_ALIGN_BWA.out.versions.first() )
    
        ch_bam_bai = FASTQ_ALIGN_BWA.out.bam.join( FASTQ_ALIGN_BWA.out.bai, by: 0)
    
        // Extract BAM and BAI channels from joined input
        ch_bam = ch_bam_bai.map { meta, bam, bai -> [meta, bam] }
        ch_bai = ch_bam_bai.map { meta, bam, bai -> [meta, bai] }
    
        /*
        MODULE: GATK4_ADDORREPLACEREADGROUPS
        */
        GATK4_ADDORREPLACEREADGROUPS (
            ch_bam,
            fasta,
            fasta_fai
        )
        ch_versions = ch_versions.mix(GATK4_ADDORREPLACEREADGROUPS.out.versions.first())
    
        /*
        MODULE: GATK4_MARKDUPLICATES
        */
    
    // DEBUG SANITY CHECKS: Create view for debugging
        // GATK4_ADDORREPLACEREADGROUPS.out.bam.view { "GATK4_MARKDUPLICATES input BAM: $it" }
        // fasta.map{ meta, fasta -> fasta }.view { "GATK4_MARKDUPLICATES input FASTA: $it" }
        // fasta_fai.map{ meta, fai -> fai }.view { "GATK4_MARKDUPLICATES input FASTA_FAI: $it" }
        
        GATK4_MARKDUPLICATES (
            GATK4_ADDORREPLACEREADGROUPS.out.bam,
            fasta.map{ meta, fasta -> fasta },
            fasta_fai.map{ meta, fai -> fai}
        )
        ch_versions = ch_versions.mix(GATK4_MARKDUPLICATES.out.versions.first())
    
        /*
        MODULE: GATK4_CALIBRATEDRAGSTRMODEL
        */
    
    // DEBUG SANITY CHECKS: create view for debugging
        // GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai).view { "GATK4_CALIBRATEDRAGSTRMODEL input BAM+BAI: $it" }
        // fasta.map{ meta, fasta -> fasta }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA: $it" }
        // fasta_fai.map{ meta, fai -> fai }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA_FAI: $it" }
        // genome_dict.view { "GATK4_CALIBRATEDRAGSTRMODEL input GENOME_DICT: $it" }
        // str_table.view { "GATK4_CALIBRATEDRAGSTRMODEL input STR_TABLE: $it" }
    
        GATK4_CALIBRATEDRAGSTRMODEL (
            GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai),
            fasta.map{ meta, fasta -> fasta },
            fasta_fai.map{ meta, fai -> fai },
            genome_dict.map{ meta, dict -> dict },
            str_table
        )
        ch_versions = ch_versions.mix(GATK4_CALIBRATEDRAGSTRMODEL.out.versions.first())
        /*
        MODULE: GATK4_HAPLOTYPECALLER
        Expected input:
            tuple val(meta), path(input), path(input_index), path(intervals), path(dragstr_model)
            tuple val(meta2), path(fasta)
            tuple val(meta3), path(fai)
            tuple val(meta4), path(dict)
            tuple val(meta5), path(dbsnp)
            tuple val(meta6), path(dbsnp_tbi)
        */
    GATK4_MARKDUPLICATES.out.bam
        .join(GATK4_MARKDUPLICATES.out.bai, by: 0, failOnMismatch: true)
        .join(GATK4_CALIBRATEDRAGSTRMODEL.out.dragstr_model, by: 0, failOnMismatch: true)
        .combine(bed)
        .map { meta, bam, bai, model, bed -> [meta, bam, bai, bed, model] }
        .set { ch_gatk_haplo_input }
    
        // ch_gatk_haplo_input.view() { "GATK4_HAPLOTYPECALLER INPUT: $it" }
        GATK4_HAPLOTYPECALLER (
            ch_gatk_haplo_input,
            fasta,
            fasta_fai,
            genome_dict,
            dbsnp.map { meta, vcf -> [meta, vcf] },
            dbsnp_tbi.map { tbi -> ["dbsnp_tbi", tbi] }
        )
        ch_versions = ch_versions.mix(GATK4_HAPLOTYPECALLER.out.versions.first())
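One thing that commonly makes a downstream step run for a single sample only is feeding reference files as queue channels: a queue channel's single element is consumed by the first task, so only one sample gets paired with it. If `fasta`, `fasta_fai`, `genome_dict`, `dbsnp`, etc. here are queue channels, converting them to value channels (e.g. with `.collect()` or `.first()`) lets every sample reuse them. A hedged sketch (channel names assumed from the snippet above):

```nextflow
// Value channels can be read any number of times, so each element of
// ch_gatk_haplo_input gets paired with the same reference files.
ch_fasta = fasta.collect()        // or: fasta.first()
ch_fai   = fasta_fai.collect()
ch_dict  = genome_dict.collect()

GATK4_HAPLOTYPECALLER (
    ch_gatk_haplo_input,
    ch_fasta,
    ch_fai,
    ch_dict,
    ch_dbsnp,
    ch_dbsnp_tbi
)
```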

    Richard Francis

    08/28/2025, 4:14 PM
    Originally posted in #C07MFUQAR1B but reposted here in case anyone can provide assistance. Many thanks in advance.

    Thiseas C. Lamnidis

    08/29/2025, 9:29 AM
Hi all! I am trying to add a `log.message` on successful pipeline completion (in addition to the standard "Pipeline completed successfully" one). At first I tried adding it to `subworkflows/nf-core/utils_nfcore_pipeline/main.nf`, but doing so breaks linting because the file differs from the remote. I could ignore this check in `.nf-core.yml`, but that seems dangerous, as it is generally a good idea to keep those important core functions checked, imo. So I made my own copy of the `completionSummary` function, which I added directly within `subworkflows/local/utils_nfcore_eager_pipeline/main.nf`. It looks like this:
    def easterEgg(monochrome_logs) {
        def colors = logColours(monochrome_logs) as Map
        if (workflow.stats.ignoredCount == 0) {
            if (workflow.success) {
                // <https://en.wiktionary.org/wiki/jw.f_pw>
                log.info("-${colors.green}𓂻 𓅱 𓆑 𓊪 𓅱${colors.reset}-")
            }
        }
    }
Here’s the code from the `completionSummary` function, for reference:
    def completionSummary(monochrome_logs=true) {
        def colors = logColours(monochrome_logs) as Map
        if (workflow.success) {
            if (workflow.stats.ignoredCount == 0) {
                log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Pipeline completed successfully${colors.reset}-")
            }
            else {
                log.info("-${colors.purple}[${workflow.manifest.name}]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-")
            }
        }
        else {
            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.red} Pipeline completed with errors${colors.reset}-")
        }
    }
I then call my `easterEgg` function within `PIPELINE_COMPLETION`, directly after `completionSummary`, like so:
    workflow PIPELINE_COMPLETION {
        [...]
        workflow.onComplete {
            [...]
            completionSummary(monochrome_logs)
            easterEgg(monochrome_logs)
            [...]
        }
    }
Considering it is essentially a copy of `completionSummary`, I would expect this to work, but instead I get this error:
    -[nf-core/eager] Pipeline completed successfully-
    ERROR ~ Failed to invoke `workflow.onComplete` event handler
    
     -- Check script './workflows/../subworkflows/local/../../subworkflows/local/utils_nfcore_eager_pipeline/main.nf' at line: 190 or see '.nextflow.log' file for more details
It seems I cannot access the `workflow` object to check its `.success` or `.stats.ignoredCount` attributes. The error stays the same when I flip the order of the checks, so it seems I cannot access the `workflow` object at all. Any ideas what is going on here? This is rather unintuitive.
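One possible workaround (an untested sketch): rather than referencing the global `workflow` object inside the helper function, pass the values it needs as arguments from inside the `onComplete` handler, where `workflow` is clearly in scope:

```nextflow
def easterEgg(monochrome_logs, success, ignoredCount) {
    def colors = logColours(monochrome_logs) as Map
    if (success && ignoredCount == 0) {
        log.info("-${colors.green}𓂻 𓅱 𓆑 𓊪 𓅱${colors.reset}-")
    }
}

// inside PIPELINE_COMPLETION:
workflow.onComplete {
    completionSummary(monochrome_logs)
    easterEgg(monochrome_logs, workflow.success, workflow.stats.ignoredCount)
}
```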

    Sam Sims

    08/29/2025, 11:09 AM
Hi all! I am attempting to get nf-test snapshot testing working with GitHub Actions, but I am running into some issues with filepaths that seem to be causing the snapshot to fail. Locally, nf-test seems to resolve a relative path to the output file, and that relative path is saved in the snapshot (which is what I should expect, I think?). However, when I run this in GH Actions I get a full filepath that just points to a work directory by the looks of things (6b5fb1e4015fc9f93a37a33a917222c3), which I assume is causing the snapshot to fail, e.g.:
    java.lang.RuntimeException: Different Snapshot:
      [													[
          {													    {
              "0": [												        "0": [
                  [												            [
                      "cchf_test",										                "cchf_test",
                      "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"			   |	                "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
                  ]												            ]
              ],												        ],
              "warning_ch": [											        "warning_ch": [
                  [												            [
                      "cchf_test",										                "cchf_test",
                      "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"			   |	                "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
                  ]												            ]
              ]												        ]
          },													    },
          "hcid.counts.csv:md5,c45ab01001988dc88e4469ae29a92448"						    "hcid.counts.csv:md5,c45ab01001988dc88e4469ae29a92448"
      ]													]											        ],
    In my test I am doing something like this
    assert snapshot(workflow.out, path("${outputDir}/cchf_test/qc/hcid.counts.csv")).match()
Interestingly, it seems in this example the `hcid.counts.csv` file works fine; it's just the outputs of `workflow.out` that seem to have this problem. I might be missing something obvious, but I have been stumped for a while trying to figure this out, and so thought I'd see if anyone had any ideas. Thanks 🙂
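One workaround sometimes used is to normalise the unstable parts before snapshotting, e.g. reducing absolute work-dir paths to bare file names so local and CI runs produce the same snapshot. An untested sketch (channel and variable names follow the example above):

```groovy
// Strip directories from any path-like strings before snapshotting,
// so absolute CI paths and relative local paths compare equal.
def stable = workflow.out.warning_ch.collect { entry ->
    entry.collect { v ->
        (v instanceof String && v.contains('/')) ? new File(v).name : v
    }
}
assert snapshot(stable, path("${outputDir}/cchf_test/qc/hcid.counts.csv")).match()
```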

    Cheyenne

    08/29/2025, 12:29 PM
    Does the nf-core/base docker image already include things like fastqc, samtools, etc. or do I need to add those separately to my dockerfile?

    karima

    09/01/2025, 1:03 PM
    Hi all! I am currently trying to run the nf-core/rnaseq test dataset as part of learning the pipeline. I am relatively new to Nextflow and nf-core workflows. While running the pipeline, I encountered the following error:
    ERROR ~ Error executing process > 'NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:FASTQC (RAP1_UNINDUCED_REP1)'
    Caused by:
    Process requirement exceeds available memory -- req: 15 GB; avail: 14.8 GB
My machine specifications are RAM: 14 GB and CPUs: 8. Configuration file:
    process {
        cpus   = 4
        memory = '12 GB'
        time   = '12h'
        withLabel:process_low    { cpus = 1; memory = '4 GB';  time = '2h' }
        withLabel:process_medium { cpus = 2; memory = '6 GB';  time = '4h' }
        withLabel:process_high   { cpus = 4; memory = '12 GB'; time = '10h' }
    }
Could you please advise on the best way to successfully run the test dataset?
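The 15 GB request likely comes from the pipeline's own process configuration (module-level `withName` selectors take precedence over `withLabel` overrides), so the custom labels above may not apply to FASTQC. On recent nf-core pipelines, one way to cap all requests to what the machine actually has is the `resourceLimits` directive in a custom config passed with `-c` (a sketch; adjust the numbers to your machine):

```
process {
    resourceLimits = [ cpus: 8, memory: '13.GB', time: '24.h' ]
}
```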

    Fredrick

    09/03/2025, 5:03 AM
Hi everyone 👋 I’m looking to learn more about the pipelines used in pharmacogenomics analyses. If you work in this space, I’d love to hear what tools or workflows you rely on, especially for variant calling and interpretation. I'm particularly interested in panels that span across:
    • Pharmacogenes (e.g. CYP2D6)
    • Enzyme deficiencies (e.g. G6PD)
    • Primary immunodeficiencies (e.g. UBA1)
    • Hematologic disorders
Do you use nf-core pipelines, custom workflows, or something else entirely? What sequencing formats do you normally use (short-read and long-read)? Thanks in advance for any insights and recommendations.

    Ugo Iannacchero

    09/03/2025, 4:03 PM
    Hi, I was wondering when the weekly help-desk will come back for European Summer Time. Thanks!

    Cheyenne

    09/06/2025, 10:12 PM
    On macOS there seems to be some issue with the fastqc nf-core module not always posting the .exitcode file when it’s done running? Is there a good way to stabilize this? I’ve tried tuning the resources and also hardcoding exit 0 in the module script but it seems to be on the nextflow side? I am running with docker as the profile

    Ugo Iannacchero

    09/10/2025, 6:49 AM
Hi everyone, I was running an analysis with nf-core/sammyseq on paired-end data (I think this is the first PE run we have done with this pipeline) and I hit a crash at `PICARD_MARKDUPLICATES`.
    [... terminated with an error exit status (1) -- Execution is retried (1)
    ... retried (2)
    ERROR ~ Error executing process > '...:PICARD_MARKDUPLICATES (Tnaive_24h_act_repB_S4)'
    
    Command executed:
    picard -Xmx13107M MarkDuplicates \
      --ASSUME_SORTED true --REMOVE_DUPLICATES false --VALIDATION_STRINGENCY LENIENT --TMP_DIR tmp \
      --INPUT Tnaive_24h_act_repB_S4.bam \
      --OUTPUT Tnaive_24h_act_repB_S4.md.bam \
      --REFERENCE_SEQUENCE GRCh38.primary_assembly.genome.fa \
      --METRICS_FILE Tnaive_24h_act_repB_S4.md.MarkDuplicates.metrics.txt
    
    Command error (picard 3.3.0):
    Command error:
      /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
      05:49:06.233 INFO  NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-3.3.0-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
      [Wed Sep 10 05:49:06 GMT 2025] MarkDuplicates --INPUT Tnaive_24h_act_repB_S4.bam --OUTPUT Tnaive_24h_act_repB_S4.md.bam --METRICS_FILE Tnaive_24h_act_repB_S4.md.MarkDuplicates.metrics.txt --REMOVE_DUPLICATES false --ASSUME_SORTED true --TMP_DIR tmp --VALIDATION_STRINGENCY LENIENT --REFERENCE_SEQUENCE GRCh38.primary_assembly.genome.fa --MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP 50000 --MAX_FILE_HANDLES_FOR_READ_ENDS_MAP 8000 --SORTING_COLLECTION_SIZE_RATIO 0.25 --TAG_DUPLICATE_SET_MEMBERS false --REMOVE_SEQUENCING_DUPLICATES false --TAGGING_POLICY DontTag --CLEAR_DT true --DUPLEX_UMI false --FLOW_MODE false --FLOW_DUP_STRATEGY FLOW_QUALITY_SUM_STRATEGY --FLOW_USE_END_IN_UNPAIRED_READS false --FLOW_USE_UNPAIRED_CLIPPED_END false --FLOW_UNPAIRED_END_UNCERTAINTY 0 --FLOW_UNPAIRED_START_UNCERTAINTY 0 --FLOW_SKIP_FIRST_N_FLOWS 0 --FLOW_Q_IS_KNOWN_END false --FLOW_EFFECTIVE_QUALITY_THRESHOLD 15 --ADD_PG_TAG_TO_READS true --DUPLICATE_SCORING_STRATEGY SUM_OF_BASE_QUALITIES --PROGRAM_RECORD_ID MarkDuplicates --PROGRAM_GROUP_NAME MarkDuplicates --READ_NAME_REGEX <optimized capture of last three ':' separated fields as numeric values> --OPTICAL_DUPLICATE_PIXEL_DISTANCE 100 --MAX_OPTICAL_DUPLICATE_SET_SIZE 300000 --VERBOSITY INFO --QUIET false --COMPRESSION_LEVEL 5 --MAX_RECORDS_IN_RAM 500000 --CREATE_INDEX false --CREATE_MD5_FILE false --help false --version false --showHidden false --USE_JDK_DEFLATER false --USE_JDK_INFLATER false
      [Wed Sep 10 05:49:06 GMT 2025] Executing as root@fcada74b96e5 on Linux 3.10.0-1160.59.1.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 22.0.1-internal-adhoc.conda.src; Deflater: Intel; Inflater: Intel; Provider GCS is available; Picard version: Version:3.3.0
      INFO	2025-09-10 05:49:06	MarkDuplicates	Start of doWork freeMemory: 47987624; totalMemory: 58720256; maxMemory: 13748928512
      INFO	2025-09-10 05:49:06	MarkDuplicates	Reading input file and constructing read end information.
      INFO	2025-09-10 05:49:06	MarkDuplicates	Will retain up to 49814958 data points before spilling to disk.
      [Wed Sep 10 05:49:06 GMT 2025] picard.sam.markduplicates.MarkDuplicates done. Elapsed time: 0.01 minutes.
      Runtime.totalMemory()=713031680
      To get help, see <http://broadinstitute.github.io/picard/index.html#GettingHelp>
      Exception in thread "main" java.lang.NullPointerException: Cannot invoke "htsjdk.samtools.SAMReadGroupRecord.getReadGroupId()" because the return value of "htsjdk.samtools.SAMRecord.getReadGroup()" is null
      	at picard.sam.markduplicates.MarkDuplicates.buildSortedReadEndLists(MarkDuplicates.java:558)
      	at picard.sam.markduplicates.MarkDuplicates.doWork(MarkDuplicates.java:270)
      	at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:281)
      	at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:105)
      	at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:115)
    
    Work dir:
      /storage-daredevil/sammyseq_nfcore/Analisi/Linfociti/CD4/51_bp/work/1d/bcc4f0e42fbfd4ef35d277841bb40a
    
    Container:
      quay.io/biocontainers/picard:3.3.0--hdfd78af_0
    
    Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
    
     -- Check '.nextflow.log' file for details
    ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
    
     -- Check '.nextflow.log' file for details
I don’t have much experience with paired-end inputs, so I’d like to ask: does anyone recognize this type of error? Could it mean that the pipeline currently doesn’t handle PE data correctly, and the code therefore needs to be updated? Thanks in advance.
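For what it's worth, a `NullPointerException` from `SAMRecord.getReadGroup()` usually means the BAM contains reads with no read group assigned (no matching @RG header entry), rather than a paired-end problem per se. If that is what is happening here, adding read groups upstream, either at mapping time (e.g. `bwa mem -R`) or afterwards with Picard, typically resolves it. A hedged command sketch (the RG values are placeholders):

```
picard AddOrReplaceReadGroups \
  --INPUT Tnaive_24h_act_repB_S4.bam \
  --OUTPUT Tnaive_24h_act_repB_S4.rg.bam \
  --RGID repB_S4 --RGLB lib1 --RGPL ILLUMINA --RGPU unit1 --RGSM Tnaive_24h_act_repB_S4
```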

    Lis Arend

    09/10/2025, 7:25 AM
    Hi everyone, I have a question. I want to be able to annotate with VEP a single VCF file inside of my nextflow pipeline. I am quite new to nextflow, and am not sure whether to use the nf-core Sarek pipeline, the subworkflow vcf_annotate_ensemblvep or the module ensemblvep_vep. Can you provide me feedback when to use what?

    Benjamin Story

    09/10/2025, 2:17 PM
Any thoughts? I'm launching a bunch of the nf-core VEP module runs in parallel, as such:
`export NXF_HOME=/mnt/sample; export NXF_OPTS='-Xms4g -Xmx6g -XX:+UseG1GC'; cd $NXF_HOME; echo $PWD; nextflow run /mnt/HDD2/test/vep_module/main.nf -with-docker quay.io/biocontainers/ensembl-vep:111.0--pl5321h2a3209d_0 --my_id 'sample' --vcf '/mnt/HDD2/sample/merge.vcf.gz';`
I've been getting this intermittent Java crash since updating Java to version 17 a couple of weeks ago (late July), due to the requirements of Nextflow v25+. It worked before on Java 11 with zero crashes for over a year; all of this is on an Ubuntu server. I'm launching around 20 tasks in parallel and usually they all work (one crash occurred the day I updated Java), so I thought maybe it was due to updating Nextflow. Since then everything had been running smoothly (at least 4 runs of 20 samples each). Now today I got 2 crashes (the server RAM was heavily used), so I thought maybe it was that. I killed all processes and relaunched, but then a random different process failed. Any thoughts on the source of this? Maybe some OOM I'm not understanding. I dropped the number of parallel processes from 20 to 8, but it still happened. Any ideas?
    [2] "Downloading nextflow dependencies. It may require a few seconds, please wait .. \r\033[K"
     [3] " N E X T F L O W   ~  version 25.04.6"
     [4] ""
     [5] "Launching `/mnt/HDD2/test/vep_module/main.nf` [distraught_khorana] DSL2 - revision: 3a0ff5ed42"
     [6] ""
     [7] "#"
     [8] "# A fatal error has been detected by the Java Runtime Environment:"
     [9] "#"
    [10] "#  SIGSEGV (0xb) at pc=0x00007f7a8180e55a, pid=73649, tid=957"
    [11] "#"
    [12] "# JRE version: OpenJDK Runtime Environment (17.0.7+7) (build 17.0.7+7-Ubuntu-0ubuntu118.04)"
    [13] "# Java VM: OpenJDK 64-Bit Server VM (17.0.7+7-Ubuntu-0ubuntu118.04, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)"
    [14] "# Problematic frame:"
    [15] "# C  [ld-linux-x86-64.so.2+0x1d55a]"
    [16] "#"
    [17] "# Core dump will be written. Default location: Core dumps may be processed with \"/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E\" (or dumping to /mnt/HDD2/test/core.73649)"
    [18] "#"
    [19] "# An error report file with more information is saved as:"
    [20] "# /mnt/HDD2/test/hs_err_pid73649.log"
    [21] "#"
    [22] "# If you would like to submit a bug report, please visit:"
    [23] "#   Unknown"
    [24] "# The crash happened outside the Java Virtual Machine in native code."
    [25] "# See problematic frame for where to report the bug."
    [26] "#"

    Eva Gunawan

    09/12/2025, 4:45 PM
    Hi everyone, I'm struggling to be able to run 2 modules if they are in an if statement. Essentially I have this if statement:
    if (ch_no_ntc == "false") {
        CREATE_REPORT (
            stuff...
        )
    }
    if (ch_no_ntc == "true") {
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    When I view ch_no_ntc, it shows "true". But it seemingly skips the module regardless of meeting the if condition. I have even added the modules to another troubleshooting if statement where ch_no_ntc is being created:
    if (ch_kraken_ntc == "empty" && ch_ntc_check == "empty") {
        Channel.of("false")
            .set { ch_no_ntc }
        CREATE_REPORT (
            stuff...
        )
    } else {
        Channel.of("true")
            .set { ch_no_ntc }
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    For some reason, it still ends up being skipped regardless of where I put it. I can run both of the modules just fine outside of the if statements. Just as a test, I've tried using a param denoted in the nextflow.config. For example:
    if (params.ntc_present == "true") {
        CREATE_REPORT (
            stuff...
        )
    }
    if (params.ntc_present == "false") {
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    Both modules work if there is a param set like this, but I need to determine if it is present inside the workflow itself. Any suggestions/advice? Thanks in advance 😄
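One likely explanation: `ch_no_ntc` is a channel object, not a string, so `ch_no_ntc == "true"` evaluated at workflow-construction time is always false and neither process gets wired into the DAG (the params version works because `params.ntc_present` really is a string). A common pattern is to keep the flag as data and gate the module inputs with `branch`/`filter` instead of an `if`. A sketch (`ch_report_input` is a hypothetical stand-in for your real inputs):

```nextflow
// ch_no_ntc emits "true" or "false"; attach it to the inputs and split
// into two streams, so each module only receives matching items.
ch_report_input
    .combine( ch_no_ntc )
    .branch { meta, files, flag ->
        with_ntc: flag == 'false'
        no_ntc:   flag == 'true'
    }
    .set { ch_gated }

CREATE_REPORT        ( ch_gated.with_ntc.map { meta, files, flag -> [ meta, files ] } )
CREATE_REPORT_NO_NTC ( ch_gated.no_ntc.map   { meta, files, flag -> [ meta, files ] } )
```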

    Luis Heinzlmeier

    09/15/2025, 8:25 AM
Hello everyone, I hope this is the right place for my question. I am working on the hadge pipeline and would like to update the snapshots with `nf-test test tests/default.nf.test --profile +singularity --update-snapshot`. However, when I run nf-test in Codespaces, I get the following error message (I do not get this error when I run the pipeline locally):
    Sep-14 11:48:46.861 [Actor Thread 66] ERROR nextflow.extension.OperatorImpl - @unknown
    org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping
     in 'reader', line 2, column 5:
        echo mkdir -p failed for path /h ... 
        ^
    expected <block end>, but found '<scalar>'
     in 'reader', line 2, column 79:
       ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                         ^
    
        at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:654)
        at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:161)
        at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:57)
        at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:43)
        at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:136)
        at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:116)
        at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:291)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:216)
        at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
        at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
        at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
        at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
        at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
        at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
        at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:141)
        at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:167)
        at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
        at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:507)
        at org.yaml.snakeyaml.Yaml.load(Yaml.java:448)
        at nextflow.file.SlurperEx.load(SlurperEx.groovy:67)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at Script_5c4e8d4051efa81e.processVersionsFromYAML(Script_5c4e8d4051efa81e:82)
        at jdk.internal.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:343)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at Script_5c4e8d4051efa81e$_softwareVersionsToYAML_closure2.doCall(Script_5c4e8d4051efa81e:101)
        at jdk.internal.reflect.GeneratedMethodAccessor256.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at nextflow.extension.MapOp$_apply_closure1.doCall(MapOp.groovy:56)
        at jdk.internal.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at groovy.lang.Closure.call(Closure.java:433)
        at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
        at groovyx.gpars.dataflow.operator.DataflowOperatorActor.onMessage(DataflowOperatorActor.java:108)
        at groovyx.gpars.actor.impl.SDAClosure$1.call(SDAClosure.java:43)
        at groovyx.gpars.actor.AbstractLoopingActor.runEnhancedWithoutRepliesOnMessages(AbstractLoopingActor.java:293)
        at groovyx.gpars.actor.AbstractLoopingActor.access$400(AbstractLoopingActor.java:30)
        at groovyx.gpars.actor.AbstractLoopingActor$1.handleMessage(AbstractLoopingActor.java:93)
        at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:132)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)
    Sep-14 11:48:46.891 [Actor Thread 66] DEBUG nextflow.Session - Session aborted -- Cause: while parsing a block mapping
     in 'reader', line 2, column 5:
        echo mkdir -p failed for path /h ... 
        ^
    expected <block end>, but found '<scalar>'
     in 'reader', line 2, column 79:
       ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                         ^
    Copy code
    Test [5d0fca1c] '-profile test' Assertion failed: 
    
    assert workflow.success
        |    |
        workflow false
    
    FAILED (534.124s)
    
     Assertion failed: 
      
     1 of 2 assertions failed
      
     Nextflow stdout:
      
     ERROR ~ while parsing a block mapping
      in 'reader', line 2, column 5:
         echo mkdir -p failed for path /h ... 
         ^
     expected <block end>, but found '<scalar>'
      in 'reader', line 2, column 79:
        ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                          ^
      
      
      -- Check script '/workspaces/hadge/subworkflows/nf-core/utils_nfcore_pipeline/main.nf' at line: 82 or see '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for more details
     ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
      
      -- Check '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for details
     Nextflow stderr:
    m
    t
    • 3
    • 2
  • n

    Nadia Sanseverino

    09/15/2025, 2:48 PM
    Hi all! I'd like to ask for a clarification: I'm developing a new module and I've come to the testing and snapshot time. I'm currently running tests on my Asus VivoBook15 (if it makes any difference) and the commands
    nf-core modules test deeptools/bigwigcompare
    ,
    nf-core modules test deeptools/bigwigcompare --profile docker
    ,
    nf-core modules test deeptools/bigwigcompare --profile conda
    (all launched from the modules root dir) just hang indefinitely. But if I run
    nf-test test modules/nf-core/deeptools/bigwigcompare/tests/main.nf.test
    it works. It's just that on the tutorial page https://nf-co.re/docs/tutorials/tests_and_test_data/nf-test_comprehensive_guide there's a broken link for '3. Testing modules', and the Modules tutorial doesn't say much about nf-test, so I can't find a reason why the nf-core modules test command doesn't work. Thank you to anyone willing to look into this 😊
  • a

    Andries van Tonder

    09/15/2025, 3:06 PM
    I've been working on the new version of bactmap, which was rewritten from scratch using a new template. This means it's fundamentally changed from the original version and has a different commit history (I tried to create the dev/master PR). Could I get some help resolving this? I've done the steps on the pipeline release checklist (https://nf-co.re/docs/checklists/pipeline_release)
    m
    m
    • 3
    • 12
  • h

    Helen Huang

    09/15/2025, 9:04 PM
    Hi all! I would like to get some help with using Nextflow on an Ubuntu server with I/O directly from a SMB/CIFS-mounted NAS. The following was output from running the test profile for the atacseq pipeline. Here’s the error message printed to screen:
    *ERROR ~ Failed to publish file*: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- See log file for details
    Here’s the error in the log:
    DEBUG nextflow.processor.PublishDir - Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- attempt: 4; *reason: Input/output error*
    Ubuntu server: 18.04 LTS. Nextflow version: 25.04.6. SMB version: 3.0 (tried different versions, none worked). The NAS was mounted with noperm, which means everyone can write without permission checks, so it's not a permission issue. I understand NFS mounting might work better, but we have reasons to use SMB mounting. (The NAS is mounted on many different Windows and Mac systems as well, so NFS mounting would cause issues.) Thank you all! Our lab used to use pipelines we built ourselves, but now we want to move towards using nf-core pipelines.
    p
    • 2
    • 1
  • m

    Martin Rippin

    09/16/2025, 11:35 AM
    hi guys! I am trying to write a process that runs a tool to collect data from several analysis output directories containing the same files. I collected all the files necessary in one big tuple with subtuples looking something like:
    Copy code
    [ ['id1', 'id1/path/to/MetricsOutput.tsv', 'id1/path/to/RunCompletionStatus.xml'], ['id2', 'id2/path/to/MetricsOutput.tsv', 'id2/path/to/RunCompletionStatus.xml'], ... ]
    I am struggling to define the structure correctly inside the process. I tried something like:
    Copy code
    input:
    tuple tuple(val(id), path(tsv), path(xml))
    but that does not work. Also the files will be staged by their `basename`s, which will collide, and I don't know how to solve that either. Does anyone have an idea? I was thinking of just passing the root dir of all files and globbing inside the process, but maybe there is a more sophisticated way?
    m
    • 2
    • 3
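    [Editor's note] One possible pattern here (a sketch, untested against the actual channels; `ch_samples`, `COLLECT_METRICS`, and the `stageAs` patterns are assumptions) is to collect the per-sample tuples once, split them into parallel lists, and let a wildcard `stageAs` stage each file under its own numbered subdirectory so the identical basenames don't collide:
    Copy code
    // Assumes ch_samples emits one [id, tsv, xml] tuple per sample
    ch_collected = ch_samples
        .collect(flat: false)  // [[id1, tsv1, xml1], [id2, tsv2, xml2], ...]
        .map { rows ->
            [ rows.collect { it[0] },   // all ids
              rows.collect { it[1] },   // all MetricsOutput.tsv files
              rows.collect { it[2] } ]  // all RunCompletionStatus.xml files
        }

    process COLLECT_METRICS {
        input:
        // the '*' in stageAs is expanded per file (sample1/, sample2/, ...),
        // keeping the identical basenames apart -- check this against the
        // Nextflow docs for your version before relying on it
        tuple val(ids),
              path(tsvs, stageAs: 'sample*/MetricsOutput.tsv'),
              path(xmls, stageAs: 'sample*/RunCompletionStatus.xml')

        script:
        """
        my_tool ${tsvs} ${xmls}
        """
    }

    COLLECT_METRICS(ch_collected)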
  • h

    Hannes Kade

    09/16/2025, 5:47 PM
    hello! I'm new to nextflow and fairly new to bioinformatics in general, particularly using the command line. I'm trying to run a test of the mag pipeline, but I'm getting an error I don't understand how to resolve:
    ERROR ~ Validation of pipeline parameters failed! -- Check '.nextflow.log' file for details
    The following invalid input values have been detected:
    * Missing required parameter(s): outdir
    -- Check script '/home/hazzard/.nextflow/assets/nf-core/mag/subworkflows/nf-core/utils_nfschema_plugin/main.nf' at line: 39 or see '.nextflow.log' file for more details
    (base) hazzard@DESKTOP-68FQOKO:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f$ nextflow run .nextflow.log
    N E X T F L O W ~ version 25.04.7
    Launching
    .nextflow.log
    [romantic_fourier] DSL2 - revision: adf043ce82
    ERROR ~ Script compilation error - file : /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f/.nextflow.log - cause: Unexpected input: ':' @ line 1, column 13.
    Sep-16 18:36:53.621 [main] DEBUG nextflow.cli.Launcher - $> nextflow run .nextflow.log ^ 1 error
    j
    s
    • 3
    • 8
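    [Editor's note] The first error just means the run was missing the mandatory --outdir parameter, and the second comes from pointing nextflow run at the log file instead of the pipeline. The usual invocation looks something like this (the output directory name is an arbitrary example):
    Copy code
    # run the mag test profile, supplying the required --outdir
    nextflow run nf-core/mag -profile test,docker --outdir ./mag_results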
  • s

    Sylvia Li

    09/16/2025, 10:17 PM
    Is there a way to not save/output a modules outputs? I am using GUNZIP to uncompress a fasta.gz file but I don't necessarily need it to save it into the user's outputdir. Is it something in Modules.config file I need to do?
    j
    m
    • 3
    • 4
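    [Editor's note] Yes -- publishing is controlled per process via the publishDir directive, so a sketch like the following in conf/modules.config should stop GUNZIP's output from being copied to the user's outdir (the 'GUNZIP' selector is an assumption; it must match the process name as invoked in the workflow):
    Copy code
    // conf/modules.config -- disable publishing for the GUNZIP step
    process {
        withName: 'GUNZIP' {
            publishDir = [ enabled: false ]
        }
    }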
  • j

    Joshua

    09/17/2025, 3:43 AM
    my hpc has 192 cores per node. looks like the pipeline #C01FKFZ57SA is not using it to its full potential. any idea if i could specify something in process {} for slurm to make use of all cores?
    p
    p
    • 3
    • 8
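    [Editor's note] One approach (a sketch; the 'process_high' label follows the usual nf-core base.config convention, so check the pipeline's conf/base.config for the actual label names) is to raise the per-label CPU allocation in a custom config and pass it with -c:
    Copy code
    // custom.config -- pass with: nextflow run ... -c custom.config
    process {
        executor = 'slurm'

        // allow the most CPU-hungry processes to use a full node
        withLabel: 'process_high' {
            cpus = 192
        }
    }
    Depending on the template version, the pipeline may also cap resources globally (e.g. a --max_cpus parameter or a resourceLimits setting), which would need raising as well.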
  • h

    Hovakim Grabski

    09/17/2025, 9:44 AM
    Hi everyone, I am still learning nf-core. I am wondering: I have built a pipeline and a manuscript is in preparation. What is the right way to do this: publish the paper and then register with nf-core, or vice versa?
    j
    • 2
    • 6
  • a

    Agrima Bhatt

    09/17/2025, 12:35 PM
    Hi all, I’m seeing repeated failures in my CI for
    nf-test / docker
    on several test cases (e.g. 1/7, 2/7, etc.) for my pipeline PR, but when I manually run the pipeline locally everything works fine. The pre-commit and linting checks pass, and some nf-test checks (like 6/7) are successful, but most fail after 1–3 minutes. What could be causing these nf-test docker failures in CI, especially when the pipeline runs without issues on my machine? Is there something specific I should check? Any advice on debugging nf-test failures would be appreciated! My PR : https://github.com/nf-core/seqinspector/pull/127
    m
    • 2
    • 1