# help
  • Lis Arend
    09/10/2025, 7:25 AM
    Hi everyone, I have a question: I want to annotate a single VCF file with VEP inside my Nextflow pipeline. I'm quite new to Nextflow and am not sure whether to use the nf-core Sarek pipeline, the subworkflow vcf_annotate_ensemblvep, or the module ensemblvep_vep. Can you give me some guidance on when to use which?
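    A rough rule of thumb: Sarek is a full variant-calling pipeline where annotation is only the last step; the vcf_annotate_ensemblvep subworkflow bundles VEP together with indexing of the annotated VCF; the ensemblvep_vep module is the single VEP step alone. For one VCF inside your own pipeline, installing just the module is usually the lightest option. A minimal sketch, assuming the module was installed with the nf-core tooling -- the exact input signature varies between module revisions, so check modules/nf-core/ensemblvep/vep/main.nf; paths and parameter values here are illustrative:

        // Minimal sketch: annotate one VCF with the nf-core ensemblvep/vep module.
        include { ENSEMBLVEP_VEP } from './modules/nf-core/ensemblvep/vep/main'

        workflow {
            // [ meta, vcf, optional custom extra files ]
            ch_vcf = Channel.of([ [ id:'sample' ], file(params.vcf), [] ])

            ENSEMBLVEP_VEP(
                ch_vcf,
                'GRCh38',                                       // genome assembly
                'homo_sapiens',                                 // species
                '113',                                          // cache version (illustrative)
                [ [ id:'vep_cache' ], file(params.vep_cache) ], // VEP cache directory
                [ [:], [] ],                                    // reference fasta (optional)
                []                                              // extra files (optional)
            )
        }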
  • Benjamin Story
    09/10/2025, 2:17 PM
    Any thoughts? I'm launching a bunch of instances of the nf-core VEP module (in parallel) as such:
    export NXF_HOME=/mnt/sample; export NXF_OPTS='-Xms4g -Xmx6g -XX:+UseG1GC'; cd $NXF_HOME; echo $PWD; nextflow run /mnt/HDD2/test/vep_module/main.nf -with-docker quay.io/biocontainers/ensembl-vep:111.0--pl5321h2a3209d_0 --my_id 'sample' --vcf '/mnt/HDD2/sample/merge.vcf.gz';
    I've been getting this intermittent Java crash since updating Java to version 17 a couple of weeks ago (late July), as required by Nextflow v25+. It worked on Java 11 with zero crashes for over a year. All of this is on an Ubuntu server. I'm launching around 20 tasks in parallel, and usually they all work; one crash occurred the day I updated Java, so I put it down to the Nextflow update, and since then everything had been running smoothly (at least 4 runs of 20 samples each). Today I got 2 crashes while the server RAM was heavily used, so I wondered whether that was the cause. I killed all processes and relaunched, but then a random different process failed. Any thoughts on the source of this? Maybe some OOM behaviour I'm not understanding? I dropped the number of parallel processes from 20 down to 8 but it still happened. Any ideas?
    Downloading nextflow dependencies. It may require a few seconds, please wait ..
     N E X T F L O W   ~  version 25.04.6

    Launching `/mnt/HDD2/test/vep_module/main.nf` [distraught_khorana] DSL2 - revision: 3a0ff5ed42

    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x00007f7a8180e55a, pid=73649, tid=957
    #
    # JRE version: OpenJDK Runtime Environment (17.0.7+7) (build 17.0.7+7-Ubuntu-0ubuntu118.04)
    # Java VM: OpenJDK 64-Bit Server VM (17.0.7+7-Ubuntu-0ubuntu118.04, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
    # Problematic frame:
    # C  [ld-linux-x86-64.so.2+0x1d55a]
    #
    # Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E" (or dumping to /mnt/HDD2/test/core.73649)
    #
    # An error report file with more information is saved as:
    # /mnt/HDD2/test/hs_err_pid73649.log
    #
    # If you would like to submit a bug report, please visit:
    #   Unknown
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    #
  • Eva Gunawan
    09/12/2025, 4:45 PM
    Hi everyone, I'm struggling to run 2 modules when they are inside an if statement. Essentially I have this if statement:
    if (ch_no_ntc == "false") {
        CREATE_REPORT (
            stuff...
        )
    }
    if (ch_no_ntc == "true") {
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    When I view ch_no_ntc, it shows "true", but the module is seemingly skipped even though the if condition is met. I have even added the modules to another troubleshooting if statement, where ch_no_ntc is being created:
    if (ch_kraken_ntc == "empty" && ch_ntc_check == "empty") {
        Channel.of("false")
            .set { ch_no_ntc }
        CREATE_REPORT (
            stuff...
        )
    } else {
        Channel.of("true")
            .set { ch_no_ntc }
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    For some reason it still ends up being skipped regardless of where I put it. I can run both modules just fine outside of the if statements. Just as a test, I've tried using a param set in nextflow.config, for example:
    if (params.ntc_present == "true") {
        CREATE_REPORT (
            stuff...
        )
    }
    if (params.ntc_present == "false") {
        CREATE_REPORT_NO_NTC (
            stuff...
        )
    }
    Both modules work if there is a param set like this, but I need to determine whether the NTC is present inside the workflow itself. Any suggestions/advice? Thanks in advance 😄
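    For context, a likely reason this pattern fails: the if blocks in a workflow body are evaluated once, when the workflow is constructed, and at that point ch_no_ntc is a channel object, never the string "true", so no branch ever fires. One hedged sketch of a channel-level workaround, with ch_stuff standing in for the real report inputs:

        // Gate each process on a filtered channel instead of an `if`:
        // a process simply gets zero tasks when its input channel is empty.
        ch_report_in = ch_no_ntc.filter { it == "false" }.combine(ch_stuff)
        CREATE_REPORT(ch_report_in)

        ch_report_no_ntc_in = ch_no_ntc.filter { it == "true" }.combine(ch_stuff)
        CREATE_REPORT_NO_NTC(ch_report_no_ntc_in)

        // combine() prepends the gating flag to each tuple; if the process
        // doesn't expect it, drop it with .map { it[1..-1] } first.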
  • Luis Heinzlmeier
    09/15/2025, 8:25 AM
    Hello everyone, I hope this is the right place for my question. I am working on the hadge pipeline and would like to update the snapshots with nf-test test tests/default.nf.test --profile +singularity --update-snapshot. However, when I run nf-test in Codespaces, I get the following error message (I do not get this error when I run the pipeline locally):
    Sep-14 11:48:46.861 [Actor Thread 66] ERROR nextflow.extension.OperatorImpl - @unknown
    org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping
     in 'reader', line 2, column 5:
        echo mkdir -p failed for path /h ... 
        ^
    expected <block end>, but found '<scalar>'
     in 'reader', line 2, column 79:
       ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                         ^
    
        at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:654)
        at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:161)
        at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:57)
        at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:43)
        at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:136)
        at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:116)
        at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:291)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:216)
        at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
        at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
        at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
        at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
        at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
        at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
        at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
        at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:141)
        at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:167)
        at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
        at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:507)
        at org.yaml.snakeyaml.Yaml.load(Yaml.java:448)
        at nextflow.file.SlurperEx.load(SlurperEx.groovy:67)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at Script_5c4e8d4051efa81e.processVersionsFromYAML(Script_5c4e8d4051efa81e:82)
        at jdk.internal.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:343)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at Script_5c4e8d4051efa81e$_softwareVersionsToYAML_closure2.doCall(Script_5c4e8d4051efa81e:101)
        at jdk.internal.reflect.GeneratedMethodAccessor256.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
        at nextflow.extension.MapOp$_apply_closure1.doCall(MapOp.groovy:56)
        at jdk.internal.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at groovy.lang.Closure.call(Closure.java:433)
        at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
        at groovyx.gpars.dataflow.operator.DataflowOperatorActor.onMessage(DataflowOperatorActor.java:108)
        at groovyx.gpars.actor.impl.SDAClosure$1.call(SDAClosure.java:43)
        at groovyx.gpars.actor.AbstractLoopingActor.runEnhancedWithoutRepliesOnMessages(AbstractLoopingActor.java:293)
        at groovyx.gpars.actor.AbstractLoopingActor.access$400(AbstractLoopingActor.java:30)
        at groovyx.gpars.actor.AbstractLoopingActor$1.handleMessage(AbstractLoopingActor.java:93)
        at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:132)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)
    Sep-14 11:48:46.891 [Actor Thread 66] DEBUG nextflow.Session - Session aborted -- Cause: while parsing a block mapping
     in 'reader', line 2, column 5:
        echo mkdir -p failed for path /h ... 
        ^
    expected <block end>, but found '<scalar>'
     in 'reader', line 2, column 79:
       ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                         ^
    Test [5d0fca1c] '-profile test' Assertion failed: 
    
    assert workflow.success
        |    |
        workflow false
    
    FAILED (534.124s)
    
     Assertion failed: 
      
     1 of 2 assertions failed
      
     Nextflow stdout:
      
     ERROR ~ while parsing a block mapping
      in 'reader', line 2, column 5:
         echo mkdir -p failed for path /h ... 
         ^
     expected <block end>, but found '<scalar>'
      in 'reader', line 2, column 79:
        ... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ... 
                          ^
      
      
      -- Check script '/workspaces/hadge/subworkflows/nf-core/utils_nfcore_pipeline/main.nf' at line: 82 or see '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for more details
     ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
      
      -- Check '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for details
     Nextflow stderr:
  • Nadia Sanseverino
    09/15/2025, 2:48 PM
    Hi all! I'd like to ask for a clarification: I'm developing a new module and I've come to testing and snapshot time. I'm currently running tests on my Asus VivoBook15 (if it makes any difference), and the commands
    nf-core modules test deeptools/bigwigcompare,
    nf-core modules test deeptools/bigwigcompare --profile docker, and
    nf-core modules test deeptools/bigwigcompare --profile conda
    (all launched from the modules root dir) just run forever. But if I run
    nf-test test modules/nf-core/deeptools/bigwigcompare/tests/main.nf.test
    it works. On the tutorial page https://nf-co.re/docs/tutorials/tests_and_test_data/nf-test_comprehensive_guide the link for '3. Testing modules' is broken, and the modules tutorial doesn't say much about nf-test, so I can't find a reason why the nf-core modules test command doesn't work. Thank you to anyone willing to look into this 😊
  • Andries van Tonder
    09/15/2025, 3:06 PM
    I've been working on the new version of bactmap, which was rewritten from scratch using a new template. This means it's fundamentally changed from the original version and has a different commit history (I tried to create the dev/master PR). Could I get some help resolving this? I've done the steps on the pipeline release checklist (https://nf-co.re/docs/checklists/pipeline_release).
  • Helen Huang
    09/15/2025, 9:04 PM
    Hi all! I would like to get some help with using Nextflow on an Ubuntu server with I/O directly on an SMB/CIFS-mounted NAS. The following output came from running the test profile of the atacseq pipeline. Here's the error message printed to screen:
    *ERROR ~ Failed to publish file*: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- See log file for details
    Here's the error in the log:
    DEBUG nextflow.processor.PublishDir - Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- attempt: 4; *reason: Input/output error*
    Ubuntu server: 18.04 LTS. Nextflow version: 25.04.6. SMB version: 3.0 (tried different versions; none worked). The NAS was mounted with noperm, which means everyone can write without permission checks, so it's not a permission issue. I understand NFS mounting might work better, but we have reasons to use SMB (the NAS is also mounted on many different Windows and Mac systems, so NFS mounting would cause issues). Thank you all! Our lab used to use pipelines we built ourselves, but now we want to move towards using nf-core pipelines.
  • Martin Rippin
    09/16/2025, 11:35 AM
    hi guys! I am trying to write a process that runs a tool to collect data from several analysis output directories containing the same files. I collected all the necessary files in one big tuple with subtuples, looking something like:
    [
        ['id1', 'id1/path/to/MetricsOutput.tsv', 'id1/path/to/RunCompletionStatus.xml'],
        ['id2', 'id2/path/to/MetricsOutput.tsv', 'id2/path/to/RunCompletionStatus.xml'],
        ...
    ]
    I am struggling to define the structure correctly inside the process. I tried something like:
    input:
    tuple tuple(val(id), path(tsv), path(xml))
    but that does not work. Also, the files will be mounted by their `basename`s, which I also don't know how to handle. Does anyone have an idea how to solve this? I was thinking of just passing the root dir of all files and globbing inside the process, but maybe there is a more sophisticated way?
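    One hedged sketch of the usual approach: don't nest the tuple qualifier; declare a single flat tuple and feed the channel of subtuples directly, so Nextflow stages each pair of files and launches one task per run (identically named files then never collide, since each task gets its own work dir). The collector command is a placeholder:

        // Sketch: one task per [id, tsv, xml] element of the channel.
        process COLLECT_METRICS {
            input:
            tuple val(id), path(tsv), path(xml)

            output:
            path("${id}_summary.tsv")

            script:
            """
            some_collector --metrics ${tsv} --status ${xml} --out ${id}_summary.tsv
            """
        }

        workflow {
            ch_runs = Channel.of(
                ['id1', file('id1/path/to/MetricsOutput.tsv'), file('id1/path/to/RunCompletionStatus.xml')],
                ['id2', file('id2/path/to/MetricsOutput.tsv'), file('id2/path/to/RunCompletionStatus.xml')]
            )
            COLLECT_METRICS(ch_runs)
        }

    If everything has to land in a single task instead, the stageAs option on path inputs can stage files under distinct names; check the Nextflow docs for its exact semantics.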
  • Hannes Kade
    09/16/2025, 5:47 PM
    hello! I'm new to Nextflow and fairly new to bioinformatics in general, particularly using the command line. I'm trying to run a test of the mag pipeline, but I'm getting an error I don't understand how to resolve:

    ERROR ~ Validation of pipeline parameters failed! -- Check '.nextflow.log' file for details
    The following invalid input values have been detected:
    * Missing required parameter(s): outdir
    -- Check script '/home/hazzard/.nextflow/assets/nf-core/mag/subworkflows/nf-core/utils_nfschema_plugin/main.nf' at line: 39 or see '.nextflow.log' file for more details

    (base) hazzard@DESKTOP-68FQOKO:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f$ nextflow run .nextflow.log
    N E X T F L O W ~ version 25.04.7
    Launching .nextflow.log [romantic_fourier] DSL2 - revision: adf043ce82
    ERROR ~ Script compilation error
    - file : /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f/.nextflow.log
    - cause: Unexpected input: ':' @ line 1, column 13.
    Sep-16 18:36:53.621 [main] DEBUG nextflow.cli.Launcher - $> nextflow run .nextflow.log
    ^ 1 error
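    For what it's worth, the two errors above are independent: the first run only lacked the required --outdir parameter, and the second accidentally passed the log file to nextflow run as if it were a pipeline script. A hedged sketch of the intended invocation (profile choice and output path are assumptions):

        nextflow run nf-core/mag -profile test,docker --outdir ./results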
  • Sylvia Li
    09/16/2025, 10:17 PM
    Is there a way to not save/output a module's outputs? I am using GUNZIP to uncompress a fasta.gz file, but I don't necessarily need to save it into the user's outdir. Is it something I need to do in the modules.config file?
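    A common pattern, sketched from memory (check your pipeline's conf/modules.config for the exact shape): publishing is configured per process, so it can be disabled for GUNZIP alone:

        process {
            withName: 'GUNZIP' {
                // stop this process's outputs being copied to the outdir
                publishDir = [ enabled: false ]
            }
        }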
  • Joshua
    09/17/2025, 3:43 AM
    my hpc has 192 cores per node. looks like the pipeline #C01FKFZ57SA is not using it to its full potential. any idea if I could specify something in process {} for slurm to make use of all the cores?
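    One hedged sketch: with the Slurm executor each task requests its own cpus, so a pipeline only fills a 192-core node if individual processes ask for that many. nf-core pipelines typically tag their heavyweight steps with the process_high label, so an override like the following (the values are illustrative, not a recommendation) raises what single tasks may request:

        process {
            executor = 'slurm'

            // let the heavy processes request (up to) a full node
            withLabel: 'process_high' {
                cpus = 192
            }
        }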
  • Hovakim Grabski
    09/17/2025, 9:44 AM
    Hi everyone, I am still learning nf-core. I have built a pipeline and a manuscript is in preparation. What is the right way to do this: publish the paper and then register the pipeline with nf-core, or vice versa?
  • Agrima Bhatt
    09/17/2025, 12:35 PM
    Hi all, I’m seeing repeated failures in my CI for
    nf-test / docker
    on several test cases (e.g. 1/7, 2/7, etc.) for my pipeline PR, but when I manually run the pipeline locally everything works fine. The pre-commit and linting checks pass, and some nf-test checks (like 6/7) are successful, but most fail after 1–3 minutes. What could be causing these nf-test docker failures in CI, especially when the pipeline runs without issues on my machine? Is there something specific I should check? Any advice on debugging nf-test failures would be appreciated! My PR : https://github.com/nf-core/seqinspector/pull/127
  • Uri David Akavia
    09/18/2025, 9:48 AM
    Hi All. I've been modifying https://github.com/nf-core/modules/tree/master/modules/nf-core/custom/filterdifferentialtable to output up- and down-filtered genes as additional files. You can see my progress so far at https://github.com/akaviaLab/modules/commit/6545261ae4e33553e9cd5b39fb1d9007c0c8bb10. My question is: do I need to modify the meta variable for the up/down files? Right now meta.id holds the sample name, and it will be identical for all three files. If I need to modify it, how should I do so? I've tried adapting the code shown in https://nfcore.slack.com/archives/CE6SDBX2A/p1755000935927579?thread_ts=1754998962.637019&cid=CE6SDBX2A to modify meta.id, but I've been getting errors. Should I modify it in the workflows that call the module? I'm worried about the modification because the downstream analysis uses meta.id to generate the output files, and if it is identical for up/down/all, there might be some confusion.
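    If the three outputs do need distinct ids, one hedged channel-side sketch that avoids touching the module itself -- the output channel names here are assumptions, not the module's real emit names:

        ch_up = CUSTOM_FILTERDIFFERENTIALTABLE.out.up_filtered
            .map { meta, table -> [ meta + [ id: "${meta.id}_up" ], table ] }

        ch_down = CUSTOM_FILTERDIFFERENTIALTABLE.out.down_filtered
            .map { meta, table -> [ meta + [ id: "${meta.id}_down" ], table ] }

    meta + [...] creates a copy of the map, so the original meta carried by other channels is left untouched.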
  • Nadia Sanseverino
    09/18/2025, 2:39 PM
    Hello everybody! I tried to implement a parameter to choose between two different extensions of the same file (one binary .bigwig and one readable .bedgraph). Citing the modules guidelines: "All _non-mandatory_ command-line tool _non-file_ arguments MUST be provided as a string via the $task.ext.args variable". The test-writing docs seem to suggest that I need a nextflow.config to successfully launch the tests. I need a kind soul to take a look at my snippets and confirm whether I'm all set to update my branch.
    • from main.nf
    input:
    tuple val(meta) , path(bigwig1)     , path(bigwig2)
    tuple val(meta2), path(blacklist)

    output:
    tuple val(meta), path("*.{bw,bedgraph}"), emit: output
    path "versions.yml"                     , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args                                  ?: ""
    def prefix = task.ext.prefix                              ?: "${meta.id}"
    def blacklist_cmd = blacklist                             ? "--blackListFileName ${blacklist}" : ""
    def extension = args.contains("--outFileFormat bedgraph") ? "bedgraph"                         : "bw"

    """
    bigwigCompare \\
        --bigwig1 $bigwig1 \\
        --bigwig2 $bigwig2 \\
        --outFileName ${prefix}.${extension} \\
        --numberOfProcessors $task.cpus \\
        $blacklist_cmd \\
        $args

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        deeptools: \$(bigwigCompare --version | sed -e "s/bigwigCompare //g")
    END_VERSIONS
    """
    • from tests/main.nf.test
    test("homo_sapiens - 2 bigwig files - bedgraph output") {
    
            config "./nextflow.config"
    
            when {
                params {
                    deeptools_bigwigcompare_args = '--outFileFormat bedgraph'
                }
                process {
                    """
                    def bigwig1 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S2.RPKM.bw', checkIfExists: true)
                    def bigwig2 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S3.RPKM.bw', checkIfExists: true)
    
                    input[0] = [
                        [ id:'test' ],
                        bigwig1, 
                        bigwig2
                    ]
                    input[1] = [
                        [ id:'no_blacklist' ],
                        []
                    ]
                    """
                }
            }
    
            then {
                assertAll(
                    { assert process.success },
                    { assert snapshot(process.out.output,                                
                                      process.out.versions)
                                      .match()
                    }
                )
            }
        }
    • from nextflow.config
    process {
        withName: 'DEEPTOOLS_BIGWIGCOMPARE' {
            ext.args = params.deeptools_bigwigcompare_args
        }
    }
  • Chenyu Jin (Amend)
    09/19/2025, 12:19 PM
    hey all, I've run into a problem when I have many files that I want to run in parallel in the same process. If I take them in with each() in the process, they are not staged as files the way path() inputs are. How can I process each path?
    workflow {
        files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true).view()
        index_reference(files, params.threads)
    }

    process index_reference {
        input:
        each(input_ref)
        val(threads)

        ...
    }
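    A hedged sketch of the usual fix: declare the input as path() and feed the queue channel directly. Nextflow already launches one task per channel element, in parallel, and stages each file into the task work dir; each is for repeating a task over every value of a list and does not stage files. The indexer command below is a placeholder:

        process index_reference {
            input:
            path(input_ref)
            val(threads)

            output:
            path("${input_ref}.idx")

            script:
            """
            some_indexer --threads ${threads} ${input_ref} > ${input_ref}.idx
            """
        }

        workflow {
            files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true)
            index_reference(files, params.threads)
        }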
  • eparisis
    09/22/2025, 2:27 PM
    Hi there! Is there a way to rename files inside a channel and gzip/gunzip them with a Nextflow function inside the workflow code, or is passing them through a process the only way?
  • Evangelos Karatzas
    09/23/2025, 7:10 AM
    Is there currently a problem with AWS tests for pipeline release PRs? https://github.com/nf-core/proteinfamilies/actions/runs/17918528153/job/51008881450?pr=114
  • Megan Justice
    09/24/2025, 4:12 PM
    Hey, all! I'm running some Nextflow pipelines in an AWS EC2 instance and am having issues with speed/throughput. Is anyone knowledgeable about optimizing pipelines on AWS who could help me out?
  • shaojun sun
    09/24/2025, 5:39 PM
    Hi there! Is there a pipeline to do WES analysis? Thanks!
  • Fabian Egli
    09/26/2025, 5:44 AM
    how can I select the architecture for a quay.io image and apply that patch to a workflow when running it?
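    If the aim is just to force a particular platform variant of a multi-arch quay.io image under Docker, one hedged sketch (Singularity would need a different mechanism):

        // nextflow.config: pull/run the linux/amd64 variant of each image
        docker.runOptions = '--platform linux/amd64'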
  • Fabian Egli
    09/26/2025, 8:15 AM
    I'm experiencing a process getting killed in a Docker container and don't know how to figure out why it is being killed. Does anyone here know? At first I thought it was a resource limit issue, but the error I got did not indicate that.
    command.sh: line xx:   282 Killed [...] command -with -parameters [...]
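    A bare "Killed" with no stack trace is very often the kernel OOM killer, even when nothing mentions memory. Two quick hedged checks, assuming you can still reach the host and know the container id:

        # did Docker record an out-of-memory kill for the container?
        docker inspect --format '{{.State.OOMKilled}}' <container-id>

        # any kernel OOM-killer entries on the host?
        dmesg | grep -iE 'killed process|out of memory'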
  • James Fellows Yates
    09/26/2025, 9:11 AM
    Any fans of docker and symlink 'nesting' in Nextflow processes? (help!) https://community.seqera.io/t/how-to-handle-in-nextflow-docker-mounting-of-symlinked-files-within-a-symlinked-directory/2381 (I made a reprex!)
  • Andrea Bagnacani
    09/26/2025, 2:26 PM
    Dear all, I'm using nf-core/samtools/merge to merge some BAM files. The input channel that I provide to this process has the meta fields id and sample_name. The former is used by samtools merge to infer the file prefix for merging, while the latter is used in my pipeline to keep provenance info. When I run my pipeline, it performs the merge as intended. However, when I run `nf-core/samtools/merge`'s stub test, meta.sample_name ends up being interpreted as the relative path of a Docker mount point, and since Docker mount points must be absolute, the stub test is (in my case) bound to fail:
    $ nf-test test tests/01.stub.nf.test --profile docker
    ...
    Command exit status:
        125
      Command output:
        (empty)
      Command error:
        docker: Error response from daemon: invalid volume specification: 'HG00666:HG00666': invalid mount config for type "volume": invalid mount path: 'HG00666' mount path must be absolute
        Run 'docker run --help' for more information
    From `.command.run`:
    ...
    nxf_launch() {
        docker run -i --cpu-shares 2048 --memory 12288m -e "NXF_TASK_WORKDIR" -e "NXF_DEBUG=${NXF_DEBUG:=0}" \
            \
            -v HG00666:HG00666 \  # <-- meta.sample_name becomes a mount point
            \
        -v /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8:/home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8 -w "$NXF_TASK_WORKDIR" -u $(id -u):$(id -g) --name $NXF_BOXID quay.io/biocontainers/samtools:1.22.1--h96c455f_0 /bin/bash -c "eval $(nxf_container_env); /bin/bash /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8/.command.run nxf_trace"
    }
    ...
    How do I make samtools merge ignore meta.sample_name when the docker CLI is built?
  • Quentin Blampey
    09/30/2025, 2:41 PM
    Hello! I have one process that needs to write to the $HOME directory. I fixed it for Docker with containerOptions = '', but for Singularity I still get a "Read-only file system" error. Does anyone know how to fix that?
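    One hedged possibility, untested: Singularity mounts the container image read-only by default, so a writable tmpfs overlay sometimes helps; --writable-tmpfs is a real Singularity flag, but whether it covers this $HOME case is an assumption:

        // nextflow.config
        singularity.runOptions = '--writable-tmpfs'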
  • Kurayi
    10/02/2025, 8:31 PM
    A non-Nextflow-related question here: is there any comprehensive website that lists PhD offers in Europe?
  • Quentin Blampey
    10/08/2025, 1:19 PM
    Hi everyone, I'm developing a pipeline that works on objects stored as zarr directories. In short, this means that whenever a process creates a new output, it's a subdirectory inside this .zarr directory. Everything works well for "standard" usage (e.g., Docker / Singularity on an HPC with a standard file system), but I have some staging issues on AWS Batch specifically. When one process updates the zarr (i.e. creates a new subdir), the new subdir is not passed to the following process, although I specify it in my inputs/outputs (and, again, it works nicely when not on the cloud). Has anyone faced similar issues? Any idea how to fix it?
  • Luuk Harbers
    10/08/2025, 3:22 PM
    Caching question: we are fetching a set of files from GitHub using the igenomes config (and getAttribute), just as one normally would with fasta files etc. These files are on GitHub LFS, so we specified them like this in the config (as opposed to raw.githubusercontent.com):

    gnomad      = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_gnomad.vcf.gz"
    dbsnp       = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_dbsnp.vcf.gz"
    onekgenomes = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_1kgenomes.vcf.gz"
    colors      = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_colors.vcf.gz"

    This works perfectly and downloads them fine. However, the caching doesn't work and I'm unsure why: the files are always restaged from GitHub LFS, so processes don't cache properly on resume. I'll put a nextflow.log snippet with hashes in a reply here.
  • Sylvia Li
    10/08/2025, 6:07 PM
    If I am using a custom pipeline from someone else that uses custom local modules and I run nf-core modules update --all, would that mess with anything? Is it recommended, to keep the versions of nf-core modules updated?
  • Priyanka Surana
    10/10/2025, 7:24 AM
    For a new in-house pipeline, each module takes between 8 s and 5 min. This is extremely inefficient on the HPC. Is there a way to push the entire pipeline into a single job instead of running each module as a separate job? We cannot run anything on the head node. Reposting here from #C0364PKGWJE
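    One hedged sketch: submit the whole run as a single scheduler job and let Nextflow execute every task with the local executor inside that allocation, so short tasks stop paying queue latency (sizing the job and capping concurrency are left to you; the profile name is illustrative):

        // nextflow.config: profile used only inside a pre-allocated job
        profiles {
            single_job {
                process.executor = 'local'
            }
        }

    launched e.g. with sbatch --cpus-per-task=32 --wrap 'nextflow run main.nf -profile single_job'.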