Lis Arend
09/10/2025, 7:25 AM
Benjamin Story
09/10/2025, 2:17 PM
export NXF_HOME=/mnt/sample; export NXF_OPTS='-Xms4g -Xmx6g -XX:+UseG1GC'; cd $NXF_HOME; echo $PWD; nextflow run /mnt/HDD2/test/vep_module/main.nf -with-docker quay.io/biocontainers/ensembl-vep:111.0--pl5321h2a3209d_0 --my_id 'sample' --vcf '/mnt/HDD2/sample/merge.vcf.gz';
I've been getting this intermittent Java crash since updating Java to version 17 a couple of weeks ago (late July) to meet the requirements of Nextflow v25+. It worked before on Java 11 with zero crashes for over a year. All of this is on an Ubuntu server.
I'm launching around 20 tasks in parallel and usually they all work (one crash occurred the day I updated Java), so at first I thought it was due to updating Nextflow. Since then everything had been running smoothly (at least 4 runs of 20 samples each). Now, today, I got 2 crashes (the server RAM was heavily used), so I thought maybe that was it. I killed all processes and relaunched, but then a random different process failed. Any thoughts on the source of this? Maybe some OOM behavior I'm not understanding. I dropped the number of parallel processes from 20 down to 8, but it still happened. Any ideas?
Downloading nextflow dependencies. It may require a few seconds, please wait ..
 N E X T F L O W   ~  version 25.04.6

Launching `/mnt/HDD2/test/vep_module/main.nf` [distraught_khorana] DSL2 - revision: 3a0ff5ed42

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f7a8180e55a, pid=73649, tid=957
#
# JRE version: OpenJDK Runtime Environment (17.0.7+7) (build 17.0.7+7-Ubuntu-0ubuntu118.04)
# Java VM: OpenJDK 64-Bit Server VM (17.0.7+7-Ubuntu-0ubuntu118.04, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [ld-linux-x86-64.so.2+0x1d55a]
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E" (or dumping to /mnt/HDD2/test/core.73649)
#
# An error report file with more information is saved as:
# /mnt/HDD2/test/hs_err_pid73649.log
#
# If you would like to submit a bug report, please visit:
# Unknown
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
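A mitigation sketch, assuming memory pressure is what triggers the native SIGSEGV: cap concurrency and per-task memory in nextflow.config. The directives below are standard Nextflow options; the values are only illustrative for this server.

executor {
    queueSize = 8          // at most 8 tasks in flight on the local executor
}
process {
    maxForks = 8           // per-process concurrency cap
    memory   = '8 GB'      // enforced per task when run with Docker
}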
Eva Gunawan
09/12/2025, 4:45 PM
if (ch_no_ntc == "false") {
    CREATE_REPORT (
        stuff...
    )
}
if (ch_no_ntc == "true") {
    CREATE_REPORT_NO_NTC (
        stuff...
    )
}
When I view ch_no_ntc, it shows "true", but the module is seemingly skipped even though the if condition is met. I have even added the modules to another troubleshooting if statement, right where ch_no_ntc is being created:
if (ch_kraken_ntc == "empty" && ch_ntc_check == "empty") {
    Channel.of("false")
        .set{ ch_no_ntc }
    CREATE_REPORT (
        stuff...
    )
} else {
    Channel.of("true")
        .set{ ch_no_ntc }
    CREATE_REPORT_NO_NTC (
        stuff...
    )
}
For some reason, it still ends up being skipped regardless of where I put it. I can run both of the modules just fine outside of the if statements. Just as a test, I've tried using a param defined in the nextflow.config. For example:
if (params.ntc_present == "true") {
    CREATE_REPORT (
        stuff...
    )
}
if (params.ntc_present == "false") {
    CREATE_REPORT_NO_NTC (
        stuff...
    )
}
Both modules work if there is a param set like this, but I need to determine if it is present inside the workflow itself. Any suggestions/advice? Thanks in advance 😄
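The likely culprit (a diagnosis sketch, not a certainty): ch_no_ntc is a channel object, so ch_no_ntc == "true" compares the object itself once, while the workflow graph is being built, never the value the channel will emit; plain if statements therefore can't gate on it. Routing has to go through channel operators. A minimal sketch, where ch_report_in is a hypothetical stand-in for whatever inputs both modules take:

// Append the flag to each element, then route on it with branch().
ch_routed = ch_report_in
    .combine(ch_no_ntc)                       // [ ...inputs, flag ]
    .branch {
        with_ntc: it[-1] == 'false'           // NTC present -> standard report
        no_ntc:   it[-1] == 'true'
    }

CREATE_REPORT        ( ch_routed.with_ntc.map { it[0..-2] } )   // drop the flag again
CREATE_REPORT_NO_NTC ( ch_routed.no_ntc.map   { it[0..-2] } )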
Luis Heinzlmeier
09/15/2025, 8:25 AM
nf-test test tests/default.nf.test --profile +singularity --update-snapshot
However, when I run nf-test in Codespaces, I get the following error message (I do not get this error when I run the pipeline locally):
Sep-14 11:48:46.861 [Actor Thread 66] ERROR nextflow.extension.OperatorImpl - @unknown
org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:654)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:161)
at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:57)
at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:43)
at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:136)
at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:116)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:291)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:216)
at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:141)
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:167)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:507)
at org.yaml.snakeyaml.Yaml.load(Yaml.java:448)
at nextflow.file.SlurperEx.load(SlurperEx.groovy:67)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at Script_5c4e8d4051efa81e.processVersionsFromYAML(Script_5c4e8d4051efa81e:82)
at jdk.internal.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:343)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at Script_5c4e8d4051efa81e$_softwareVersionsToYAML_closure2.doCall(Script_5c4e8d4051efa81e:101)
at jdk.internal.reflect.GeneratedMethodAccessor256.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at nextflow.extension.MapOp$_apply_closure1.doCall(MapOp.groovy:56)
at jdk.internal.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at groovy.lang.Closure.call(Closure.java:433)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.onMessage(DataflowOperatorActor.java:108)
at groovyx.gpars.actor.impl.SDAClosure$1.call(SDAClosure.java:43)
at groovyx.gpars.actor.AbstractLoopingActor.runEnhancedWithoutRepliesOnMessages(AbstractLoopingActor.java:293)
at groovyx.gpars.actor.AbstractLoopingActor.access$400(AbstractLoopingActor.java:30)
at groovyx.gpars.actor.AbstractLoopingActor$1.handleMessage(AbstractLoopingActor.java:93)
at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:132)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Sep-14 11:48:46.891 [Actor Thread 66] DEBUG nextflow.Session - Session aborted -- Cause: while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
Test [5d0fca1c] '-profile test' Assertion failed:
assert workflow.success
| |
workflow false
FAILED (534.124s)
Assertion failed:
1 of 2 assertions failed
Nextflow stdout:
ERROR ~ while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
-- Check script '/workspaces/hadge/subworkflows/nf-core/utils_nfcore_pipeline/main.nf' at line: 82 or see '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for more details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
-- Check '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for details
Nextflow stderr:
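For context, the YAML that fails to parse is a versions.yml that a task polluted with a matplotlib warning ("mkdir -p failed ... Read-only file system" leaking into the captured output). One common workaround (an assumption based on the log, not a confirmed fix) is to give matplotlib a writable config dir in every task via nextflow.config:

env {
    // task work dirs are writable even when $HOME is not (e.g. in Codespaces)
    MPLCONFIGDIR = './.mplconfig'
}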
Nadia Sanseverino
09/15/2025, 2:48 PM
nf-core modules test deeptools/bigwigcompare
nf-core modules test deeptools/bigwigcompare --profile docker
nf-core modules test deeptools/bigwigcompare --profile conda
(all launched from the modules root dir) just run indefinitely.
But if I run nf-test test modules/nf-core/deeptools/bigwigcompare/tests/main.nf.test
it works. It's just that on the tutorial page https://nf-co.re/docs/tutorials/tests_and_test_data/nf-test_comprehensive_guide there's a broken link for '3. Testing modules', and the Modules tutorial doesn't say much about nf-test, so I can't find a reason why the nf-core modules test command doesn't work.
Thank you to anyone willing to look into this 😊
Andries van Tonder
09/15/2025, 3:06 PM
Helen Huang
09/15/2025, 9:04 PM
ERROR ~ Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- See log file for details
Here’s the error in the log:
DEBUG nextflow.processor.PublishDir - Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- attempt: 4; reason: Input/output error
Ubuntu server: 18.04 LTS
Nextflow version: 25.04.6
SMB version: 3.0 (tried different versions, none worked)
The NAS was mounted with noperm, which means everyone can write without permission checks, so it's not a permission issue. I understand NFS mounting might work better, but we have reasons to use SMB mounting. (The NAS is mounted on many different Windows and Mac systems as well, so NFS mounting would cause issues.)
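A diagnostic sketch, not a confirmed fix: publishDir's mode option decides which filesystem call Nextflow makes, so switching away from 'copy' can sidestep the failing SMB code path. The process selector below is hypothetical, and 'link' (hard link) only works because the work dir and output dir sit on the same mount:

process {
    withName: 'SAMPLESHEET_CHECK' {
        publishDir = [
            path: "${params.outdir}/pipeline_info",
            mode: 'link'   // or 'symlink' / 'move' instead of 'copy'
        ]
    }
}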
Thank you all! Our lab used to use pipelines we built ourselves, but now we want to move towards using Nextflow core pipelines.
Martin Rippin
09/16/2025, 11:35 AM
[ ['id1', 'id1/path/to/MetricsOutput.tsv', 'id1/path/to/RunCompletionStatus.xml'], ['id2', 'id2/path/to/MetricsOutput.tsv', 'id2/path/to/RunCompletionStatus.xml'], ... ]
I am struggling to define the structure correctly inside the process. I tried something like:
input:
tuple tuple(val(id), path(tsv), path(xml))
but that does not work. Also, the files will be mounted with their `basename`s, which I also don't know how to handle. Does anyone have an idea how to solve this? I was thinking of just passing the root dir of all files and globbing inside the process, but maybe there is a more sophisticated way?
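A sketch of the usual pattern: nested tuples are not supported as process inputs, but one flat tuple per sample matches this structure directly (COLLECT_METRICS and ch_runs are hypothetical names):

process COLLECT_METRICS {
    input:
    tuple val(id), path(tsv), path(xml)   // one sample's files per task

    script:
    """
    echo ${id}: ${tsv} ${xml}
    """
}

workflow {
    ch_runs = Channel.of(
            ['id1', 'id1/path/to/MetricsOutput.tsv', 'id1/path/to/RunCompletionStatus.xml'],
            ['id2', 'id2/path/to/MetricsOutput.tsv', 'id2/path/to/RunCompletionStatus.xml']
        )
        .map { id, tsv, xml -> tuple(id, file(tsv), file(xml)) }

    COLLECT_METRICS(ch_runs)
}

With one sample per task the basename problem disappears, since each task stages only its own MetricsOutput.tsv; if all samples must land in a single task, staging the same-named files into numbered subdirectories with stageAs: "?/*" (the trick used by the nf-core samtools/merge module) is one option.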
Hannes Kade
09/16/2025, 5:47 PM
.nextflow.log
[romantic_fourier] DSL2 - revision: adf043ce82
ERROR ~ Script compilation error
- file : /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f/.nextflow.log
- cause: Unexpected input: ':' @ line 1, column 13.
Sep-16 18:36:53.621 [main] DEBUG nextflow.cli.Launcher - $> nextflow run .nextflow.log
^
1 error
Sylvia Li
09/16/2025, 10:17 PM
Joshua
09/17/2025, 3:43 AM
Hovakim Grabski
09/17/2025, 9:44 AM
Agrima Bhatt
09/17/2025, 12:35 PM
nf-test / docker
on several test cases (e.g. 1/7, 2/7, etc.) for my pipeline PR, but when I manually run the pipeline locally everything works fine. The pre-commit and linting checks pass, and some nf-test checks (like 6/7) are successful, but most fail after 1–3 minutes.
What could be causing these nf-test docker failures in CI, especially when the pipeline runs without issues on my machine?
Is there something specific I should check?
Any advice on debugging nf-test failures would be appreciated! My PR: https://github.com/nf-core/seqinspector/pull/127
Uri David Akavia
09/18/2025, 9:48 AM
Nadia Sanseverino
09/18/2025, 2:39 PM
.bigwig and one readable .bedgraph).
Citing the modules guidelines: "All non-mandatory command-line tool non-file arguments MUST be provided as a string via the $task.ext.args variable".
The test-writing guidelines seem to suggest that I need a nextflow.config to successfully launch tests.
I need a kind soul to take a look at my snippets and confirm if I'm all set to update my branch.
• from main.nf
input:
tuple val(meta) , path(bigwig1), path(bigwig2)
tuple val(meta2), path(blacklist)

output:
tuple val(meta), path("*.{bw,bedgraph}"), emit: output
path "versions.yml"                     , emit: versions

when:
task.ext.when == null || task.ext.when

script:
def args          = task.ext.args   ?: ""
def prefix        = task.ext.prefix ?: "${meta.id}"
def blacklist_cmd = blacklist ? "--blackListFileName ${blacklist}" : ""
def extension     = args.contains("--outFileFormat bedgraph") ? "bedgraph" : "bw"
"""
bigwigCompare \\
    --bigwig1 $bigwig1 \\
    --bigwig2 $bigwig2 \\
    --outFileName ${prefix}.${extension} \\
    --numberOfProcessors $task.cpus \\
    $blacklist_cmd \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    deeptools: \$(bigwigCompare --version | sed -e "s/bigwigCompare //g")
END_VERSIONS
"""
• from main.nf.tests
test("homo_sapiens - 2 bigwig files - bedgraph output") {
config "./nextflow.config"
when {
params {
deeptools_bigwigcompare_args = '--outFileFormat bedgraph'
}
process {
"""
def bigwig1 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S2.RPKM.bw', checkIfExists: true)
def bigwig2 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S3.RPKM.bw', checkIfExists: true)
input[0] = [
[ id:'test' ],
bigwig1,
bigwig2
]
input[1] = [
[ id:'no_blacklist' ],
[]
]
"""
}
}
then {
assertAll(
{ assert process.success },
{ assert snapshot(process.out.output,
process.out.versions)
.match()
}
)
}
}
• from nextflow.config
process {
    withName: 'DEEPTOOLS_BIGWIGCOMPARE' {
        ext.args = params.deeptools_bigwigcompare_args
    }
}
Chenyu Jin (Amend)
09/19/2025, 12:19 PM
If I use each() in the process, then it's not input as a file the way path() is. How can I process each path?
workflow {
    files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true).view()
    index_reference(files, params.threads)
}
process index_reference {
    input:
    each(input_ref)
    val(threads)
    ...
}
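A sketch of the usual fix: a queue channel already triggers one task per file, so a plain path() input both iterates and stages each file; each path(...) is only needed to repeat tasks over a secondary list.

process index_reference {
    input:
    path(input_ref)   // staged into the task dir, one task per file
    val(threads)

    script:
    """
    echo indexing ${input_ref} with ${threads} threads
    """
}

workflow {
    files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true)
    index_reference(files, params.threads)
}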
eparisis
09/22/2025, 2:27 PM
Evangelos Karatzas
09/23/2025, 7:10 AM
Megan Justice
09/24/2025, 4:12 PM
shaojun sun
09/24/2025, 5:39 PM
Fabian Egli
09/26/2025, 5:44 AM
Fabian Egli
09/26/2025, 8:15 AM
command.sh: line xx: 282 Killed [...] command -with -parameters [...]
James Fellows Yates
09/26/2025, 9:11 AM
Andrea Bagnacani
09/26/2025, 2:26 PM
I'm using nf-core/samtools/merge to merge some BAM files.
The input channel that I provide to this process has meta fields id and sample_name.
The former is used by samtools merge to infer the file prefix for merging, while the latter is used in my pipeline to keep provenance info.
When I run my pipeline, this performs the merge as intended. However, when I run nf-core/samtools/merge's stub test, meta.sample_name ends up being interpreted as the relative path of a Docker mount point, and since Docker mount points must be absolute, the stub test is (in my case) bound to fail:
$ nf-test test tests/01.stub.nf.test --profile docker
...
Command exit status:
125
Command output:
(empty)
Command error:
docker: Error response from daemon: invalid volume specification: 'HG00666:HG00666': invalid mount config for type "volume": invalid mount path: 'HG00666' mount path must be absolute
Run 'docker run --help' for more information
From `.command.run`:
...
nxf_launch() {
docker run -i --cpu-shares 2048 --memory 12288m -e "NXF_TASK_WORKDIR" -e "NXF_DEBUG=${NXF_DEBUG:=0}" \
\
-v HG00666:HG00666 \ # <-- meta.sample_name becomes a mount point
\
-v /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8:/home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8 -w "$NXF_TASK_WORKDIR" -u $(id -u):$(id -g) --name $NXF_BOXID quay.io/biocontainers/samtools:1.22.1--h96c455f_0 /bin/bash -c "eval $(nxf_container_env); /bin/bash /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8/.command.run nxf_trace"
}
...
How do I make samtools merge ignore meta.sample_name when the Docker CLI invocation is built?
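Since the -v HG00666:HG00666 line shows the bare string reaching the task as if it were a path, one defensive pattern is to strip provenance-only fields before calling the module and re-join them later on meta.id. A sketch, where ch_bams and ch_provenance are hypothetical names for a [ meta, bam ] channel and its side channel:

ch_provenance = ch_bams.map { meta, bam -> [ meta.id, meta.sample_name ] }
ch_for_merge  = ch_bams.map { meta, bam -> [ meta.subMap('id'), bam ] }   // drop sample_name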
Quentin Blampey
09/30/2025, 2:41 PM
$HOME directory. I fixed it for Docker with containerOptions = '', but for Singularity I still receive an error saying "Read-only file system".
Does anyone know how to fix that?
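Two common workarounds, sketched as assumptions rather than a confirmed fix: either give the container a throwaway writable overlay, or point HOME at the task work dir, which is always writable:

singularity {
    runOptions = '--no-home --writable-tmpfs'   // writable tmpfs overlay, no host $HOME bind
}
process {
    beforeScript = 'export HOME=$PWD'           // most tools respect $HOME
}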
Kurayi
10/02/2025, 8:31 PM
Quentin Blampey
10/08/2025, 1:19 PM
.zarr directory.
Everything works well for "standard" usage (e.g., Docker / Singularity on an HPC / standard file system), but I have some staging issues on AWS Batch specifically. When one process updates the zarr (i.e., creating a new subdir), the new subdir is not passed to the following process, although I specify it in my inputs/outputs (and, again, it works nicely when not on the cloud).
Has anyone faced similar issues? Do you have any idea how to fix it?
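A workaround sketch, assuming the root cause is that cloud staging does not pick up in-place changes to a staged input directory: copy the zarr into the task dir, modify the copy, and declare it as a fresh output so the S3 upload captures the new subdirs (UPDATE_ZARR and my_tool are illustrative names):

process UPDATE_ZARR {
    input:
    path zarr

    output:
    path "updated.zarr"

    script:
    """
    cp -r ${zarr} updated.zarr
    my_tool --zarr updated.zarr    # hypothetical step that adds a subdir
    """
}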
Luuk Harbers
10/08/2025, 3:22 PM
(getAttribute) just like normally with fasta files etc. These files are on GitHub LFS, so we specified them like this in the config (as opposed to raw.githubusercontent.com):
gnomad = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_gnomad.vcf.gz"
dbsnp = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_dbsnp.vcf.gz"
onekgenomes = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_1kgenomes.vcf.gz"
colors = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_colors.vcf.gz"
This works perfectly and downloads them fine. However, the caching doesn't work and I'm unsure why. It will always restage the files from GitHub LFS, and the result is that processes don't cache properly on resume. I'll put a nextflow.log snippet with hashes in a reply here.
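A sketch, assuming the restaging itself is what breaks the cache: fetch each LFS file once into a persistent storeDir, so downstream processes hash stable local copies instead of re-downloaded ones (FETCH_REF and params.ref_cache are illustrative names):

process FETCH_REF {
    storeDir params.ref_cache ?: 'ref_cache'

    input:
    val url

    output:
    path "*.vcf.gz"

    script:
    """
    curl -fsSL -O '${url}'
    """
}

Calling FETCH_REF(Channel.of(params.gnomad, params.dbsnp, params.onekgenomes, params.colors)) then yields local paths whose hashes no longer depend on what GitHub serves.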
Sylvia Li
10/08/2025, 6:07 PM
Priyanka Surana
10/10/2025, 7:24 AM