Moritz Banse
05/21/2025, 8:56 PM
NFCORE_RNASEQ:RNASEQ:ALIGN_STAR:STAR_ALIGN (RAP1_UNINDUCED_REP2)
terminated with an error exit status (112)
Command executed:
STAR \
--genomeDir star \
--readFilesIn input1/RAP1_UNINDUCED_REP2_primary.fastq.gz \
--runThreadN 4 \
--outFileNamePrefix RAP1_UNINDUCED_REP2. \
\
--sjdbGTFfile genome_gfp.gtf \
--outSAMattrRGline 'ID:RAP1_UNINDUCED_REP2' 'SM:RAP1_UNINDUCED_REP2' \
--quantMode TranscriptomeSAM --twopassMode Basic --outSAMtype BAM Unsorted --readFilesCommand zcat --runRNGseed 0 --outFilterMultimapNmax 20 --alignSJDBoverhangMin 1 --outSAMattributes NH HI AS NM MD --outSAMstrandField intronMotif --quantTranscriptomeSAMoutput BanSingleEnd
if [ -f RAP1_UNINDUCED_REP2.Unmapped.out.mate1 ]; then
mv RAP1_UNINDUCED_REP2.Unmapped.out.mate1 RAP1_UNINDUCED_REP2.unmapped_1.fastq
gzip RAP1_UNINDUCED_REP2.unmapped_1.fastq
fi
if [ -f RAP1_UNINDUCED_REP2.Unmapped.out.mate2 ]; then
mv RAP1_UNINDUCED_REP2.Unmapped.out.mate2 RAP1_UNINDUCED_REP2.unmapped_2.fastq
gzip RAP1_UNINDUCED_REP2.unmapped_2.fastq
fi
cat <<-END_VERSIONS > versions.yml
"NFCORE_RNASEQRNASEQALIGN_STAR:STAR_ALIGN":
star: $(STAR --version | sed -e "s/STAR_//g")
samtools: $(echo $(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*$//')
gawk: $(echo $(gawk --version 2>&1) | sed 's/^.*GNU Awk //; s/, .*$//')
END_VERSIONS
Command exit status:
112
Command output:
(empty)
Command error:
Exiting because of FATAL ERROR: could not create FIFO file RAP1_UNINDUCED_REP2._STARtmp/tmp.fifo.read1
SOLUTION: check the if run directory supports FIFO files.
If run partition does not support FIFO (e.g. Windows partitions FAT, NTFS), re-run on a Linux partition, or point --outTmpDir to a Linux partition.
May 21 20:45:33 ...... FATAL ERROR, exiting
Work dir:
/mnt/c/Users/moeba/work/3f/dedf759e8b7595dbf55ccf6f4b6fd9
Container:
quay.io/nf-core/htslib_samtools_star_gawk:311d422a50e6d829
Tip: when you have fixed the problem you can continue the execution adding the option -resume
to the run command line
-- Check '.nextflow.log' file for details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: https://nf-co.re/docs/usage/troubleshooting
-- Check '.nextflow.log' file for details
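For context: the work directory above is on the Windows NTFS mount (/mnt/c/...), which cannot create the FIFO files STAR needs. A minimal sketch of the usual fix under WSL, with a hypothetical path, is to keep the Nextflow work directory on the Linux filesystem, either with -w on the command line or in nextflow.config:

    // nextflow.config (sketch; adjust the path to your WSL home)
    // Keeping workDir off /mnt/c lets STAR create its _STARtmp FIFOs.
    workDir = '/home/moeba/nextflow-work'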
Moritz Banse
05/21/2025, 8:57 PM
Shengchen Xie
05/21/2025, 10:18 PM
workflow/main.nf, is that appropriate if I just make a citation in the README?
Felix Kummer
05/22/2025, 7:52 AM
assets/NO_FILE
and my main Nextflow config contains aod = "$projectDir/assets/NO_FILE"
as the default. The schema entry looks like this:
"aod": {
"type": "string",
"default": "${projectDir}/assets/NO_FILE",
"fa_icon": "fas fa-spray-can",
"description": "Custom Aerosol Optical Depth data.",
"help_text": "Directory containing a lookup table with custom Aerosol Optical Depth data. For the concrete format see the [FORCE docs](<https://force-eo.readthedocs.io/en/latest/components/lower-level/level2/depend.html>). This can be disregarded in most cases.",
"format": "path"
},
When I run nf-core pipelines schema build, I get:
✨ Default for 'params.aod' in the pipeline config does not match schema. (schema: '<class 'str'>: ${projectDir}/assets/NO_FILE' | config: '<class 'str'>: <some_path_on_my_system>/nf-core-rangeland/assets/NO_FILE'). Update pipeline schema? [y/n]:n
This is obviously caused by $projectDir
being evaluated only in the config and not in the schema.
Can I just ignore this, or is there a more nf-core-coherent way to implement optional inputs?
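One pattern used in several nf-core pipelines, sketched here rather than prescribed, is to default the parameter to null (so no $projectDir ends up in the schema) and fall back to the placeholder file only where the channel is built:

    // nextflow.config
    params.aod = null

    // in the workflow: substitute the NO_FILE placeholder at channel creation
    ch_aod = params.aod
        ? Channel.fromPath(params.aod, checkIfExists: true)
        : Channel.fromPath("${projectDir}/assets/NO_FILE")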
Luca
05/22/2025, 2:32 PM
Diego Alvarez Saravia
05/25/2025, 3:59 PM
.command.trace
files are being created correctly, but they don’t appear in the global trace file or the HTML report. I’m seeing this error in the log:
[TaskFinalizer-2] DEBUG nextflow.trace.TraceRecord - Not a valid trace `realtime` value: '2'
Tried running the tests profiles of:
• mag 4.0.0 and sarek 3.5.1, slurm cluster + singularity
• mag 4.0.0, local + docker
None of these configurations work.
However, running rnaseq-nf
works, so I think it could be something more closely related to nf-core pipelines.
Louis Le Nézet
05/26/2025, 11:49 AM
Arthur
05/27/2025, 1:17 AM
groupKey + groupTuple wizardry?
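For reference, the idiom usually meant by that, sketched with illustrative names (ch_split and meta.n_chunks are assumptions):

    // Attach the expected group size to the key so groupTuple() can close each
    // group as soon as all of its items have arrived, instead of waiting for
    // the whole upstream channel to finish.
    ch_split
        .map { meta, chunk -> tuple( groupKey(meta.id, meta.n_chunks), chunk ) }
        .groupTuple()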
Nick Eckersley
05/27/2025, 11:12 AM
Sreeram Chandra Murthy Peela
05/27/2025, 7:11 PM
Michael Beavitt
05/29/2025, 10:35 AM
process BOOL_VARIANTS_EXIST {
    tag "$meta.id"
    label 'process_low'

    input:
    tuple val(meta), path(vcf)

    output:
    env 'VAR_EXIST'

    script:
    """
    # Count non-header lines to decide whether the VCF contains any variants
    if [ "\$(zgrep -cv '#' ${vcf})" -eq 0 ]; then
        VAR_EXIST=0
    else
        VAR_EXIST=1
    fi
    """
}
This feels a little roundabout though, and I wanted to ask if it's possible to modify the meta map in the process and emit it with an additional field 'vars_exist': 0 or 'vars_exist': 1? I've also considered that perhaps I should be using a Groovy function for this instead, or some kind of workflow syntax rather than a process.
I guess I would love to do something like:
// SNIP
output:
tuple val(meta + ['vars_exist':$VAR_EXIST]), path(vcf)
but clearly this is not valid syntax (?)
Would appreciate any advice!
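A sketch of one way to get close to that (whether env() is allowed inside a tuple output depends on the Nextflow version, so treat that line as an assumption): emit the flag next to the tuple and merge it into meta in the workflow, since the meta map itself should not be mutated inside the process:

    // in the process
    output:
    tuple val(meta), path(vcf), env('VAR_EXIST'), emit: checked

    // in the workflow: fold the flag into a new meta map
    BOOL_VARIANTS_EXIST.out.checked
        .map { meta, vcf, vars_exist -> tuple( meta + [ vars_exist: vars_exist.toInteger() ], vcf ) }
        .set { ch_vcf_flagged }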
Louis Le Nézet
05/30/2025, 9:42 AM
cellranger-arc/mkfastq
I observed a big difference in the number of reads depending on the number of CPUs used.
Simon told me:
    If this tool is dependent on the number of CPUs to produce reproducible output, I would explicitly request the appropriate amount inside the tool, alongside the label.
However, I don't know how to do that or what he means by it... Does anybody know?
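A sketch of how that suggestion is usually read (the cellranger flags shown are the standard --localcores/--localmem ones, but double-check them against cellranger-arc mkfastq --help): pin an explicit CPU count in the process alongside the label, and pass that same number to the tool so it does not silently use every core on the node:

    process CELLRANGERARC_MKFASTQ {
        label 'process_high'
        cpus   16        // explicit request, not just whatever the label resolves to
        memory 64.GB

        input:
        path run_dir
        path samplesheet

        output:
        path "mkfastq_out", emit: fastq_dir

        script:
        """
        cellranger-arc mkfastq \\
            --run ${run_dir} \\
            --csv ${samplesheet} \\
            --output-dir mkfastq_out \\
            --localcores ${task.cpus} \\
            --localmem ${task.memory.toGiga()}
        """
    }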
ramya ranaganathan
05/30/2025, 1:06 PM
Louis Le Nézet
05/30/2025, 2:04 PM
Ninghui Du
05/30/2025, 11:34 PM
    Pipeline completed successfully
However, I noticed that many processes such as RSEM_CALCULATEEXPRESSION, SAMTOOLS_SORT, SAMTOOLS_STATS, and others did not run — they were still listed as "waiting for tasks" even after the pipeline finished.
I’ve tried different pipeline versions, including -r 3.16.0
and -r dev
, and the issue is consistent. How can I troubleshoot or fix this? Any help would be greatly appreciated!
Neerja Katiyar
06/01/2025, 9:10 PM
export_plots: false
in config, or remove the --export-plots command line flag
plot | Failure adding logo to the plot: cannot identify image file _io.BytesIO object at 0x155546b1f880
Rayan Hassaïne
06/02/2025, 8:14 AM
Samuel Lampa
06/02/2025, 12:35 PM
samplesheetToList()
function, and I try to instead pick up the path from a channel, like so:
https://github.com/genomic-medicine-sweden/gms_16S/blob/78-consolidate-pipeline-in[…]ORKAROUND/subworkflows/local/utils_nfcore_taco_pipeline/main.nf
... but the pipeline seems to just stall when test-running, where the same test used to finish pretty quickly when not doing this.
I have a hunch that the problem is in my runValidation() function, in that I somehow don't "drive" the channel there in a proper way, since it is not returning any channel from the function:
https://github.com/genomic-medicine-sweden/gms_16S/blob/78-consolidate-pipeline-in[…]ORKAROUND/subworkflows/local/utils_nfcore_taco_pipeline/main.nf
I tried adding a .view()
call at the end there, to force the channels to be consumed, but it doesn't seem to help.
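For what it's worth, a minimal sketch of the shape that usually works (names and the schema path are assumptions, since the linked code is truncated here): have the function return the channel it builds, then assign and consume that return value in the calling workflow so it is actually wired into the dataflow graph:

    // Return the derived channel instead of leaving it dangling inside the function.
    def runValidation(ch_samplesheet) {
        return ch_samplesheet.map { samplesheet ->
            samplesheetToList(samplesheet, "${projectDir}/assets/schema_input.json")
        }
    }

    // In the subworkflow:
    ch_parsed = runValidation(ch_input)
    ch_parsed.flatMap().set { ch_samples }   // one item per samplesheet row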
lyfing
06/02/2025, 6:24 PM
Derrik Gratz
06/02/2025, 10:36 PM
maciej -
06/03/2025, 10:25 AM
Jon Lim
06/03/2025, 10:33 AM
Yi
06/04/2025, 12:01 AM
less nextflow-metatdenovo.out
NXF_HOME=/scratch/prj/cd_omics/IADR/nxf_home_link
NXF_WORK=/scratch/prj/cd_omics/IADR/metatdenovo_1/metatdenovo_run1/work
Running in: /scratch/prj/cd_omics/IADR/metatdenovo_1/metatdenovo_run1
N E X T F L O W ~ version 24.10.5
Launching https://github.com/nf-core/metatdenovo [small_mclean] DSL2 - revision: c42a8d4a0c [1.1.1]
ERROR ~ Can't open cache DB: /cephfs/volumes/hpc_data_prj/cd_omics/ce528200-f1f4-42d4-90ac-34e944b900f9/IADR/metatdenovo_1/metatdenovo_run1/.nextflow/cache/45d8139a-89f5-4bc9-a010-1a238fcf5a30/db
Nextflow needs to be executed in a shared file system that supports file locks.
Alternatively, you can run it in a local directory and specify the shared work
directory by using the `-w` command line option.
-- Check '.nextflow.log' file for details
I am wondering if anyone knows how to fix this error while still being able to use -resume? Many thanks!
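The error text itself points at the usual fix: launch Nextflow from a directory whose filesystem supports file locks (the cache DB path above resolves onto the cephfs volume, which apparently does not), and keep the work directory on the shared storage via -w or the workDir setting. A config-side sketch, reusing the work path already shown above:

    // nextflow.config in the new, lock-friendly launch directory
    // Equivalent to passing -w on the command line.
    workDir = '/scratch/prj/cd_omics/IADR/metatdenovo_1/metatdenovo_run1/work'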
Slackbot
06/04/2025, 9:54 AM
Nicholas Youngblut
06/04/2025, 6:01 PM
output:
path "bcl_output/Undetermined_R1_001.fastq.gz", emit: fastq_r1_undet
path "bcl_output/Undetermined_R2_001.fastq.gz", emit: fastq_r2_undet, optional: true
The following causes the pipeline to stall forever on this process:
path "bcl_output/Undetermined_R1_001.fastq.gz", emit: fastq_r1_undet
path "bcl_output/Undetermined_R2_001.fastq.gz", emit: fastq_r2_undet, optional: true
path "bcl_output/**/*_R1_001.fastq.gz", emit: fastq_r1_det
If I remove the recursive glob, the pipeline completes:
path "bcl_output/Undetermined_R1_001.fastq.gz", emit: fastq_r1_undet
path "bcl_output/Undetermined_R2_001.fastq.gz", emit: fastq_r2_undet, optional: true
path "bcl_output/*_R1_001.fastq.gz", emit: fastq_r1_det
Any ideas why a recursive glob would cause Nextflow to stall forever? I can't even control-c to interrupt the pipeline. I have to hard-kill the Nextflow job.
This pipeline was working fine up until recently. I'm using Nextflow 25.04.3. This pipeline is critical for our operations.
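Not an answer to the stall itself, but a possible workaround sketch while it is being debugged (process and channel names are placeholders): emit the whole output directory once and resolve the nested R1 files in the workflow rather than with a recursive glob in the output block:

    // in the process output block
    path "bcl_output", emit: outdir

    // in the workflow
    BCLCONVERT.out.outdir
        .flatMap { dir -> files("${dir}/**/*_R1_001.fastq.gz") }
        .set { ch_fastq_r1_det }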
Guillaume Louvel
06/05/2025, 9:59 AM
meta.yml
schema (is there a specific channel for it?), for a module.
I have a module where I would like to specify alternative input types, like:
input:
  - - foo:
        type: file or directory
or
        type: ["file", "directory"]
Currently not possible, but how about allowing it?
Yuxin Ning
06/05/2025, 3:48 PM
Nils Homer
06/05/2025, 10:03 PM
nf-core/modules
using nf-core modules bump-versions fgbio/fastqtobam
but I get the error:
Could not download container tags: Could not find singularity container for fgbio
I am using nf-core 3.3.1. Any guidance would be appreciated.
Carolin Schwitalla
06/06/2025, 8:45 AM
fl:filesystem:InvalidArgument: Absolute path not permitted.
My processes are compiled matlab applications that have problems finding my input directory when provided symlinks.
• I tried converting symlinks to real paths in the script block - which worked locally but not on GitHub CI.
• I tried to convert symlinks to real paths within my MATLAB compiled application - works locally but not on GitHub.
• I tried stageInMode 'copy' and 'rellink' (only for the test profile) - 'copy' did not work because my application cannot find the input_dir, and 'rellink' does not work on GitHub.
• I changed only the first process when testing all of these options, not the others; not sure if this would be a problem, but it always fails on the first process.
• I changed the output file paths from results/file.mat to ./results/file.mat - didn't work.
• It could be a MATLAB problem …
this is the PR https://github.com/nf-core/lsmquant/pull/15
Maybe someone has an idea what is going wrong 🙏Slackbot
06/06/2025, 9:29 AM