Nick Eckersley
07/09/2025, 8:12 AM
executor > slurm (9)
[25/e87748] NFC…FASTQC_RAW (N073_run0_raw) | 0 of 4 ✘
[8d/869603] NFC…OCESSING:FASTP (N073_run0) | 0 of 4 ✘
[dc/ec7b0e] NFC…SM259684v1_genomic.fna.gz) | 0 of 1
[- ] NFC…BOWTIE2_PHIX_REMOVAL_ALIGN -
[- ] NFC…EPROCESSING:FASTQC_TRIMMED -
[- ] NFC…AD_PREPROCESSING:CAT_FASTQ -
[- ] NFC…PREPROCESSING:NANOPLOT_RAW -
[- ] NFC…PREPROCESSING:PORECHOP_ABI -
[- ] NFC…EAD_PREPROCESSING:NANOLYSE -
[- ] NFC…EAD_PREPROCESSING:FILTLONG -
[- ] NFC…OCESSING:NANOPLOT_FILTERED -
[- ] NFC…:MAG:CENTRIFUGE_CENTRIFUGE -
[- ] NFC…MAG:MAG:CENTRIFUGE_KREPORT -
[- ] NFCORE_MAG:MAG:KRAKEN2 -
[- ] NFCORE_MAG:MAG:POOL_LONG_READS -
[- ] NFCORE_MAG:MAG:METASPADES -
[- ] NFC…E_MAG:MAG:METASPADESHYBRID -
[- ] NFCORE_MAG:MAG:MEGAHIT -
[- ] NFC…_MAG:MAG:GUNZIP_ASSEMBLIES -
[- ] NFCORE_MAG:MAG:QUAST -
Plus 31 more processes waiting for tasks…
Pulling Singularity image <https://depot.galaxyproject.org/singularity/fastp:0.23.4--h5f740d0_0> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-fastp-0.23.4--h5f740d0_0.img]
Pulling Singularity image <https://depot.galaxyproject.org/singularity/bowtie2:2.4.2--py38h1c8e9b9_1> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-bowtie2-2.4.2--py38h1c8e9b9_1.img]
Pulling Singularity image <https://depot.galaxyproject.org/singularity/fastqc:0.12.1--hdfd78af_0> [cache /home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-fastqc-0.12.1--hdfd78af_0.img]
WARN: Singularity cache directory has not been defined -- Remote image will be stored in the path: /home/neckersl/scratch/private/nfcore_mag/work/singularity -- Use the environment variable NXF_SINGULARITY_CACHEDIR to specify a different location
[nf-core/mag] ERROR: no bins passed the bin size filter specified between --bin_min_size 0 and --bin_max_size null. Please adjust parameters.
ERROR ~ Error executing process > 'NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD (GCA_002596845.1_ASM259684v1_genomic.fna.gz)'
Caused by:
Process `NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD (GCA_002596845.1_ASM259684v1_genomic.fna.gz)` terminated with an error exit status (1)
Command executed:
mkdir bowtie
bowtie2-build --threads 1 GCA_002596845.1_ASM259684v1_genomic.fna.gz GCA_002596845
cat <<-END_VERSIONS > versions.yml
"NFCORE_MAG:MAG:SHORTREAD_PREPROCESSING:BOWTIE2_PHIX_REMOVAL_BUILD":
bowtie2: $(echo $(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*$//')
END_VERSIONS
Command exit status:
1
Command output:
(empty)
Command error:
INFO: Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
INFO: Environment variable SINGULARITYENV_NXF_TASK_WORKDIR is set, but APPTAINERENV_NXF_TASK_WORKDIR is preferred
INFO: Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
WARNING: Skipping mount /var/lib/apptainer/mnt/session/etc/resolv.conf [files]: /etc/resolv.conf doesn't exist in container
bash: .command.run: No such file or directory
Work dir:
/home/neckersl/scratch/private/nfcore_mag/work/dc/ec7b0eb2e8becb57f9f20e81f5f4eb
Container:
/home/neckersl/scratch/private/nfcore_mag/work/singularity/depot.galaxyproject.org-singularity-bowtie2-2.4.2--py38h1c8e9b9_1.img
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
-- Check '.nextflow.log' file for details
-[nf-core/mag] Pipeline completed with errors-
The script I used is the same as one that has worked fine for me previously; only the raw reads are different:
#!/bin/bash
#SBATCH --job-name=nfcore_mag
#SBATCH --output=../logs/%x_%j.out
#SBATCH --error=../logs/%x_%j.err
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --partition=long
# Activate Conda
source /mnt/apps/users/neckersl/conda/etc/profile.d/conda.sh
conda activate nfcore
# Run the pipeline
nextflow run nf-core/mag -r 4.0.0 \
-profile cropdiversityhpc \
--input "$HOME/scratch/private/nfcore_mag/data/N072-75_fb_samplesheet.csv" \
--outdir "$HOME/scratch/private/nfcore_mag/output/spades_fb" \
--gtdb_db /mnt/shared/datasets/databases/gtdb/GTDB_280324 \
-work-dir "$HOME/scratch/private/nfcore_mag/work"
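For reference, I also noticed the warning about the Singularity cache directory; one thing I was considering is pinning a persistent cache directory in a small extra config. This is only a sketch, and the path below is a placeholder rather than my real one:
// extra.config (hypothetical), added with: nextflow run ... -c extra.config
singularity {
    cacheDir = '/home/neckersl/scratch/private/singularity_cache'   // placeholder path, not verified
}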
Any help would be greatly appreciated. Thanks.
Suhan Cho
07/09/2025, 10:45 AM
Jimmy Lail
07/09/2025, 1:52 PM
When I run nf-test test subworkflows/local/diamond/tests/main.nf.tests, I get an output of No tests to execute. This subworkflow executes four modules: NCBIREFSEQDOWNLOAD, DIAMONDPREPARETAXA, DIAMOND_MAKEDB, and DIAMOND_BLASTP. The first two are local modules and nf-test succeeds for them, while the latter two are nf-core installed modules. Here is a link to the GitHub PR: https://github.com/nf-core/proteinannotator/pull/50
Here is the subworkflow main.nf
include { NCBIREFSEQDOWNLOAD } from '../../../modules/local/ncbirefseqdownload/main'
include { DIAMONDPREPARETAXA } from '../../../modules/local/diamondpreparetaxa/main'
include { DIAMOND_MAKEDB } from '../../../modules/nf-core/diamond/makedb/main'
include { DIAMOND_BLASTP } from '../../../modules/nf-core/diamond/blastp/main'
/*
* Pipeline parameters
*/
// params.refseq_release = 'complete'
// params.taxondmp_zip = 'ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz'
// params.taxonmap = 'ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz'
// params.diamond_outfmt = 6
// params.diamond_blast_columns = qseqid
workflow DIAMOND {

    take:
    ch_fasta // channel: [ val(meta), [ fasta ] ]

    main:
    ch_versions = Channel.empty()

    // TODO nf-core: substitute modules here for the modules of your subworkflow
    NCBIREFSEQDOWNLOAD (
        params.refseq_release
    )
    ch_diamond_reference_fasta = NCBIREFSEQDOWNLOAD.out.refseq_fasta
    ch_versions = ch_versions.mix(NCBIREFSEQDOWNLOAD.out.versions.first())

    DIAMONDPREPARETAXA (
        params.taxondmp_zip
    )
    ch_taxonnodes = DIAMONDPREPARETAXA.out.taxonnodes
    ch_taxonnames = DIAMONDPREPARETAXA.out.taxonnames
    ch_versions = ch_versions.mix(DIAMONDPREPARETAXA.out.versions.first())

    DIAMOND_MAKEDB (
        ch_diamond_reference_fasta,
        params.taxonmap, // make default ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz
        ch_taxonnodes,
        ch_taxonnames
    )
    ch_diamond_db = DIAMOND_MAKEDB.out.db
    ch_versions = ch_versions.mix(DIAMOND_MAKEDB.out.versions.first())

    //ch_diamond_db = Channel.of( [ [id:"diamond_db"], file(params.diamond_db, checkIfExists: true) ] )
    DIAMOND_BLASTP (
        ch_fasta,
        ch_diamond_db,
        params.diamond_outfmt,
        params.diamond_blast_columns,
    )

    emit:
    ch_versions = ch_versions.mix(DIAMOND_BLASTP.out.versions.first())
    ch_diamond_tsv = DIAMOND_BLASTP.out.tsv
}
Here is the main.nf.test:
nextflow_workflow {
name "Test Subworkflow DIAMOND"
script "../main.nf"
workflow "DIAMOND"
tag "subworkflows"
tag "subworkflows_"
tag "subworkflows/diamond"
// TODO nf-core: Add tags for all modules used within this subworkflow. Example:
tag "ncbirefseqdownload"
tag "diamondpreparetaxa"
tag "diamond/makedb"
tag "diamond/blastp"
// TODO nf-core: Change the test name preferably indicating the test-data and file-format used
setup {
run("NCBIREFSEQDOWNLOAD") {
script "../../../../modules/local/ncbirefseqdownload/main.nf"
process {
"""
input[0] = 'other'
"""
}
}
run("DIAMONDPREPARETAXA") {
script "../../../../modules/local/diamondpreparetaxa/main.nf"
process {
"""
input[0] = 'ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz'
"""
}
}
run("DIAMOND_MAKEDB") {
script "../../../../modules/nf-core/diamond/makedb/main.nf"
process {
"""
input[0] = [ [id:'test2'], [ NCBIREFSEQDOWNLOAD.out.refseq_fasta ] ]
input[1] = 'ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz'
input[2] = DIAMONDPREPARETAXA.out.taxonnodes
input[3] = DIAMONDPREPARETAXA.out.taxonnames
"""
}
}
run("DIAMOND_BLASTP") {
script "../../../../modules/nf-core/diamond/makedb/main.nf"
process {
"""
input[0] = [ [id:'test'], file(params.modules_testdata_base_path + 'genomics/sarscov2/genome/proteome.fasta', checkIfExists: true) ]
input[1] = DIAMOND_MAKEDB.out.db
input[2] = 6
input[3] = 'qseqid qlen'
"""
}
}
}
test("Test Diamond subworkflow succeeds") {
when {
params {
params.refseq_release = 'complete'
params.taxondmp_zip = 'ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz'
params.taxonmap = 'ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/prot.accession2taxid.gz'
params.diamond_outfmt = 6
params.diamond_blast_columns = 'qseqid'
}
workflow {
"""
input[0] = file("test1.fasta", checkIfExists: true)
"""
}
}
then {
assertAll(
{ assert workflow.success},
{ assert snapshot(workflow.out).match()}
//TODO nf-core: Add all required assertions to verify the test output.
)
}
}
}
Thank you for any and all help.
Grigorii Nos
07/09/2025, 2:08 PM
cat .command.out
-- Check '.nextflow.log' file for details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: https://nf-co.re/docs/usage/troubleshooting
-- Check '.nextflow.log' file for details
Krista Pipho
07/09/2025, 5:59 PM
Michael Beavitt
07/09/2025, 7:33 PM
wave.strategy = ['dockerfile','container']
And the only thing in the pipeline is multiqc.
If I run the pipeline using:
nextflow run main.nf -profile wave,test --outdir test
Then I get an error that the multiqc executable was not found.
(nextflow) mbeavitt@ORIGIN-LT-27:~/Code/Nextflow/test$ ./run.sh
N E X T F L O W ~ version 25.04.6
Launching `main.nf` [sad_gilbert] DSL2 - revision: 3b35870dc7
Input/output options
input : <https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/samplesheet/samplesheet_test_illumina_amplicon.csv>
outdir : test
Institutional config options
config_profile_name : Test profile
config_profile_description: Minimal test dataset to check pipeline function
Generic options
trace_report_suffix : 2025-07-09_20-31-46
Core Nextflow options
runName : sad_gilbert
launchDir : /home/mbeavitt/Code/Nextflow/test
workDir : /home/mbeavitt/Code/Nextflow/test/work
projectDir : /home/mbeavitt/Code/Nextflow/test
userName : mbeavitt
profile : wave,test
configFiles : /home/mbeavitt/Code/Nextflow/test/nextflow.config
!! Only displaying parameters that differ from the pipeline defaults !!
------------------------------------------------------
executor > local (1)
[6b/5aa8b1] ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC [  0%] 0 of 1 ✘
Execution cancelled -- Finishing pending tasks before exit
-[originsciences/repaq2fastq] Pipeline completed with errors-
ERROR ~ Error executing process > 'ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC'
Caused by:
Process `ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC` terminated with an error exit status (127)
Command executed:
multiqc \
--force \
\
--config multiqc_config.yml \
\
\
\
\
\
.
cat <<-END_VERSIONS > versions.yml
"ORIGINSCIENCES_REPAQ2FASTQ:REPAQ2FASTQ:MULTIQC":
multiqc: $( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
Command exit status:
127
Command output:
(empty)
Command error:
.command.sh: line 3: multiqc: command not found
Work dir:
/home/mbeavitt/Code/Nextflow/test/work/6b/5aa8b1d054a0a65276379388988010
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
-- Check '.nextflow.log' file for details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
-- Check '.nextflow.log' file for details
Any ideas? What am I doing wrong? It seems from the .nextflow.log file that the wave container is being requested and returned successfully:
Jul-09 20:31:55.966 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave config: WaveConfig(enabled:true, endpoint:<https://wave.seqera.io>, containerConfigUrl:[], tokensCacheMaxDuration:30m, condaOpts:CondaOpts(mambaImage=mambaorg/micromamba:1.5.10-noble; basePackages=conda-forge::procps-ng, commands=null), strategy:[dockerfile, container], bundleProjectResources:null, buildRepository:null, cacheRepository:null, retryOpts:RetryOpts(delay:450ms, maxDelay:1m 30s, maxAttempts:10, jitter:0.25), httpClientOpts:HttpOpts(), freezeMode:false, preserveFileTimestamp:null, buildMaxDuration:40m, mirrorMode:null, scanMode:null, scanAllowedLevels:null)
Jul-09 20:31:56.011 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Request limiter blocked PT0.001S
Jul-09 20:31:56.012 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave request: <https://wave.seqera.io/v1alpha2/container>; attempt=1 - request: SubmitContainerTokenRequest(towerAccessToken:<redacted>, towerRefreshToken:null, towerWorkspaceId:null, towerEndpoint:<https://api.cloud.seqera.io>, containerImage:quay.io/biocontainers/multiqc:1.29--pyhdfd78af_0, containerFile:null, containerConfig:ContainerConfig(), condaFile:null, containerPlatform:linux/amd64, buildRepository:null, cacheRepository:null, timestamp:2025-07-09T20:31:56.010626154+01:00, fingerprint:68285522bb7722ef8aea452fcae38e1d, freeze:false, format:null, dryRun:false, workflowId:null, containerIncludes:null, packages:null, nameStrategy:null, mirror:false, scanMode:null, scanLevels:null)
Jul-09 20:31:56.653 [Actor Thread 22] DEBUG io.seqera.wave.plugin.WaveClient - Wave response: statusCode=200; body={"requestId":"8f9ffe7cc83e","containerToken":"8f9ffe7cc83e","targetImage":"wave.seqera.io/wt/8f9ffe7cc83e/biocontainers/multiqc:1.29--pyhdfd78af_0","expiration":"2025-07-11T07:31:56.732752494Z","containerImage":"quay.io/biocontainers/multiqc:1.29--pyhdfd78af_0","freeze":false,"mirror":false,"succeeded":true}
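One thing I'm starting to wonder is whether Wave only resolves the image and a container engine still has to be enabled to actually run it. A minimal sketch of what I mean (enabling Docker here is my assumption, not something I've confirmed from the docs):
// nextflow.config sketch: Wave resolves the image, Docker (or Singularity) runs it
docker.enabled = true
wave {
    enabled  = true
    strategy = ['dockerfile','container']
}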
Please find my code at this repo with a run.sh script if you'd like to try and reproduce this: https://github.com/mbeavitt/example_wave_pipeline
Nour El Houda Barhoumi
07/09/2025, 9:27 PM
Jaykishan Solanki
07/10/2025, 7:33 AM
Anna Norén
07/10/2025, 8:08 AM
I have updated the blastn module by adding taxid as input in this PR. I would like to add taxid test data to properly test the update; in which folder should I place this test data?
Shravan Lingampally
07/13/2025, 6:59 PM
Samuel Lampa
07/14/2025, 3:38 PM
The --help flag (nextflow run main.nf --help) no longer prints the available options etc.; instead, I get errors about missing parameters:
ERROR ~ Validation of pipeline parameters failed!
-- Check '.nextflow.log' file for details
The following invalid input values have been detected:
* Missing required parameter(s): input, outdir
* Missing required parameter(s): db
So I assume I have somehow managed to mess up the pipeline initialization code.
Are there any well-known caveats related to this?
I have been trying to carefully compare our code with the one in the template branch, but haven't found any obvious differences as of yet.
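One thing I still need to rule out is the nf-schema help configuration in nextflow.config. A rough sketch of what I believe the current template expects (the plugin version and field names here are assumptions on my part):
plugins {
    id 'nf-schema@2.1.1'   // version is an assumption
}
validation {
    help {
        enabled = true   // my understanding: without this, --help is treated as a normal run and the required params get validated
    }
}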
Nour El Houda Barhoumi
07/14/2025, 4:51 PM
Looking at my BAM files with samtools view, I noticed that the mapping quality (MAPQ) scores range from 0 to 42.
I would like to ask:
• How should I filter out low-quality reads based on MAPQ values?
• What MAPQ threshold would you recommend for bacterial RNA-seq to ensure reliable alignments?
Thank you in advance for your help.
Kanishka Manna
07/14/2025, 11:04 PM
I have 4 longdownload-labeled processes that take a while to download reference databases. I think they take so long because all 4 processes run at the same time, potentially bogging down our internet connection. I would like to force these downloads to run sequentially rather than in parallel. I tried changing the label from medium to longdownload with maxForks = 1 set. However, after testing, since these processes are all named differently, the 4 jobs are still submitted to the SLURM scheduler in parallel. Any suggestions on how to force these jobs to run one at a time? I don't even need truly sequential execution in a specific order; I just want to ensure only one of these jobs runs at a time. Thank you 🙏🏼
Below is a snippet of the config file:
executor {
name = 'slurm'
queueSize = 10
}
// Default resource allocations to run the pipeline locally
process {
executor = 'slurm'
// Set global SLURM options
clusterOptions = '--parsable'
// Common SLURM directives
queue = 'cpu-s1-0' // Override per-label with `clusterOptions` below
cluster = {
"-A cpu-s1-0" // Override per-label if needed
}
errorStrategy = 'retry'
maxRetries = 1
// Resource specifications by label
withLabel: 'low' {
cpus = 2
memory = 8.GB
time = '8h'
clusterOptions = '--partition=cpu-s1-0 --account=cpu-s1-0'
}
withLabel: 'longdownload' {
cpus = 2
memory = 8.GB
time = '168h'
maxForks = 1
clusterOptions = '--partition=cpu-s1-0 --account=cpu-s1-0'
}
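One workaround I've been toying with is routing only these processes to a separate executor whose queue size is capped at 1. This is an untested sketch; using the local executor for the downloads is just an assumption (any second executor with its own queue would do):
executor {
    $local { queueSize = 1 }   // cap the second executor at one concurrent task
}
process {
    withLabel: 'longdownload' {
        executor = 'local'     // downloads would run on the submit host instead of going through SLURM
        maxForks = 1
    }
}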
Jared Andrews
07/15/2025, 2:24 PM
If -r is not specified, is the latest tagged release used by default?
Michael Heskett
07/15/2025, 5:46 PM
Michael Heskett
07/15/2025, 5:47 PM
Michael Heskett
07/15/2025, 7:01 PM
Michael Heskett
07/15/2025, 7:57 PM
Aleksandra Vitkovac
07/17/2025, 9:07 AM
Nour El Houda Barhoumi
07/17/2025, 11:10 AM
Hannah
07/17/2025, 3:59 PM
I noticed that the gunzip version reported in the versions YAML changed from 1.10 to 1.1. The cause might be that org.yaml.snakeyaml.Yaml, introduced in 3.2.0, treats the version number as a float: https://github.com/nf-core/modules/blob/master/subworkflows/nf-core/utils_nfcore_pipeline/main.nf#L81.
The previous code using yaml worked fine, as the version values remained strings (still quoted, i.e. gunzip: '1.10').
I have locally modified utils_nfcore_pipeline/main.nf, but I am wondering whether there is a subworkflow patch command to keep nf-core lint from failing, or whether a fix for this already exists.
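For reference, a quick illustration of the behaviour I think is happening (minimal sketch, assuming snakeyaml is on the classpath, e.g. inside a Nextflow/Groovy script):
import org.yaml.snakeyaml.Yaml

def unquoted = new Yaml().load("gunzip: 1.10")
println unquoted.gunzip    // prints 1.1 -> the bare value is parsed as a float
def quoted = new Yaml().load("gunzip: '1.10'")
println quoted.gunzip      // prints 1.10 -> the quoted value stays a String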
Thank you for your support!
eparisis
07/18/2025, 10:35 AM
I have set the errorStrategy for MaxBin to ignore (roughly as sketched below), as it stops with an error if it can't create bins. My question now is: how can I disable re-executing this module when using -resume? It messes up all the caching downstream and tries to re-run the process every time I resume. What's the best workaround for that?
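For context, this is roughly how I set it (sketch only; the process selector below is from memory and may not match the real process name):
process {
    withName: '.*:MAXBIN2' {
        errorStrategy = 'ignore'   // selector pattern is a guess, adjust to the real process name
    }
}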
Daniel Lundin
07/18/2025, 1:36 PM
Joshua Williams
07/19/2025, 4:44 PM
NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV
terminated with an error exit status (1)
Command executed:
export XDG_CONFIG_HOME="./xdgconfig"
export MPLCONFIGDIR="./mplconfigdir"
export NUMBA_CACHE_DIR="./numbacache"
#convert to relative abundances
qiime feature-table relative-frequency \
--i-table table.qza \
--o-relative-frequency-table relative-table-ASV.qza
#export to biom
qiime tools export \
--input-path relative-table-ASV.qza \
--output-path relative-table-ASV
#convert to tab separated text file "rel-table-ASV.tsv"
biom convert \
-i relative-table-ASV/feature-table.biom \
-o rel-table-ASV.tsv --to-tsv
cat <<-END_VERSIONS > versions.yml
"NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV":
qiime2: $( qiime --version | sed '1!d;s/.* //' )
END_VERSIONS
Command exit status:
1
Command output:
(empty)
Command error:
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Matplotlib created a temporary cache directory at /tmp/matplotlib-tnkujqlh because the default path (mplconfigdir) is not a writable directory; it is highly recommended >
Usage: qiime feature-table relative-frequency [OPTIONS]
Convert frequencies to relative frequencies by dividing each frequency in a
sample by the sum of frequencies in that sample.
Inputs:
--i-table ARTIFACT FeatureTable[Frequency]
The feature table to be converted into relative
frequencies. [required]
Outputs:
--o-relative-frequency-table ARTIFACT FeatureTable[RelativeFrequency]
The resulting relative frequency feature table.
[required]
Miscellaneous:
--output-dir PATH Output unspecified results to a directory
--verbose / --quiet Display verbose output to stdout and/or stderr
during execution of this action. Or silence output if
execution is successful (silence is golden).
--example-data PATH Write example data and exit.
--citations Show citations and exit.
--use-cache DIRECTORY Specify the cache to be used for the intermediate
work of this action. If not provided, the default
cache under $TMP/qiime2/<uname> will be used.
IMPORTANT FOR HPC USERS: If you are on an HPC system
and are using parallel execution it is important to
set this to a location that is globally accessible to
all nodes in the cluster.
--help Show this message and exit.
There was a problem with the command:
(1/1) Invalid value for '--o-relative-frequency-table': '' is not a writable
directory, cannot write output to it.
Work dir:
/hpc-home/jowillia/nextflow/ampliseq_pipeline/work/e1/a7386c0c95e8ade59979cd258a0e82
Container:
/hpc-home/jowillia/nextflow/ampliseq_pipeline/nf-core-ampliseq_2.14.0/singularity-images/quay.io-qiime2-amplicon-2024.10.img
Tip: view the complete command output by changing to the process work dir and entering the command cat .command.out
-- Check ‘.nextflow.log’ file for details
• nextflow.log details
Jul-19 17:38:37.373 [TaskFinalizer-7] DEBUG nextflow.processor.TaskProcessor - Handling unexpected condition for
task: name=NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV; work-dir=/hpc-home/jowillia/nextflow/ampliseq_pipeline/work/e1/a7386c0c95e8ade59979cd258a0e82
error [nextflow.exception.ProcessFailedException]: Process NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV
terminated with an error exit status (1)
Jul-19 17:38:37.377 [TaskFinalizer-8] DEBUG nextflow.processor.TaskProcessor - Handling unexpected condition for
task: name=NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_DIVERSITY:QIIME2_TREE; work-dir=/hpc-home/jowillia/nextflow/ampliseq_pipeline/work/63/793d2edf0a5bef30b22d590957a0e4
error [nextflow.exception.ProcessFailedException]: Process NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_DIVERSITY:QIIME2_TREE
terminated with an error exit status (1)
Jul-19 17:38:37.441 [TaskFinalizer-7] ERROR nextflow.processor.TaskProcessor - Error executing process > 'NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_EXPORT:QIIME2_EXPORT_RELASV'
Tip: you can try to figure out what’s wrong by changing to the process work dir and showing the script file named .command.sh
Then just the details from above in the .out file.
Apologies for the long message. I've been troubleshooting for 3 days and I'm close to giving up. I would be incredibly grateful if anyone has experienced these issues before and could shed some light on them.
Many thanks,
Josh
Eric Samorodnitsky
07/20/2025, 8:23 PM
Haidong Yi
07/21/2025, 2:44 AM
My module test passes in the conda and singularity environments but has errors in the docker environment. The test also works without errors on my MacBook Pro (M-series chip) in the docker environment. Only in the x86_64 environment does it give a 95 exit code, even though the outputs from the program are normal. I also tried compiling the C program from source; it didn't give any errors like the docker environment does. So I think this error is related to the docker exec environment for this program, and I don't know how to fix it.
Nour El Houda Barhoumi
07/21/2025, 9:22 AM
If I use samtools view -q XX to filter my BAM files, does it break the pairing information in paired-end data?
Thank you
Willram Scholz
07/21/2025, 9:44 AM
singularity {
enabled = true
autoMounts = true
}
process {
resourceLimits = [
memory: 500.GB,
cpus: 30,
time: 128.h
]
executor = 'lsf'
scratch = '[path_to_my_scratch_folder]/$LSB_JOBID'
}
executor {
name = 'lsf'
perTaskReserve = false
perJobMemLimit = true
queueSize = 10
submitRateLimit = '3 sec'
}
However, when I try to run the pipeline, the session is aborted because the process 'NFCORE_METHYLSEQ:METHYLSEQ:FASTQ_ALIGN_DEDUP_BISMARK:BISMARK_ALIGN' gets terminated with an error exit status (130). When looking at the individual command log files I find the following:
TERM_MEMLIMIT: job killed after reaching LSF memory usage limit.
Exited with exit code 130.
Resource usage summary:
CPU time : 1288250.00 sec.
Max Memory : 147456 MB
Average Memory : 92024.89 MB
Total Requested Memory : 147456.00 MB
Delta Memory : 0.00 MB
Max Swap : -
Max Processes : 77
Max Threads : 150
Run time : 59302 sec.
Turnaround time : 59305 sec.
However, I can’t find any mention of a memory setting in the individual .command.sh files either:
#!/usr/bin/env bash
set -e # Exit if a tool returns a non-zero status/exit code
set -u # Treat unset variables and parameters as an error
set -o pipefail # Returns the status of the last command to exit with a non-zero status or zero if all successfully execute
set -C # No clobber - prevent output redirection from overwriting files.
bismark \
-1 [filename_1].fq.gz -2 [filename_2].fq.gz \
--genome BismarkIndex \
--bam \
--bowtie2 --unmapped --multicore 8
cat <<-END_VERSIONS > versions.yml
"NFCORE_METHYLSEQ:METHYLSEQ:FASTQ_ALIGN_DEDUP_BISMARK:BISMARK_ALIGN":
bismark: $(echo $(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*$//')
END_VERSIONS
So, TL;DR: I set the memory limit in my config file to 500 GB, but the Bismark alignment terminates after reaching 147456 MB, and I don't know how to fix this issue. I previously increased the memory limit in the config file from 250 GB to 500 GB, but that didn't change anything. Can anybody help me with this problem?
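Is the right fix perhaps to raise the per-process request rather than only resourceLimits? A rough sketch of the kind of override I have in mind (the selector is taken from the error above; the 300.GB value and the time are just my guesses):
process {
    withName: '.*:BISMARK_ALIGN' {
        memory = 300.GB
        time   = 96.h
    }
}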
Many thanks,
Will
Louis Le Nézet
07/21/2025, 1:04 PM
Does anyone know why the test is failing for NARFMAP_ALIGN on GitHub Actions?
https://github.com/nf-core/modules/actions/runs/16416474545/job/46383642317?pr=8724
I've updated the container from:
- 'oras://community.wave.seqera.io/library/narfmap_align:8bad41386eab9997':
- 'community.wave.seqera.io/library/narfmap_align:517a1fed8e4e84c1' }"
+ 'https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/a9/a9634de8646d72c54319cc5683949929af4b38e2245b4bc8d28a6666a1f702d6/data':
+ 'community.wave.seqera.io/library/narfmap_samtools_pigz:77d0682b7dae0251' }"
And I get the following error
> Reading reference input file GRCh38_chr21.fa: 47377349 bytes...
> 2025-07-21 12:08:45 [7f9cbfa44740] Version: 1.4.2
> 2025-07-21 12:08:45 [7f9cbfa44740] argc: 9 argv: dragen-os --build-hash-table true --ht-reference GRCh38_chr21.fa --output-directory narfmap --ht-num-threads 2
> Supressing decoys
>
> Total: 1 sequence, 46709983 bases (42018816 after trimming/padding)
>
> Spawning 1 threads build STR table...
> Encoding binary reference sequence...
> 1 sequence, 46709983 bases (42018816 after trimming/padding)
> .command.sh: line 8: 33 Killed dragen-os --build-hash-table true --ht-reference GRCh38_chr21.fa --output-directory narfmap --ht-num-threads 2
Nick Eckersley
07/21/2025, 6:53 PM
executor > slurm (429)
[- ] NFC…PREPROCESSING:NANOPLOT_RAW -
[- ] NFC…:MAG:CENTRIFUGE_CENTRIFUGE -
[- ] NFC…MAG:MAG:CENTRIFUGE_KREPORT -
[- ] NFCORE_MAG:MAG:KRAKEN2 -
[bf/7560b6] NFC…(ERZ24813857-contig.fa.gz) | 4 of 4, cached: 4 ✔
[5d/8e6266] NFC…AG:MAG:QUAST (SPAdes-N075) | 4 of 4, cached: 4 ✔
[fa/97da1d] NFCORE_MAG:MAG:PRODIGAL (N072) | 4 of 4, cached: 4 ✔
[c1/5de39e] NFC…SEMBLY_BUILD (SPAdes-N072) | 4 of 4, cached: 4 ✔
[0b/876070] NFC…Y_ALIGN (SPAdes-N072-N074) | 16 of 16, cached: 16 ✔
[d0/6423a9] NFC…RIZEBAMCONTIGDEPTHS (N072) | 4 of 4, cached: 4 ✔
[e9/bc7d2b] NFC…NING:CONVERT_DEPTHS (N072) | 4 of 4, cached: 4 ✔
[8a/083b3f] NFC…G:METABAT2_METABAT2 (N073) | 4 of 4, cached: 4 ✔
[64/a9a862] NFC…MAG:BINNING:MAXBIN2 (N072) | 4 of 4, cached: 4 ✔
[93/af97fd] NFC…_MAXBIN2_EXT (SPAdes-N072) | 4 of 4, cached: 4 ✔
[19/8566af] NFC…INNING:SEQKIT_STATS (N072) | 4 of 4, cached: 4 ✔
[56/187e2b] NFC…ASTA (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
[56/cfe9b7] NFC…es-MaxBin2-N072.082.fa.gz) | 428 of 428, cached: 428 ✔
[- ] NFC…:MAG:BINNING:GUNZIP_UNBINS -
[46/c9275a] NFC…PTHS (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
[31/929d7b] NFC…PLOT (SPAdes-MaxBin2-N072) | 8 of 8, cached: 8 ✔
[fe/e4fac3] NFC…:DEPTHS:MAG_DEPTHS_SUMMARY | 1 of 1, cached: 1 ✔
[a0/1c3e67] NFC…:BIN_QC:BUSCO_BUSCO (N072) | 8 of 8, cached: 8 ✔
[8c/4d6059] NFC…C:CONCAT_BINQC_TSV (busco) | 1 of 1, cached: 1 ✔
[c9/d1325d] NFC…classified-unrefined-N072) | 8 of 8, cached: 8 ✔
[f6/d2e3a2] NFC…MAG:MAG:QUAST_BINS_SUMMARY | 1 of 1, cached: 1 ✔
[b4/9836a4] NFC… (SPAdes-MetaBAT2-N072.26) | 428 of 428, retries: 1 ✔
Plus 7 more processes waiting for tasks…
(base) neckersl@gruffalo:~/scratch/private/nfcore_mag/logs$ squeue -u $USER
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
2837979 long nfcore_m neckersl R 1-10:13:03 1 n24-64-384-giles
Thanks