icbi-lab / nextNEOpi

nextNEOpi: a comprehensive pipeline for computational neoantigen prediction

License: Other

Perl 3.12% HTML 0.57% R 27.73% Python 18.42% Nextflow 49.52% Shell 0.65%
neoantigens pipeline nextflow

nextneopi's Introduction

nextNEOpi overview

Nextflow NeoEpitope predictions Pipeline

The pipeline takes FASTQ files from tumor and normal samples (WES or WGS), and optionally RNAseq reads from the tumor, to predict neoepitopes.

The pipeline uses the following tools:

  • MuTect2
  • MuTect1
  • VarScan2
  • Strelka2/Manta
  • Sequenza
  • ASCAT
  • CNVkit
  • OptiType
  • HLA-HD
  • pVACseq (netMHCpan, netMHCIIpan, mhcflurry)
  • NeoFuse
  • mixMHC2pred
  • mixcr
  • blastp

It outputs a VCF file with the annotated and filtered SNPs and indels called by each of the callers, and a high-confidence (hc) VCF file listing only variants that were called by a minimum of 2 of the callers. All VCF files are annotated with VEP. In addition, germline variants are called using HaplotypeCaller, and a phased VCF for pVACseq is generated as well. Copy number variations are analyzed using CNVkit, ASCAT, and Sequenza. Tumor purity is estimated by ASCAT and Sequenza and is used to derive the clonality measure for the predicted neoantigens. Tumor mutational burden (TMB) is calculated for all variants over the entire read-covered genome and for coding variants on read-covered exons.

HLA class I and class II alleles are predicted with OptiType and HLA-HD. Class I and Class II neoepitopes are predicted with pVACseq using netMHCpan, netMHCIIpan and mhcflurry. In addition, mixMHC2pred is used as a complementary Class II neoepitope predictor. Fusion neoantigens are predicted with NeoFuse. All predicted neoepitopes are searched via BLAST against protein databases (UniProt, RefSeq) to ensure that they are not present in a "non-aberrant" protein. The CSiN immunogenicity score is reported for Class I, Class II and combined neoepitopes. A GBM model [1] is used to predict immunogenicity scores for MHC class I single nucleotide variant (SNV) neoantigens 8-11 amino acid residues in length. Finally, MiXCR is run to predict the TCR and BCR repertoire.

[1] https://github.com/vincentlaboratories/neoag/.

(Figure: nextNEOpi pipeline overview)

Hardware recommendations

nextNEOpi is designed to run on high-memory multi-core servers (recommended: > 16 cores, min. 64 GB RAM, > 5 TB of free disk space). For the analysis of larger cohorts we strongly recommend running nextNEOpi on an HPC cluster with multiple compute nodes that meet these specifications. However, by tuning the memory and CPU parameters in params.config and process.config (see the sketch below), it should also be possible to run nextNEOpi on systems with fewer CPU and memory resources.
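
A minimal, illustrative sketch of such tuning (process name and values are examples only, not the shipped defaults) using standard Nextflow process selectors in conf/process.config:

process {
    // illustrative values only; adjust to your system's capabilities
    withName: Bwa {
        cpus   = 8
        memory = '32 GB'
    }
}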

1. Installation

1.1 Nextflow

The command below may be used to install Nextflow. Please see also the installation instructions at: https://www.nextflow.io/index.html#GetStarted

curl -s https://get.nextflow.io | bash

1.2 Analysis tools and software packages

The pipeline will install almost all required tools via Singularity images or conda environments. If preferred, local installations of all tools can be used instead (not recommended; please see Manual installation at the end of this document).

The software that needs to be present on the system: Java (minimum version 8; if running with conda, Java version 17 or higher is needed), Nextflow (see above), Singularity, and Conda (optional).

If you intend to run the pipeline with the conda profile instead of singularity, we recommend installing mamba (https://github.com/mamba-org/mamba) to speed up the creation of conda environments. If you cannot install mamba, please set conda.useMamba = false for the conda profile in conf/profiles.config, as sketched below.
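
For illustration only (the actual conf/profiles.config shipped with nextNEOpi may structure this differently), the setting belongs to the conda profile block:

profiles {
    conda {
        // set to false if mamba cannot be installed on your system
        conda.useMamba = false
    }
}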

Optional but recommended: due to license restrictions you may need to download and install HLA-HD yourself and set the installation path in conf/params.config (see the example below). If HLA-HD is not available, Class II neoepitopes will NOT be predicted.
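
For example (the installation path below is purely illustrative), the corresponding entry in conf/params.config could look like:

params {
    // path to your local HLA-HD installation (illustrative path)
    HLAHD_DIR = "/opt/tools/hlahd.1.4.0"
}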

1.3 References

The pipeline requires different reference files, indexes and databases:

please see conf/resources.config

For each nextNEOpi version we prepared a bundle with all needed references, indexes and databases which can be obtained from:

https://apps-01.i-med.ac.at/resources/nextneopi/

The bundle is named to match the release version: nextNEOpi_<version>_resources.tar.gz

e.g.:

https://apps-01.i-med.ac.at/resources/nextneopi/nextNEOpi_1.4_resources.tar.gz

Download and extract the contents of the archive into the directory you specified as resourcesBaseDir in the conf/params.config file.
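
For example (version number and target path are illustrative; adjust them to your setup):

cd /path/to/resourcesBaseDir
curl -LO https://apps-01.i-med.ac.at/resources/nextneopi/nextNEOpi_1.4_resources.tar.gz
tar -xzf nextNEOpi_1.4_resources.tar.gz
# depending on the archive layout you may need to move the extracted
# databases/, ExomeCaptureKits/ and references/ directories directly into {resourcesBaseDir}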

The structure should look as shown below:

├── {resourcesBaseDir}
    ├── databases
    ├── ExomeCaptureKits
    └── references

Notes

  1. You may also provide your own versions of these files. To do so, please change the conf/resources.config accordingly.
  2. Due to license restrictions, we do not provide a copy of the optional COSMIC database. If you want to include COSMIC data, you may obtain a copy at https://cancer.sanger.ac.uk/cosmic
  3. We provide the region and bait files for the following exome capture kits:
    • SureSelect Human All Exon V6 exome
    • SureSelect Human All Exon V7 exome
    • Twist Human comprehensive exome

You may add your own region and bait files by defining an entry in conf/resources.config


1.4 Testdata

If you want to test the pipeline using a working minimal test dataset, you may download one from:

https://apps-01.i-med.ac.at/resources/nextneopi/nextNEOpi_testdata.tar.gz

Please note that due to the limited read coverage, CNVkit will not run successfully with this test dataset. Please run the pipeline with the parameter --CNVkit false when testing with this dataset, e.g. as shown below.
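
A possible test invocation (the batch file name below is illustrative; use the batch file shipped with the test data) might look like:

NXF_VER=22.10.8 nextflow run nextNEOpi.nf \
    --batchFile testdata_batchFile.csv \
    --accept_license \
    --CNVkit false \
    -config conf/params.config \
    -profile singularity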

2. Usage

Before running the pipeline, the config files in the conf/ directory may need to be edited. Default parameter settings are defined in params.config. The process.config is a template for the configuration of the individual processes; you may check the number of CPUs assigned to each process and adjust it according to your system's capabilities.

Most pipeline parameters can be edited in the params.config file or changed at run time with command line options by using --NameOfTheParameter as given in params.config. References and databases should be edited in the resources.config file.

Note: nextNEOpi is currently written in Nextflow DSL1, which is only supported up to Nextflow version 22.10.8. This means you need to pin the Nextflow version by setting the environment variable NXF_VER=22.10.8 in case you have installed a newer Nextflow version.

NXF_VER=22.10.8 nextflow run nextNEOpi.nf --batchFile <batchFile_FASTQ.csv | batchFile_BAM.csv> -profile singularity|conda,[cluster] [-resume] -config conf/params.config

Profiles: conda or singularity

We highly recommend using either the singularity or the conda profile. You can specify one of the two profiles using the option -profile singularity or -profile conda. This way you do not have to take care of installing all the required software and its dependencies.

Profiles: cluster

We strongly recommend running the pipeline on an HPC cluster. You can enable runs in cluster mode by using a profile named e.g. cluster and the option -profile singularity,cluster or -profile conda,cluster.

For an example SGE cluster profile, please see the profiles in conf/profiles.config. You may uncomment and adjust the cluster profile to match your scheduling system; an illustrative sketch is shown below.
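
As an illustrative sketch only (the example shipped in conf/profiles.config may use different settings), an SGE-based cluster profile could look like:

profiles {
    cluster {
        process.executor = 'sge'
        process.queue    = 'all.q'   // illustrative queue name
        process.penv     = 'smp'     // parallel environment used for multi-CPU tasks
    }
}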

Sequencing data input:

Besides raw reads in FASTQ-formatted files, input data may also be provided in BAM format.

Note: by default, DNA sequencing data will be treated as exome sequencing (WES) data. If whole genome sequencing was used, please specify --WES false on the command line or set WES = false in params.config.

RNA reads from a tag-seq library, i.e. a 3-prime end sequencing protocol

--RNA_tag_seq turns off the "--trna-vaf" and "--trna-cov" filters in the pVACseq epitope filtering. It also turns off HLA typing from RNAseq data, since 3-prime end sequencing does not cover the entire transcript.

Mandatory arguments:

-config nextNEOpi config file, e.g. -config conf/params.config

--accept_license accept the license terms

--MIXCR_lic /path/to/mi.license OR --TCR false : a MiXCR license may be requested at https://licensing.milaboratories.com

--batchFile

Make sure that your batchFile CSV includes the column names shown in the examples below as the header line. See also example_batchFile_FASTQ.csv or example_batchFile_BAM.csv.

FASTQ raw reads

  • e.g.: CSV file with Tumor/Normal WES/WGS, and optional RNAseq reads:
sampleName,reads1,reads2,sampleType,HLAfile,sex
sample1,reads_s1_t_1.fastq.gz,reads_s1_t_2.fastq.gz,tumor_DNA,,female
sample1,reads_s1_n_1.fastq.gz,reads_s1_n_2.fastq.gz,normal_DNA,,female
sample1,reads_s1_r_1.fastq.gz,reads_s1_r_2.fastq.gz,tumor_RNA,,female
sample2,reads_s2_t_1.fastq.gz,reads_s2_t_2.fastq.gz,tumor_DNA,/data/sample2_hla.txt,male
sample2,reads_s2_n_1.fastq.gz,reads_s2_n_2.fastq.gz,normal_DNA,,male
sample2,reads_s2_r_1.fastq.gz,,tumor_RNA,,male
sample3,reads_s3_t_1.fastq.gz,reads_s3_t_2.fastq.gz,tumor_DNA,,female
sample3,reads_s3_n_1.fastq.gz,reads_s3_n_2.fastq.gz,normal_DNA,,female
sample4,reads_s4_t_1_1.fastq.gz,reads_s4_t_1_2.fastq.gz,tumor_DNA,,NA
sample4,reads_s4_t_2_1.fastq.gz,reads_s4_t_2_2.fastq.gz,tumor_DNA,,NA
sample4,reads_s4_n_1.fastq.gz,reads_s4_n_2.fastq.gz,normal_DNA,,NA
sample4,reads_s4_r_1.fastq.gz,reads_s4_r_2.fastq.gz,tumor_RNA,,NA
...
  • CSV fields of the batchFile:
    • reads1: forward reads (can be fastq or gzipped fastq)
    • reads2: reverse reads (if paired-end sequencing was used; empty for single-end)
    • sampleType: one of tumor_DNA, normal_DNA, tumor_RNA
    • HLAfile: optional file with HLA types (default: empty)
    • sex: gender of the sample if known (female, male, xx, xy) or NA if unknown

A sample may have multiple read files for a single sampleType; nextNEOpi will merge them accordingly. As shown in the example above, sample4 has 2 fastq file pairs for the tumor_DNA; in this case reads_s4_t_1_1.fastq.gz will be merged with reads_s4_t_2_1.fastq.gz. The same applies to reads2.

BAM files

Note: if BAM files are used, it is strongly recommended that they also include the unmapped and multi-mapping reads. These reads can be helpful for HLA typing.

  • e.g.: CSV file with Tumor/Normal WES/WGS, and optional RNAseq data:
sampleName,bam,sampleType,HLAfile,sex
sample1,sample1_t.bam,tumor_DNA,,female
sample1,sample1_n.bam,normal_DNA,,female
sample1,sample1_r.bam,tumor_RNA,,female
sample2,sample2_t.bam,tumor_DNA,,male
sample2,sample2_n.bam,normal_DNA,,male

Notes

  • You cannot use samples that have mixed single-end and paired-end DNA reads in tumor and normal. Both the tumor and the normal DNA library type need to be either SE or PE for a given sample.

  • In the HLAfile column a user-supplied HLA types file may be specified for a given sample.

  • The sex column can be "XX", "female" or "Female", "XY", "male" or "Male". If not specified or set to "NA", the gender is inferred from the data.

  • Please make sure your /tmp (or the directory set as $TMPDIR) has plenty of free space (> 50 GB, depending on input data size). You may change the tmp directory used by nextNEOpi to a directory of your choice by setting the parameter tmpDir in params.config or --tmpDir on the command line.

Example run command with batchfile:

nextflow run nextNEOpi.nf \
    --batchFile batchfile.csv \
    -config conf/params.config \
    --outputDir /data/results/nextNEOpi/myResults \
    --trim_adapters true \
    --trim_adapters_RNAseq true \
    --use_NetChop false \
    --tmpDir /my/huge/tmpDir \
    -profile singularity,cluster \
    -resume

Optional arguments:

--trim_adapters If true adapter sequences are automatically determined and will be trimmed from reads. If --adapterSeq (string of adapter sequence) or --adapterSeqFile (fasta file with adapter sequences) is provided then adapters will be used as specified (no automatic detection). Default: false

--trim_adapters_RNAseq If true adapter sequences are automatically determined and will be trimmed from RNAseq reads. If --adapterSeqRNAseq (string of adapter sequence) or --adapterSeqFileRNAseq (fasta file with adapter sequences) is provided then adapters will be used as specified (no automatic detection). Default: false

--adapterSeq String of adapter sequence (see --trim_adapters) --adapterSeqFile Fasta file with adapter sequence(s) (see --trim_adapters)

--adapterSeqRNAseq String of adapter sequence (see --trim_adapters_RNAseq) --adapterSeqFileRNAseq Fasta file with adapter sequence(s) (see --trim_adapters_RNAseq)

--mutect2ponFile Panel of Normals file for Mutect2 (https://gatk.broadinstitute.org/hc/en-us/articles/360035890631-Panel-of-Normals-PON-) Default: false

--priorityCaller Set the variant caller used as the base for the hc variants. Only variants that are confirmed by any of the two confirming callers (e.g. mutect1, varscan) will be retained. M2 = mutect2, M1 = mutect1, VS = varscan, ST = strelka. Default: M2

--minAD Minimum allelic depth (reads covering a variant) Default: 5

--use_NetChop Use NetChop to generate peptides Default: false

--use_NetMHCstab Use NetMHCstab to predict the stability of peptide binding to MHC molecules Default: true

--TCR Run mixcr for TCR prediction Default: true

--CNVkit Run CNVkit for detecting CNAs. Default: true

--HLAHD_DIR Specify the path to your HLA-HD installation. Needed if Class II neoantigens should be predicted.

--HLA_force_RNA Use only RNAseq for HLA typing. Default: false

--HLA_force_DNA Use only WES/WGS for HLA typing. Default: false

--run_HLAHD_RNA Run HLA-HD also on RNAseq. Highly accurate but can be very slow on larger fastq files. Default: false

--disable_OptiType Disable OptiType for HLA typing. If set, HLA-HD or a user supplied custom HLA file must be available (see --HLAHD_DIR and/or HLAfile field in the batchFile)

--pVACseq_filter_set Can be one of [standard, relaxed, custom]. The standard filter set uses the pVACseq default filters. The relaxed filter set filters only for ic50 < 500 & rank < 2 & expn-val > 2. With the filter set custom, users can define a custom set of filters by providing the desired filters (space separated) using the --pVACseq_custom_filters option, e.g. --pVACseq_filter_set custom --pVACseq_custom_filters "--binding-threshold 250 --percentile-threshold 1". For filter options please see also the pVACseq manual. Default: standard

--pVACseq_custom_filters See --pVACseq_filter_set

Further options: there are many more options that can be set in the params.config file or specified on the command line (see conf/params.config).

3. Output

The pipeline stores its output in the following structure:

RESULTS
├── analyses
│   ├── Subject_01
│   │   ├── 01_preprocessing
│   │   ├── 02_alignments
│   │   ├── 03_baserecalibration
│   │   ├── 03_realignment
│   │   ├── 04_expression
│   │   ├── 04_variations
│   │   ├── 05_vep
│   │   ├── 06_proteinseq
│   │   ├── 07_MutationalBurden
│   │   ├── 08_CNVs
│   │   ├── 09_CCF
│   │   ├── 10_HLA_typing
│   │   ├── 11_Fusions
│   │   ├── 12_pVACseq
│   │   ├── 13_mixMHC2pred
│   │   ├── 14_CSiN
│   │   ├── 14_IGS
│   │   ├── 15_BCR_TCR
│   │   └── QC
│   ├── Subject_02
│   │   ├── [...]
│   ├── [...]
│   │   ├── [...]
│   ├── Subject_n
│   │   ├── [...]
├── Documentation
├── neoantigens
│   ├── Subject_ID
│   │   ├── Class_I
│   │   ├── Class_II
│   │   └── Final_HLAcalls
│   ├── Subject_02
│   │   ├── [...]
│   ├── [...]
│   │   ├── [...]
│   ├── Subject_n
│   │   ├── [...]
├── pipeline_info
│   └── icbi
└── supplemental
    ├── 00_prepare_Intervals
    └── 01_prepare_CNVkit

4. Manual installation (not recommended)

If you prefer local installation of the analysis tools please install the following software:

  • FASTQC (Version >= 0.11.8)
  • FASTP (Version >= v0.20.1)
  • JAVA7 (Version 1.7)
  • JAVA8 (Version 1.8)
  • BWA (Version >= 0.7.17)
  • SAMTOOLS (Version >= 1.9)
  • GATK3 (Version 3.8-0)
  • GATK4 (Version >= 4.4.0.0)
  • VARSCAN (Version 2.4.6)
  • MUTECT1 (Version 1.1.7) ---- optional
  • BAMREADCOUNT (Version 0.8.0)
  • VEP (Version v110)
  • BGZIP
  • TABIX
  • BCFTOOLS
  • MANTA
  • STRELKA
  • SAMBAMBA
  • OPTITYPE
  • PYTHON
  • PERL
  • CONDA
  • YARA
  • HLA-HD
  • ALLELECOUNT
  • RSCRIPT (R > 3.6.2)
  • SEQUENZA (3.0)
  • CNVkit

All these tools need to be available via the $PATH environment variable (an illustrative example is shown below). However, you still need Java, Nextflow, Conda and Singularity installed on your system.
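
For example (the tool paths below are illustrative):

export PATH=/opt/tools/bwa-0.7.17:/opt/tools/samtools-1.9/bin:/opt/tools/gatk-4.4.0.0:$PATH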

nextneopi's People

Contributors

abyssum, cavei, gri11, methylnick, riederd


nextneopi's Issues

Bwa terminated with an error exit status (137) -- Execution is retried

Hi again, I am running the pipeline again, but this time I am encountering the error stated in the title. The logs are below.

This is my Slurm log:

```
[-        ] process > merge_fastq                    -
[2b/15d707] process > RegionsBedToIntervalList (R... [100%] 1 of 1, cached: 1 ✔
[0e/398463] process > BaitsBedToIntervalList (Bai... [100%] 1 of 1, cached: 1 ✔
[b9/31a70e] process > preprocessIntervalList (pre... [100%] 1 of 1, cached: 1 ✔
[9c/873265] process > SplitIntervals (SplitInterv... [100%] 1 of 1 ✔
[89/c01f38] process > IntervalListToBed (BedFromI... [100%] 1 of 1, cached: 1 ✔
[1b/11c840] process > ScatteredIntervalListToBed ... [100%] 40 of 40 ✔
[09/3b3516] process > FastQC (sample1 : normal_DNA)  [100%] 6 of 6, cached: 6 ✔
[eb/2c43dc] process > fastp (sample1 : tumor_DNA)    [100%] 6 of 6, cached: 6 ✔
[df/114487] process > FastQC_trimmed (sample1 : n... [100%] 6 of 6, cached: 6 ✔
[-        ] process > make_uBAM                      [  0%] 0 of 4
[43/c4e6c7] process > Bwa (sample1 : tumor_DNA)      [ 20%] 1 of 5, failed: 1...
[-        ] process > merge_uBAM_BAM                 -
[-        ] process > MarkDuplicates                 -
[-        ] process > alignmentMetrics               -
[-        ] process > scatterBaseRecalGATK4          -
[-        ] process > gatherGATK4scsatteredBQSRta... -
[-        ] process > scatterGATK4applyBQSRS         -
[-        ] process > GatherRecalBamFiles            -
[-        ] process > GetPileup                      -
[-        ] process > Mutect2                        -
[-        ] process > gatherMutect2VCFs              -
[-        ] process > FilterMutect2                  -
[-        ] process > HaploTypeCaller                -
[-        ] process > CNNScoreVariants               -
[-        ] process > MergeHaploTypeCallerGermlin... -
[-        ] process > FilterGermlineVariantTranches  -
[-        ] process > IndelRealignerIntervals        -
[-        ] process > GatherRealignedBamFiles        -
[-        ] process > VarscanSomaticScattered        -
[-        ] process > gatherVarscanVCFs              -
[-        ] process > ProcessVarscan                 -
[-        ] process > FilterVarscan                  -
[-        ] process > MergeAndRenameSamplesInVars... -
[-        ] process > MantaSomaticIndels             -
[-        ] process > StrelkaSomatic                 -
[-        ] process > finalizeStrelkaVCF             -
[-        ] process > mkHCsomaticVCF                 -
[-        ] process > VepTab                         -
[-        ] process > mkCombinedVCF                  -
[-        ] process > VEPvcf                         -
[-        ] process > ReadBackedphasing              -
[-        ] process > AlleleCounter                  -
[-        ] process > ConvertAlleleCounts            -
[-        ] process > Ascat                          -
[-        ] process > SequenzaUtils                  -
[-        ] process > gatherSequenzaInput            -
[-        ] process > Sequenza                       -
[7c/3377e8] process > make_CNVkit_access_file (mk... [100%] 1 of 1, cached: 1 ✔
[-        ] process > CNVkit                         -
[-        ] process > Clonality                      -
[-        ] process > MutationalBurden               -
[-        ] process > MutationalBurdenCoding         -
[-        ] process > mhc_extract                    -
[-        ] process > pre_map_hla                    -
[-        ] process > OptiType                       -
[06/30971f] process > pre_map_hla_RNA (sample2)      [100%] 2 of 2, cached: 2 ✔
[bf/65d76b] process > OptiType_RNA (sample1)         [100%] 2 of 2, cached: 2 ✔
[-        ] process > run_hla_hd                     -
[-        ] process > get_vhla                       -
[-        ] process > Neofuse                        -
[-        ] process > publish_NeoFuse                -
[-        ] process > add_geneID                     -
[-        ] process > gene_annotator                 -
[-        ] process > pVACseq                        -
[-        ] process > concat_pVACseq_files           -
[-        ] process > aggregated_reports             -
[-        ] process > pVACtools_generate_protein_seq -
[-        ] process > pepare_mixMHC2_seq             -
[-        ] process > mixMHC2pred                    -
[-        ] process > addCCF                         -
[-        ] process > make_epitopes_fasta            -
[-        ] process > blast_epitopes                 -
[-        ] process > add_blast_hits                 -
[-        ] process > csin                           -
[-        ] process > immunogenicity_scoring         -
[2e/46c154] process > mixcr (sample1 : tumor_RNA)    [ 33%] 2 of 6, cached: 2
[-        ] process > collectSampleInfo              -
[-        ] process > multiQC                        -
[29/6fb93c] NOTE: Process `Bwa (sample2 : tumor_DNA)` terminated with an error exit status (137) -- Execution is retried (1)
```

This is the Nextflow log:
```
~> TaskHandler[id: 28; name: Bwa (sample1 : normal_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/15a0370e23d72ed33088db22c6a446]
~> TaskHandler[id: 20; name: Bwa (sample2 : tumor_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/29/6fb93cda2f5fe0f762967a3d788ae8]
Sep-25 15:39:33.952 [Task submitter] DEBUG n.processor.TaskPollingMonitor - %% executor local > tasks in the submission queue: 10 -- tasks to be submitted are shown below
~> TaskHandler[id: 26; name: make_uBAM (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/79/143b9f1f74e8db822efe77d3f7c653]
~> TaskHandler[id: 29; name: make_uBAM (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/a2/a089c415fad2d56cf6abc00f657ffe]
~> TaskHandler[id: 18; name: make_uBAM (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/d5/664efa51d24be72c37b38855b69b6a]
~> TaskHandler[id: 19; name: make_uBAM (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/c7/73aaf11c16275e3ef7678d1810ac0e]
~> TaskHandler[id: 17; name: Bwa (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/43/c4e6c79262aaab65bb5f5df8930360]
~> TaskHandler[id: 25; name: Bwa (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/46/9bea87330f9e6abb241f98f73bc426]
~> TaskHandler[id: 38; name: mixcr (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/52/ecf74a82aff60aa6a7321807b1d01c]
~> TaskHandler[id: 42; name: mixcr (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/fd7b1b2f102fcb967e9d6ef665ae4f]
~> TaskHandler[id: 41; name: mixcr (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/e3/ac5451a02a26825da75ea085a49583]
~> TaskHandler[id: 37; name: mixcr (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/0b/db949982d4b866e1a4ef77e75dc083]
Sep-25 15:44:21.547 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 20; name: Bwa (sample2 : tumor_DNA); status: COMPLETED; exit: 137; error: -; workDir: /QRISdata/Q6373/results/results_1/work/29/6fb93cda2f5fe0f762967a3d788ae8]
Sep-25 15:44:21.554 [Task monitor] INFO  nextflow.processor.TaskProcessor - [29/6fb93c] NOTE: Process `Bwa (sample2 : tumor_DNA)` terminated with an error exit status (137) -- Execution is retried (1)
Sep-25 15:44:21.560 [Task submitter] DEBUG n.executor.local.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Sep-25 15:44:21.561 [Task submitter] INFO  nextflow.Session - [43/c4e6c7] Submitted process > Bwa (sample1 : tumor_DNA)
Sep-25 15:44:33.140 [Task monitor] DEBUG n.processor.TaskPollingMonitor - !! executor local > tasks to be completed: 2 -- submitted tasks are shown below
~> TaskHandler[id: 28; name: Bwa (sample1 : normal_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/15a0370e23d72ed33088db22c6a446]
~> TaskHandler[id: 17; name: Bwa (sample1 : tumor_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/43/c4e6c79262aaab65bb5f5df8930360]
Sep-25 15:44:34.563 [Task submitter] DEBUG n.processor.TaskPollingMonitor - %% executor local > tasks in the submission queue: 10 -- tasks to be submitted are shown below
~> TaskHandler[id: 26; name: make_uBAM (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/79/143b9f1f74e8db822efe77d3f7c653]
~> TaskHandler[id: 29; name: make_uBAM (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/a2/a089c415fad2d56cf6abc00f657ffe]
~> TaskHandler[id: 18; name: make_uBAM (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/d5/664efa51d24be72c37b38855b69b6a]
~> TaskHandler[id: 19; name: make_uBAM (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/c7/73aaf11c16275e3ef7678d1810ac0e]
~> TaskHandler[id: 25; name: Bwa (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/46/9bea87330f9e6abb241f98f73bc426]
~> TaskHandler[id: 38; name: mixcr (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/52/ecf74a82aff60aa6a7321807b1d01c]
~> TaskHandler[id: 42; name: mixcr (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/fd7b1b2f102fcb967e9d6ef665ae4f]
~> TaskHandler[id: 41; name: mixcr (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/e3/ac5451a02a26825da75ea085a49583]
~> TaskHandler[id: 37; name: mixcr (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/0b/db949982d4b866e1a4ef77e75dc083]
~> TaskHandler[id: 83; name: Bwa (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/a6/cb11d2e03fcbad604abc04ac742277]
Sep-25 15:49:33.151 [Task monitor] DEBUG n.processor.TaskPollingMonitor - !! executor local > tasks to be completed: 2 -- submitted tasks are shown below
~> TaskHandler[id: 28; name: Bwa (sample1 : normal_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/15a0370e23d72ed33088db22c6a446]
~> TaskHandler[id: 17; name: Bwa (sample1 : tumor_DNA); status: RUNNING; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/43/c4e6c79262aaab65bb5f5df8930360]
Sep-25 15:49:35.150 [Task submitter] DEBUG n.processor.TaskPollingMonitor - %% executor local > tasks in the submission queue: 10 -- tasks to be submitted are shown below
~> TaskHandler[id: 26; name: make_uBAM (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/79/143b9f1f74e8db822efe77d3f7c653]
~> TaskHandler[id: 29; name: make_uBAM (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/a2/a089c415fad2d56cf6abc00f657ffe]
~> TaskHandler[id: 18; name: make_uBAM (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/d5/664efa51d24be72c37b38855b69b6a]
~> TaskHandler[id: 19; name: make_uBAM (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/c7/73aaf11c16275e3ef7678d1810ac0e]
~> TaskHandler[id: 25; name: Bwa (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/46/9bea87330f9e6abb241f98f73bc426]
~> TaskHandler[id: 38; name: mixcr (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/52/ecf74a82aff60aa6a7321807b1d01c]
~> TaskHandler[id: 42; name: mixcr (sample1 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/fd7b1b2f102fcb967e9d6ef665ae4f]
~> TaskHandler[id: 41; name: mixcr (sample2 : normal_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/e3/ac5451a02a26825da75ea085a49583]
~> TaskHandler[id: 37; name: mixcr (sample1 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/0b/db949982d4b866e1a4ef77e75dc083]
~> TaskHandler[id: 83; name: Bwa (sample2 : tumor_DNA); status: NEW; exit: -; error: -; workDir: /QRISdata/Q6373/results/results_1/work/a6/cb11d2e03fcbad604abc04ac742277]
Sep-25 15:49:39.034 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 28; name: Bwa (sample1 : normal_DNA); status: COMPLETED; exit: 137; error: -; workDir: /QRISdata/Q6373/results/results_1/work/f3/15a0370e23d72ed33088db22c6a446]
Sep-25 15:49:39.035 [Task monitor] INFO  nextflow.processor.TaskProcessor - [f3/15a037] NOTE: Process `Bwa (sample1 : normal_DNA)` terminated with an error exit status (137) -- Execution is retried (1)
Sep-25 15:49:39.062 [Task submitter] DEBUG n.executor.local.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Sep-25 15:49:39.062 [Task submitter] INFO  nextflow.Session - [46/9bea87] Submitted process > Bwa (sample2 : normal_DNA)
```

I went through these working directories but there are no other helpful logs available. Many thanks!

CNVkit fails on TESLA dataset patient1 and patient3

Hi,
I am running the entire nextNEOpi pipeline on the TESLA dataset (https://doi.org/10.1016/j.cell.2020.09.015). In your paper it is mentioned that you ran it on this dataset to validate the pipeline. Have you encountered any issues with CNVkit? I ran it before on WES data of a clear cell renal cell carcinoma tumor and matched normal (https://www.ebi.ac.uk/ena/browser/view/SAMEA4074323) and did not encounter any issues. With patient1 and patient3 from TESLA I am getting the following error:

Command error:
    File "/usr/local/lib/python3.9/site-packages/pandas/core/computation/expressions.py", line 235, in evaluate
      return _evaluate(op, op_str, a, b)  # type: ignore[misc]
    File "/usr/local/lib/python3.9/site-packages/pandas/core/computation/expressions.py", line 69, in _evaluate_standard
      return op(a, b)
  TypeError: unsupported operand type(s) for -: 'int' and 'str'
  
  During handling of the above exception, another exception occurred:
  
  Traceback (most recent call last):
    File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
      r = call_item.fn(*call_item.args, **call_item.kwargs)
    File "/usr/local/lib/python3.9/site-packages/cnvlib/batch.py", line 157, in batch_write_coverage
      cnarr = coverage.do_coverage(bed_fname, bam_fname, by_count, 0, processes, fasta)
    File "/usr/local/lib/python3.9/site-packages/cnvlib/coverage.py", line 27, in do_coverage
      cnarr = interval_coverages(bed_fname, bam_fname, by_count, min_mapq,
    File "/usr/local/lib/python3.9/site-packages/cnvlib/coverage.py", line 57, in interval_coverages
      table = interval_coverages_pileup(bed_fname, bam_fname, min_mapq,
    File "/usr/local/lib/python3.9/site-packages/cnvlib/coverage.py", line 170, in interval_coverages_pileup
      spans = table.end - table.start
    File "/usr/local/lib/python3.9/site-packages/pandas/core/ops/common.py", line 65, in new_method
      return method(self, other)
    File "/usr/local/lib/python3.9/site-packages/pandas/core/arraylike.py", line 97, in __sub__
      return self._arith_method(other, operator.sub)
    File "/usr/local/lib/python3.9/site-packages/pandas/core/series.py", line 4998, in _arith_method
      result = ops.arithmetic_op(lvalues, rvalues, op)
    File "/usr/local/lib/python3.9/site-packages/pandas/core/ops/array_ops.py", line 189, in arithmetic_op
      res_values = _na_arithmetic_op(lvalues, rvalues, op)
    File "/usr/local/lib/python3.9/site-packages/pandas/core/ops/array_ops.py", line 149, in _na_arithmetic_op
      result = _masked_arith_op(left, right, op)
    File "/usr/local/lib/python3.9/site-packages/pandas/core/ops/array_ops.py", line 91, in _masked_arith_op
      result[mask] = op(xrav[mask], yrav[mask])
  TypeError: unsupported operand type(s) for -: 'int' and 'str'
  """
  
  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "/usr/local/bin/cnvkit.py", line 9, in <module>
      args.func(args)
    File "/usr/local/lib/python3.9/site-packages/cnvlib/commands.py", line 110, in _cmd_batch
      args.reference, args.targets, args.antitargets = batch.batch_make_reference(
    File "/usr/local/lib/python3.9/site-packages/cnvlib/batch.py", line 139, in batch_make_reference
      target_fnames = [tf.result() for tf in tgt_futures]
    File "/usr/local/lib/python3.9/site-packages/cnvlib/batch.py", line 139, in <listcomp>
      target_fnames = [tf.result() for tf in tgt_futures]
    File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 438, in result
      return self.__get_result()
    File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
      raise self._exception
  TypeError: unsupported operand type(s) for -: 'int' and 'str'

command.err attached
err.txt

In terms of debugging, I started running the commands one by one in the depot.galaxyproject.org-singularity-cnvkit-0.9.9--pyhdfd78af_0.img image, using the singularity options provided in the config files. CNVkit already fails on the first command, which for me was:

cnvkit.py \
    batch \
    patient1_tumor_DNA_aligned_sort_mkdp.bam \
    --normal patient1_normal_DNA_aligned_sort_mkdp.bam \
    --method hybrid \
    --targets S07604514_Covered.bed \
    --fasta GRCh38.d1.vd1.fa \
    --annotate gencode.v33.primary_assembly.annotation.gtf \
    --access access-5kb.GRCh38.bed \
     \
    -p 16 \
    --output-reference output_reference.cnn \
    --output-dir ./

Would you happen to know where to start investigating this issue? Have you come across something similar? Alternatively, if I checked the nextNEOpi.nf file correctly, the output channel of CNVkit (CNVkit_out_ch0) is not used anywhere else (if I understand correctly, ploidy is calculated using the ASCAT and Sequenza output). Currently we are only interested in the pVACseq results from SNVs/short INDELs, the expression of the corresponding genes, and maybe purity/ploidy, but not in CNVs. If it is too difficult to investigate and make CNVkit work, I would be keen not to use it for now, but could you recommend the best practice to do that? I have not seen any available settings in the config files that would take care of that.

Error Unknown method

Hi! I tried to use nextNEOpi and received the following error:

Unknown method div on ConfigObject type

My code:
nextflow run nextNEOpi.nf --readsTumor /work/Projects/Neoantigens_nextNEOpi/testdata/Tumor_ERR8313719_1.fastq /work/Projects/Neoantigens_nextNEOpi/testdata/Tumor_ERR8313719_2.fastq --readsNormal /work/Projects/Neoantigens_nextNEOpi/testdata/Normal_ERR8313721_1.fastq /work/Projects/Neoantigens_nextNEOpi/testdata/Normal_ERR8313721_2.fastq -config conf/params.config --outputDir /data/results/nextNEOpi/myResults --trim_adapters true --use_NetChop false -profile conda,cluster -resume

Can you please help me to solve this issue?

Thank you!! Best, GH

immunogenicity_scoring

Good evening,

The pipeline completed but immunogenicity_scoring fails... I was wondering if you've been able to resolve the
"// TODO: check why sometimes this fails: workaround ignore errors
errorStrategy 'ignore'"
part?

I suspect we're stuck at the same bit here. Happy to send more information if useful.
BW

SV

Pipeline complete but missing output file(s) for immunogenicity scoring

Hello, I managed to run the pipeline on the sample test data, but unfortunately it is missing one of the output files expected by the process immunogenicity_scoring:

[f4/f6a60c] NOTE: Missing output file(s) `sample1_Class_I_immunogenicity.tsv` expected by process `immunogenicity_scoring (sample1)` -- Error is ignored

Sep-15 11:12:19.578 [Task submitter] INFO  nextflow.Session - [f4/f6a60c] Submitted process > immunogenicity_scoring (sample1)
Sep-15 11:12:20.528 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 557; name: immunogenicity_scoring (sample1); status: COMPLETED; exit: 0; error: -; workDir: /home/uqachoo1/nextNEOpi-1.4.0/work/f4/f6a60c58002a7237fff929c7d11a25]
Sep-15 11:12:20.528 [Task monitor] DEBUG nextflow.processor.TaskProcessor - Process `immunogenicity_scoring (sample1)` is unable to find [UnixPath]: `/home/uqachoo1/nextNEOpi-1.4.0/work/f4/f6a60c58002a7237fff929c7d11a25/sample1_Class_I_immunogenicity.tsv` (pattern: `sample1_Class_I_immunogenicity.tsv`)
Sep-15 11:12:20.533 [Task monitor] INFO  nextflow.processor.TaskProcessor - [f4/f6a60c] NOTE: Missing output file(s) `sample1_Class_I_immunogenicity.tsv` expected by process `immunogenicity_scoring (sample1)` -- Error is ignored

Installation fails due to unsatisfiable constraints

The pipeline fails on a newly set-up Ubuntu single-node server (AWS EC2 instance).
The execution command:

nextflow run nextNEOpi.nf --batchFile batchFile_s3_FASTQ.csv -config conf/params.config --outputDir /home/ubuntu/nextNEOpi_results -profile conda --accept_license 

The error message:

Error executing process > 'make_CNVkit_access_file (mkCNVkitaccess)' 
...
UnsatisfiableError: The following specifications were found to be incompatible with each other:
...

The installed software and OS versions:

(base) Thu Dec 09 ~/nextNEOpi $ nextflow -version

      N E X T F L O W
      version 21.10.5 build 5658
      created 08-12-2021 14:39 UTC 
      cite doi:10.1038/nbt.3820
      http://nextflow.io

(base) Thu Dec 09 ~/nextNEOpi $ java --version
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)
(base) Thu Dec 09 ~/nextNEOpi $ conda --version
conda 4.10.3
(base) Thu Dec 09 ~/nextNEOpi $ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Output file descriptions

Hi,

I have 2 questions:

1- Is there better documentation of the output files?

2- In the "neoantigens/sample1/Class_II/" directory the following files are missing:
sample1_MHC_Class_I_all_epitopes_ccf_ref_match.tsv, sample1_MHC_Class_I_filtered_ccf_ref_match.tsv.
Do you know why that is?

Rerunning the pipeline from pVACseq

Hi,

I was wondering if there is a way to run the nextNEOpi pipeline just from the pVACseq step? I want to run it first with standard and then with relaxed filtering. I changed the setting in params.config and used the -resume option with the lenient cache, but it went all the way back to alignment.
I would appreciate your input!

Best wishes,

Magda

Issue with

Hi,
Thanks for your tool!
I'm having trouble running a small test batch where I get the error:

[main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: merged_fastq_ch for class: Script_39ee2431

when supplying paired-end fastq files. The supplied fastq files seem to be properly formatted and properly paired.
Attached is the log file. Any help would be appreciated! Thank you!
Ryan

nextflow.log

pVACseq fails on calculating manufacturability for MHC class II

Hi,
I am running the entire nextNEOpi pipeline on the TESLA dataset (https://doi.org/10.1016/j.cell.2020.09.015). Currently I'm experiencing issues with pVACseq for data from patient1, specifically for some MHC class II alleles. I ran it before on WES data of a clear cell renal cell carcinoma tumor and matched normal (https://www.ebi.ac.uk/ena/browser/view/SAMEA4074323) and did not encounter any issues.

For patient1, the MHC class I alleles worked fine. Some MHC class II alleles were skipped (e.g. Allele DQA1*05:05 not valid for Method NetMHCIIpan. Skipping.), but for the ones that weren't skipped, pVACseq ended with the error below. I tested it separately in a container, on a 150 GB disk.

  File "/opt/conda/bin/pvacseq", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/pvactools/tools/pvacseq/main.py", line 116, in main
    args[0].func.main(args[1])
  File "/opt/conda/lib/python3.8/site-packages/pvactools/tools/pvacseq/run.py", line 158, in main
    pipeline.execute()
  File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/pipeline.py", line 506, in execute
    PostProcessor(**post_processing_params).execute()
  File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/post_processor.py", line 32, in execute
    self.calculate_manufacturability()
  File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/post_processor.py", line 55, in calculate_manufacturability
    CalculateManufacturability(self.input_file, self.manufacturability_fh.name, self.file_type).execute()
  File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/calculate_manufacturability.py", line 63, in execute
    writer.writerow(line)
  File "/opt/conda/lib/python3.8/csv.py", line 154, in writerow
    return self.writer.writerow(self._dict_to_list(rowdict))
OSError: [Errno 28] No space left on device

Logs from the nextNEOpi pipeline attached
command.err.txt
command.log.txt
command.out.txt
command.run.txt
command.sh.txt

Error running NeoFuse- cannot find models_class1_pan/models.combined/manifest.csv

Hi,
Thank you for this amazing pipeline! I am currently running it on WES and RNA-seq data and I'm having trouble running NeoFuse.
The content of command.log is as follows:

INFO:    Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
INFO:    Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
INFO:    fuse: warning: library too old, some operations may not work
[-------------------------------- [NeoFuse] --------------------------------]

[NeoFuse]  Paired End (PE) Reads detected: commencing processing
[NeoFuse]  Processing files TESLA_3_1.fastq.gz - TESLA_3_2.fastq.gz
[NeoFuse]  STAR Run started at: 16:13:03
[NeoFuse]  Arriba Run started at: 16:13:03
[NeoFuse]  Parsing custom HLA list: 18:08:02
[NeoFuse]  featureCounts Run started at: 18:08:02
[NeoFuse]  Converting Raw Counts to TPM and FPKM: 18:09:38
[NeoFuse]  Searching for MHC I peptides of length 8 9 10 11 : 18:09:39
[NeoFuse]  Searching for MHC II peptides of length 15 16 17 18 19 20 21 22 23 24 25 : 18:09:39
[NeoFuse]  MHCFlurry Run started at: 18:09:39
An error occured while creating the MHCFlurry temp files, check ./patient1/LOGS/patient1_MHCI_final.log for more details

The content of ./patient1/LOGS/patient1_MHCI_final.log:

Traceback (most recent call last):
  File "/usr/local/bin/source/build_temp.py", line 122, in <module>
    final_out(inFile, outFile)
  File "/usr/local/bin/source/build_temp.py", line 61, in final_out
    with open(assoc_file) as csv_file:
FileNotFoundError: [Errno 2] No such file or directory: './patient1/NeoFuse/tmp/MHC_I/patient1_8_NUP133_ABCB10_1_8.tsv'

I also checked the content of the patient1_X_MHCFlurry.log files. They all say:

Traceback (most recent call last):
  File "/usr/local/bin//mhcflurry-predict", line 8, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.6/dist-packages/mhcflurry/predict_command.py", line 207, in run
    affinity_predictor = Class1AffinityPredictor.load(models_dir)
  File "/usr/local/lib/python3.6/dist-packages/mhcflurry/class1_affinity_predictor.py", line 480, in load
    manifest_df = pandas.read_csv(manifest_path, nrows=max_models)
  File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 688, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 454, in _read
    parser = TextFileReader(fp_or_buf, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 948, in __init__
    self._make_engine(self.engine)
  File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 1180, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 2010, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 382, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 674, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: [Errno 2] No such file or directory: '/home/neofuse/.local/share/mhcflurry/4/2.0.0/models_class1_pan/models.combined/manifest.csv'

To debug this, I downloaded the current NeoFuse container from https://github.com/icbi-lab/NeoFuse, but the run with it resulted in the same errors. Is it possible that the default mhcflurry models changed? I'm not that familiar with mhcflurry or NeoFuse - maybe you could point me in the right direction?

Yara DNA vs RNA DBs

Hi!

I am trying to utilize your pipeline with hg19 instead of GRCh38. Because of this, I am trying to recreate the yara references, but I have never used this tool before, so I do not know exactly what to do. Mainly, my concern is the difference between the RNA and DNA references for yara. I had assumed I could simply run

yara_indexer $RefFasta

but this would only give me one database, I'd assume for DNA, not RNA. Why are two databases needed and how can I make them for hg19?

-resume goes all the way back to SplitIntervals, make_uBAM and Bwa

Hi,
I was wondering if there is anything that could be done to improve Nextflow's -resume functionality in the pipeline. I resumed a run that failed on pVACseq, but the pipeline still went all the way back to SplitIntervals, make_uBAM and Bwa. It used cached tasks only for some quality control processes and HLA typing (see below). Happy to provide more details if necessary.
Best wishes,
Magda

[-        ] process > merge_fastq                    -
[3e/eb6cb3] process > RegionsBedToIntervalList (R... [100%] 1 of 1, cached: 1 ✔
[6f/a525c5] process > BaitsBedToIntervalList (Bai... [100%] 1 of 1, cached: 1 ✔
[4f/0880a3] process > preprocessIntervalList (pre... [100%] 1 of 1, cached: 1 ✔
[3f/f24619] process > SplitIntervals (SplitInterv... [100%] 1 of 1 ✔
[ff/b46504] process > IntervalListToBed (BedFromI... [100%] 1 of 1, cached: 1 ✔
[99/4b767f] process > ScatteredIntervalListToBed ... [100%] 40 of 40 ✔
[e6/87b50e] process > FastQC (patient1 : tumor_RNA)  [100%] 3 of 3, cached: 3 ✔
[a3/c6d8da] process > fastp (patient1 : tumor_DNA)   [100%] 2 of 2, cached: 2 ✔
[ab/6e40c4] process > FastQC_trimmed (patient1 : ... [100%] 2 of 2, cached: 2 ✔
[56/7cbddd] process > make_uBAM (patient1 : norma... [  0%] 0 of 2
[e8/2a5ab0] process > Bwa (patient1 : normal_DNA)    [  0%] 0 of 2
[-        ] process > merge_uBAM_BAM                 -
[-        ] process > MarkDuplicates                 -
[-        ] process > alignmentMetrics               -
[-        ] process > scatterBaseRecalGATK4          -
[-        ] process > gatherGATK4scsatteredBQSRta... -
[-        ] process > scatterGATK4applyBQSRS         -
[-        ] process > GatherRecalBamFiles            -
[-        ] process > GetPileup                      -
[c9/8805c3] process > install_conda_GATK3 (instal... [100%] 1 of 1, cached: 1 ✔
[-        ] process > Mutect2                        -
[-        ] process > gatherMutect2VCFs              -
[-        ] process > FilterMutect2                  -
[-        ] process > HaploTypeCaller                -
[-        ] process > CNNScoreVariants               -
[-        ] process > MergeHaploTypeCallerGermlin... -
[-        ] process > FilterGermlineVariantTranches  -
[-        ] process > IndelRealignerIntervals        -
[-        ] process > GatherRealignedBamFiles        -
[-        ] process > VarscanSomaticScattered        -
[-        ] process > gatherVarscanVCFs              -
[-        ] process > ProcessVarscan                 -
[-        ] process > FilterVarscan                  -
[-        ] process > MergeAndRenameSamplesInVars... -
[-        ] process > MantaSomaticIndels             -
[-        ] process > StrelkaSomatic                 -
[-        ] process > finalizeStrelkaVCF             -
[-        ] process > mkHCsomaticVCF                 -
[-        ] process > VepTab                         -
[-        ] process > mkCombinedVCF                  -
[-        ] process > VEPvcf                         -
[-        ] process > ReadBackedphasing              -
[-        ] process > AlleleCounter                  -
[-        ] process > ConvertAlleleCounts            -
[-        ] process > Ascat                          -
[-        ] process > SequenzaUtils                  -
[-        ] process > gatherSequenzaInput            -
[-        ] process > Sequenza                       -
[4e/66fcfd] process > make_CNVkit_access_file (mk... [100%] 1 of 1, cached: 1 ✔
[-        ] process > Clonality                      -
[-        ] process > MutationalBurden               -
[-        ] process > MutationalBurdenCoding         -
[-        ] process > mhc_extract                    -
[-        ] process > pre_map_hla                    -
[-        ] process > OptiType                       -
[77/556919] process > pre_map_hla_RNA (patient1)     [100%] 1 of 1, cached: 1 ✔
[14/000db6] process > OptiType_RNA (patient1)        [100%] 1 of 1, cached: 1 ✔
[-        ] process > run_hla_hd                     -
[-        ] process > get_vhla                       -
[-        ] process > Neofuse                        -
[-        ] process > publish_NeoFuse                -
[-        ] process > add_geneID                     -
[-        ] process > gene_annotator                 -
[-        ] process > pVACseq                        -
[-        ] process > concat_pVACseq_files           -
[-        ] process > aggregated_reports             -
[-        ] process > pVACtools_generate_protein_seq -
[-        ] process > pepare_mixMHC2_seq             -
[-        ] process > mixMHC2pred                    -
[-        ] process > addCCF                         -
[-        ] process > make_epitopes_fasta            -
[-        ] process > blast_epitopes                 -
[-        ] process > add_blast_hits                 -
[-        ] process > csin                           -
[-        ] process > immunogenicity_scoring         -
[-        ] process > collectSampleInfo              -
[-        ] process > multiQC                        -

Error executing process

Hi,

I am a fresh user of nextNEOpi and excited to explore this well-built pipeline! But I ran into some errors when running my test dataset.

The Error:

Error executing process > 'fastp (Patient353_T1star : tumor_RNA)'

Caused by:
  No signature of method: java.lang.Boolean.getVal() is applicable for argument types: () values: []
Possible solutions: getAt(java.lang.String), getClass(), equals(java.lang.Object), equals(java.lang.Object), tap(groovy.lang.Closure), putAt(java.lang.String, java.lang.Object)

Source block:
  def reads_R1         = "--in1 " + reads[0]
  def trimmed_reads_R1 = "--out1 " + meta.sampleName + "_" + meta.sampleType + "_trimmed_R1.fastq.gz"
  def reads_R2         = ""
  def trimmed_reads_R2 = ""
  if(meta.libType == "PE") {
              reads_R2          = "--in2 " + reads[1]
              trimmed_reads_R2  = "--out2 " + meta.sampleName + "_" + meta.sampleType + "_trimmed_R2.fastq.gz"
          }
  def fastpAdapter = ''
  def adapterSeqFile
  def aseq = false
  def aseqR2 = false
  def afile = false
  if (meta.sampleType.indexOf("DNA") > 0) {
              afile = params.adapterSeqFile
              aseq = params.adapterSeq
              aseqR2 = params.adapterSeqR2
          } else {
               afile = params.adapterSeqFileRNAseq
               aseq = params.adapterSeqRNAseq
               aseqR2 = params.adapterSeqR2RNAseq
          }
  if(afile != false) {
              adapterSeqFile = Channel.fromPath(afile)
              fastpAdapter = "--adapter_fasta " + adapterSeqFile
          } else {
              if(aseq != false) {
                  adapterSeq   = Channel.value(aseq)
                  fastpAdapter = "--adapter_sequence " + aseq.getVal()
  
                  if(aseqR2 != false && meta.libType == "PE") {
                      adapterSeqR2   = Channel.value(aseqR2)
                      fastpAdapter += " --adapter_sequence_r2 " + adapterSeqR2.getVal()
                  }
              }
          }
  """
          fastp --thread ${task.cpus} \\
              ${reads_R1} \\
              ${reads_R2} \\
              ${trimmed_reads_R1} \\
              ${trimmed_reads_R2} \\
              --json ${meta.sampleName}_${meta.sampleType}_fastp.json \\
              ${fastpAdapter} \\
              ${params.fastpOpts}
          """

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`

The command I used:

nextflow run /scratch/u/kfang/Software/nextNEOpi/nextNEOpi.nf --batchFile test2.csv -config conf/params.config --trim_adapters true --trim_adapters_RNAseq true -profile conda -resume --accept_license --MIXCR_lic /scratch/u/kfang/Software/nextNEOpi/resources/mi.license

The files in current folder:

(nextflow-22.04.0)[kfang@ln01 test]$ ls
conf
OSU13053-0000353-N-0192630-HYB729-IDTExome-20180816-NCHHiSeq_S1_L001002_R1_P001.fastq.gz
OSU13053-0000353-N-0192630-HYB729-IDTExome-20180816-NCHHiSeq_S1_L001002_R2_P001.fastq.gz
OSU13053-0000353-T-0194071-HYB729-IDTExome-20180816-NCHHiSeq_S6_L001002_R1_P001.fastq.gz
OSU13053-0000353-T-0194071-HYB729-IDTExome-20180816-NCHHiSeq_S6_L001002_R2_P001.fastq.gz
OSU13053-0000353-T-0204980-HYB00853-Txome-20190612-Illumina_S1_L004_R1_001.fastq.gz
OSU13053-0000353-T-0204980-HYB00853-Txome-20190612-Illumina_S1_L004_R2_001.fastq.gz
test2.csv

(nextflow-22.04.0)[kfang@ln01 test]$ ls conf/
params.config  process.config  profiles.config  resources.config

The content of conf/params.config:

params {

  help  = false
  email = false
  name  = false
  plaintext_email = false

  //
  // resourcesBaseDir: Default is "resources" in the pipelines directory
  //
  // Please change to a directory that has space for at least 60 GB of data
  // Download the resources file from https://apps-01.i-med.ac.at/resources/nextneopi/nextNEOpi_resources.tar.gz
  // and uncompress the archive into "resourcesBaseDir"
  // default: <nextNEOpi_dir>/resources
  resourcesBaseDir = projectDir.toRealPath() + "/resources"

  // RNA sequencing library type
  RNA_tag_seq   = false     // is RNA data from tag sequencing i.e. 3-prime seq
                            // if true then pVACseq tRNA_vaf filter is set to 0.0

  WES             = true  // if false assume WGS, attention long runtimes
  exomeCaptureKit = "sureSelectV6" // default exomeCaptureKit sure select V6, see resources.config to add more
  readLength      = 150

  trim_adapters        = true
  adapterSeq           = true // "AGATCGGAAGAG" Illumina Universal Adapter
  adapterSeqR2         = true
  adapterSeqFile       = false // fasta file with adapter seqs

  trim_adapters_RNAseq = true
  adapterSeqRNAseq     = true // "AGATCGGAAGAG"  Illumina Universal Adapter
  adapterSeqR2RNAseq   = true
  adapterSeqFileRNAseq = false // fasta file with adapter seqs

  // extra options for fastp
  fastpOpts    = ""

  // HLA typing options
  disable_OptiType  = false // Disable OptiType for HLA typing. If set to true, HLA-HD or a user
                            // supplied custom HLA file must be available.
                            // (see --HLAHD_DIR and/or --customHLA)
  run_HLAHD_RNA = false     // run HLA-HD on RNA data.
                            // It is highly accurate but tends to be very slow on larger fastq files

  HLA_force_RNA = false  // use only RNAseq for HLA typing
  HLA_force_DNA = false  // use only WES/WGS for HLA typing


  // run controlFREEC
  controlFREEC = false
// Panel of normals (see: https://gatk.broadinstitute.org/hc/en-us/articles/360040510131-CreateSomaticPanelOfNormals-BETA-)
  mutect2ponFile = 'NO_FILE'

  primaryCaller = "M2" // set the variant caller used as base for the hc variants.
                        // Only variants that are confirmed by any of the two confirming
                        // callers (e.g. mutect1, varscan) will be retained
                        // any of: M2 = mutect2, M1 = mutect1, VS = varscan, ST = strelka

  // CCF estimation
  use_sequenza_cnvs = false // use CNVs and purity from Sequenza for CCF estimation
                            // default: ASCAT with fall back to Sequenza
  CCFthreshold = 0.95       // threshold clonality
  pClonal = 0.9             // min probability for clonality


  // Directories (need to be in quotes)
  tmpDir          = "/scratch/u/kfang/ChenHZ_lab/Neoantigen/test/tmp"  // Please make sure that there is enough free space (~ 50G)
  workDir         = "$PWD"
  outputDir       = "${workDir}/RESULTS"

  // Result publishing method
  publishDirMode  = "auto" // Choose between:
                           // "auto" - if possible use link otherwise copy
                           // "symlink" - absolute path
                           // "rellink" - relative path
                           // "link" - hard link
                           // "copy"
                           // "copyNoFollow" - copying without following symlink

  fullOutput      = false  // enable full output in outputDir, adds some intermediate results, for debug mainly: default false

  tracedir         = "${params.outputDir}/pipeline_info"
  manifest.version = '1.0'

  // Software locations

  // ONLY if not using conda or singularity, please specify the path to the VarScan2 jar
  VARSCAN         = "" // /path to VarScan jar  // Version: 2.4.3

  // optional: specify path to mutect1 jar and the JAVA7 executable.
  // https://software.broadinstitute.org/cancer/cga/mutect_download
  // https://download.java.net/openjdk/jdk7u75/ri/jdk_ri-7u75-b13-linux-x64-18_dec_2014.tar.gz
  MUTECT1         = "" //  path to mutect-1.1.7.jar  // Version: 1.1.7
  JAVA7           = "" // path to jdk7 bin java // Version 1.7

  // optional but highly recommended GATK3
  // ONLY if not using conda or singularity, please specify the path to the GATK3 jar
  // https://console.cloud.google.com/storage/browser/_details/gatk-software/package-archive/gatk/GenomeAnalysisTK-3.8-1-0-gf15c1c3ef.tar.bz2
  GATK3           = "" // path to GATK3 GenomeAnalysisTK.jar // Version 3.8-0

  // ONLY if not using conda or singularity, please specify the path to the JAVA8 executable
  // https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
  JAVA8           = "" // path to jdk8 bin java // Version 1.8

  // REQUIRED: Path to the installation directory of HLA-HD
  // Please install HLA-HD locally, you can get your own copy of HLA-HD at:
  // https://www.genome.med.kyoto-u.ac.jp/HLA-HD/
  HLAHD_DIR             = "/scratch/u/kfang/Software/hlahd.1.5.0/bin" //  path to HLA_HD hlahd.1.5.0
  HLA_HD_genome_version = "hg38"

  // URL to the installation package of MIXCR, will be installed automatically.
  MIXCR_url       = "https://github.com/milaboratory/mixcr/releases/download/v4.0.0/mixcr-4.0.0.zip"
  MIXCR_lic       = "" // path to MiXCR license file
  MIXCR           = "" // Optional: specify path to mixcr directory if already installed, will be installed automatically otherwise
  // analyze TCRs using mixcr
  TCR = true

  // MixMHC2pred
  MiXMHC2PRED_url = "https://github.com/GfellerLab/MixMHC2pred/releases/download/v1.2/MixMHC2pred-1.2.zip"
  MiXMHC2PRED     = "" // Optional: specify path to MixMHC2pred_unix directory if already installed, will be installed automatically otherwise

  // Immunogenicity score: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6774822/
  // https://github.com/vincentlaboratories/neoag
  IGS_script_url = "https://github.com/vincentlaboratories/neoag/raw/master/NeoAg_immunogenicity_predicition_GBM.R"
  IGS_model_url = "https://github.com/vincentlaboratories/neoag/raw/master/Final_gbm_model.rds"
  IGS = "" // optional path to IGS

  // IEDB tools urls for MHCI and MHCII. These will be used for IEDB installation into resources.databases.IEDB_dir
  IEDB_MHCI_url  = "https://downloads.iedb.org/tools/mhci/3.1.2/IEDB_MHC_I-3.1.2.tar.gz"
  IEDB_MHCII_url = "https://downloads.iedb.org/tools/mhcii/3.1.6/IEDB_MHC_II-3.1.6.tar.gz"


  // Java settings: please adjust to your memory available
  JAVA_Xmx = "-Xmx64G"

  // samtools memory: please adjust to your memory available
  STperThreadMem = "8G"

  // sambamba settings: please adjust to your memory available
  SB_hash_table_size = "1048576"
  SB_overflow_list_size = "1000000"
  SB_io_buffer_size = "1024"
  SB_sort_mem = "64G"


  // Filter variants (FilterMutect2Tumor): set minimum allele depth
  minAD = 5

  // Picard common options
  maxRecordsInRam = "4194304"
  maxRecordsInRamMerge = "10485760"

  // CNNScoreVariants batch size
  // in case CNNScoreVariants runs out of memory try to set both to 64
  transferBatchSize = 256
  inferenceBatchSize = 128

  // VarScan
  // Process VarscanSomatic
  min_cov                 = "10"
  min_cov_tumor           = "10"
  min_cov_normal          = "10"
  min_freq_for_hom        = "0.75"
  somatic_pvalue          = "0.99"
  somatic_somaticpvalue   = "0.05"
  strand_filter           = "1"
  // Process: ProcessSomatic
  processSomatic_pvalue   = "0.05"
  max_normal_freq         = "0.05"
  min_tumor_freq          = "0.1"

  // BAMREADCOUNT
  min_map_q               = "10"
  min_base_q              = "20"


  // VEP
  vep_version             = "106.1"
  vep_assembly            = "GRCh38"
  vep_cache_version       = "106"
  vep_species             = "homo_sapiens"
  vep_options             = "--everything" // "--af --af_1kg --af_gnomad --appris --biotype --check_existing --distance 5000 --failed 1 --merged --numbers --polyphen b --protein --pubmed --regulatory --sift b --symbol --xref_refseq --tsl --gene_phenotype"


  // Scatter Count (for parallel processing over genomic regions)
  scatter_count           = "40"


  // NeoFuse settings
  pepMin_length  = "8"
  pepMax_length  = "11"
  out_ID         = ""
  IC50_Threshold = "500"
  rank           = "2"
  conf_lvl       = "L"
  netMHCpan      = "false"

  // Process: NeoFuse_build
  build           = "false"
  buildRef        = "false"
  version         = "GRCh38"

  // pVACseq settings
  mhci_epitope_len         = "8,9,10,11"
  mhcii_epitope_len        = "15,16,17,18,19,20,21,22,23,24,25" // minimum length has to be at least 15 (see pVACtools /opt/iedb/mhc_ii/mhc_II_binding.py line 246)
  epitope_prediction_tools = "NetMHCpan MHCflurry NetMHCIIpan"
  use_NetChop              = false
  use_NetMHCstab           = true

  pVACseq_filter_set = "standard"
  pVACseq_custom_filters = ""

  pVACseq_filter_sets {
    standard = "--binding-threshold 500 --top-score-metric median --minimum-fold-change 0.0 --normal-cov 5 --tdna-cov 10 --trna-cov 10 --normal-vaf 0.02 --tdna-vaf 0.25 --trna-vaf 0.25 --expn-val 1 --maximum-transcript-support-level 1"
    relaxed = "--binding-threshold 500 --percentile-threshold 2 --top-score-metric lowest --expn-val 2 --maximum-transcript-support-level 5 --normal-vaf 0.01 --trna-vaf 0.02 --tdna-vaf 0.02"
    custom = "${params.pVACseq_custom_filters}"
  }

  // CSiN
  csin_rank      = "0.375 0.5 0.625 0.75 1.25 1.75 2"
  csin_ic50      = "500"
  csin_gene_exp  = "1"

}

// include config
includeConfig './process.config'
includeConfig './resources.config'
includeConfig './profiles.config'

timeline {
  enabled = true
  file = "${params.tracedir}/icbi/nextNEOpi_timeline.html"
}
report {
  enabled = true
  file = "${params.tracedir}/icbi/nextNEOpi_report.html"
}
trace {
  enabled = true
  file = "${params.tracedir}/icbi/nextNEOpi_trace.txt"
}
dag {
  enabled = true
  file = "${params.tracedir}/icbi/nextNEOpi_dag.svg"
}

manifest {
  name = 'icbi/nextNEOpi'
  author = 'Dietmar Rieder, Georgios Fotakis, Francesca Finotello'
  homePage = 'https://github.com/icbi-lab/nextNEOpi'
  description = 'Nextflow pipeline for neoantigen prediction'
  mainScript = 'nextNEOpi.nf'
  nextflowVersion = '>=20.10.0'
  version = '1.0'
}


//
// import plain java classes Paths
// and get realpaths for bind mounts
//
import java.nio.file.Paths;

new File(params.tmpDir).mkdirs()
params.singularityTmpMount = params.tmpDir.startsWith("/tmp/") ? "/tmp" : Paths.get(params.tmpDir).toRealPath()
params.singularityHLAHDmount = (params.HLAHD_DIR != "") ? " -B " + Paths.get(params.HLAHD_DIR).toRealPath() : ""
params.singularityAssetsMount = projectDir.toRealPath() + "/assets"

singularity {
    enabled = true
    autoMounts = true
    runOptions =  "--no-home" + " -H " + params.singularityTmpMount + " -B " +  params.singularityAssetsMount + " -B " + params.singularityTmpMount + " -B " + params.resourcesBaseDir + params.singularityHLAHDmount + " -B " + params.databases.IEDB_dir + ":/opt/iedb" + " -B " + params.databases.MHCFLURRY_dir + ":/opt/mhcflurry_data"

The directory where nextNEOpi is installed:

(nextflow-22.04.0)[kfang@ln01 test]$ ls /scratch/u/kfang/Software/nextNEOpi/
assets  bin  conf  example_batchFile_BAM.csv  example_batchFile_FASTQ.csv  img  LICENSE  nextNEOpi.nf  README.html  README.md  resources

I assumed that the error arises from a wrong directory, but I am not sure how to correct it. Please let me know if further information is needed.
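
One possible reading of the adapter parameters above (an observation, not a confirmed diagnosis): adapterSeq and adapterSeqR2 are set to the boolean true, and the process code quoted at the top of this report concatenates that value directly into the fastp call, so fastp would receive --adapter_sequence true. A minimal sketch of supplying the actual sequence instead, using the Illumina universal adapter mentioned in the config comments:

  // conf/params.config -- sketch only
  trim_adapters        = true
  adapterSeq           = "AGATCGGAAGAG"   // adapter string instead of `true`
  adapterSeqR2         = "AGATCGGAAGAG"
  trim_adapters_RNAseq = true
  adapterSeqRNAseq     = "AGATCGGAAGAG"
  adapterSeqR2RNAseq   = "AGATCGGAAGAG"

Alternatively, adapterSeqFile (or adapterSeqFileRNAseq) could point to a FASTA file with the adapter sequences, in which case the process builds an --adapter_fasta argument instead.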

Thanks in advance!
Kun

yara_mapper doesn't run properly

Hi,

I've been having some issues with yara_mapper. I've started having them just recently (not sure why, possibly singularity was updated on my HPC and it's requiring different options now?). I'd be extremely grateful if you could have a look and provide some workaround or ideas for further investigation.

The pre_map_hla step, which uses yara_mapper, comes out as passed; however, the dna_mapped_1.bam and dna_mapped_2.bam files are not OK. When they are then passed to OptiType, the following error is generated:

Traceback (most recent call last):
    File "/opt/conda/bin/OptiTypePipeline.py", line 309, in <module>
      pos, read_details = ht.pysam_to_hdf(bam_paths[0])
    File "/opt/conda/bin/hlatyper.py", line 186, in pysam_to_hdf
      sam = pysam.AlignmentFile(samfile, sam_or_bam)
    File "pysam/libcalignmentfile.pyx", line 742, in pysam.libcalignmentfile.AlignmentFile.__cinit__
    File "pysam/libcalignmentfile.pyx", line 991, in pysam.libcalignmentfile.AlignmentFile._open
  ValueError: file has no sequences defined (mode='rb') - is it SAM/BAM format? Consider opening with check_sq=False.

samtools quickcheck run on the bam files gives the following message: dna_mapped_1.bam had no targets in header.
The logs from yara_mapper actually contain an error (it looks like it is related to using the image?):
Couldn't create temporary file /scratch/temp/5427141/SQNseNL7c. (No such file or directory) /home/mi/dadi/workspace/development/seqan/include/seqan/file/string_mmap.h:635 FAILED! (Memory Mapped String couldn't open temporary file)
I launched the same command manually and it worked. I entered the image env the following way:

set +u; env - PATH="$PATH" ${TMP:+SINGULARITYENV_TMP="$TMP"} ${TMPDIR:+SINGULARITYENV_TMPDIR="$TMPDIR"} SINGULARITYENV_NXF_DEBUG=${NXF_DEBUG:=0}  \
singularity shell  \
-B /scratch/project_mnt/S0091/mantczak  \
-B /QRISdata/Q5952/data/nextNEOpi_1.3_resources/references/yara --no-home --containall -H /scratch/project_mnt/S0091/mantczak/.tmp  \
-B /scratch/project_mnt/S0091/mantczak/pipelines/nextNEOpi/assets  \
-B /scratch/project_mnt/S0091/mantczak/.tmp  \
-B /QRISdata/Q5952/data/nextNEOpi_1.3_resources  \
-B /scratch/project_mnt/S0091/mantczak/soft/hlahd.1.7.0  \
-B /QRISdata/Q5952/data/nextNEOpi_1.3_resources/databases/iedb:/opt/iedb  \
-B /QRISdata/Q5952/data/nextNEOpi_1.3_resources/databases/mhcflurry_data:/opt/mhcflurry_data /scratch/project_mnt/S0091/mantczak/.nextflow/NXF_SINGULARITY_CACHEDIR/apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif

Then I executed:

cd /scratch/project_mnt/S0091/mantczak/tests/nextneopi_validation/work/d8/9567afce930be0fd79060e6f7c9ad8
bash .command.sh
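
One detail that stands out when comparing the two runs (an observation, not a confirmed fix): the path yara_mapper failed to write to, /scratch/temp/5427141/..., looks like a per-job scratch directory and is not among the bind mounts used in the manual singularity shell above. If that path comes from the scheduler's $TMPDIR, it might be worth testing a tmpDir that is guaranteed to exist and to be bind-mounted on the compute node, e.g. (the path is a placeholder taken from the mounts above):

  nextflow run nextNEOpi.nf \
      -profile singularity,cluster \
      --tmpDir /scratch/project_mnt/S0091/mantczak/.tmp \
      ... # remaining options unchanged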

I'm attaching

  • the logs and commands from the pipeline executing yara_mapper and then optitype
  • dna_mapped_X.bam files generated by the pipeline (prefix notok)
  • dna_mapped_X.bam files generated manually (prefix ok)
    yara.zip

Best wishes,

Magda

Nextflow installation error

Hi,

I'm attempting to install & run the nextNEOpi pipeline on my institution's HPC.

I'm receiving the following error message:

"$ nextflow run icbi-lab/nextNEOpi --help

N E X T F L O W ~ version 22.10.8
Pulling icbi-lab/nextNEOpi ...
WARN: Cannot read project manifest -- Cause: Remote resource not found: https://api.github.com/repos/icbi-lab/nextNEOpi/contents/nextflow.config
Remote resource not found: https://api.github.com/repos/icbi-lab/nextNEOpi/contents/main.nf"

How do I address this?
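
For reference, the other reports on this page run the pipeline from a local clone rather than via the GitHub project handle; a sketch of that route:

  git clone https://github.com/icbi-lab/nextNEOpi.git
  cd nextNEOpi
  nextflow run nextNEOpi.nf -config conf/params.config --help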

Thanks in advance!

work folder

Hi, I'm running into a disk quota error on my HPC while running the pipeline. I checked on my HPC, and it turns out that while the pipeline is running, nextNEOpi writes data into a folder named work that gets enormously large. How do I prevent this from using up my disk quota? I've already set up $TMPDIR (there's a default tmp directory created by our HPC every time a batch script is submitted) as shown below, but the work folder still gets written on my HPC.

export JAVA_CMD=/home/uqachoo1/mambaforge/envs/env/bin/java
export NXF_OPTS='-Xms1g -Xmx4g'

nextflow run nextNEOpi.nf \
    --batchFile /home/uqachoo1/nextNEOpi-1.4.0/wesmRNA_batchFile_fastq.csv \
    -config conf/params.config \
    --outputDir /QRISdata/Q6373/results/test_results \
    --trim_adapters true \
    --trim_adapters_RNAseq true \
    --use_NetChop false \
    --tmpDir $TMPDIR \
    -profile singularity,cluster \
    --accept_license \
    --MIXCR_lic /home/uqachoo1/nextNEOpi-1.4.0/mi.license  \

Much thanks!
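
As a side note on the question above (a sketch, not a confirmed fix): the work folder is Nextflow's own working directory and is independent of --tmpDir, which is a pipeline parameter. Nextflow's work location can be redirected with the -w / -work-dir option, and the work files of completed runs can be removed afterwards with nextflow clean, e.g.:

  # redirect the Nextflow work directory (a purged per-job $TMPDIR may break -resume)
  nextflow run nextNEOpi.nf -w /path/with/enough/space/work ... # remaining options unchanged

  # remove work files of previous runs in the current launch directory
  nextflow clean -f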

make_uBAM process error

At first, I could run with the test data provided in this repo, but when using my own data (Normal ~12 GB, Tumor ~60 GB) I got this error in the make_uBAM process.

Sep-24 04:18:10.026 [Task monitor] ERROR nextflow.processor.TaskProcessor - Error executing process > 'make_uBAM (sample1 : tumor_DNA)'

Caused by:
  Process `make_uBAM (sample1 : tumor_DNA)` terminated with an error exit status (247)

Command executed:

  mkdir -p /tmp/ubuntu/nextNEOpi
  gatk --java-options "-Xmx64G -XX:ParallelGCThreads=1" FastqToSam \
      --TMP_DIR /tmp/ubuntu/nextNEOpi \
      --MAX_RECORDS_IN_RAM 2097152 \
      -F1 72379_QS2023-002_50GB_1.fq.gz -F2 72379_QS2023-002_50GB_2.fq.gz \
      --READ_GROUP_NAME sample1_tumor \
      --SAMPLE_NAME sample1_tumor \
      --LIBRARY_NAME sample1_tumor \
      --PLATFORM ILLUMINA \
      -O sample1_tumor_DNA_unaligned.bam

Command exit status:
  247

Command output:
  (empty)

Command error:
  INFO	2023-09-24 04:14:37	FastqToSam	Processed     2,000,000 records.  Elapsed time: 00:00:06s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:14:49	FastqToSam	Processed     3,000,000 records.  Elapsed time: 00:00:18s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2023-09-24 04:14:53	FastqToSam	Processed     4,000,000 records.  Elapsed time: 00:00:22s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:15:02	FastqToSam	Processed     5,000,000 records.  Elapsed time: 00:00:31s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:15:05	FastqToSam	Processed     6,000,000 records.  Elapsed time: 00:00:34s.  Time for last 1,000,000:    2s.  Last read position: */*
  INFO	2023-09-24 04:15:14	FastqToSam	Processed     7,000,000 records.  Elapsed time: 00:00:44s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:15:18	FastqToSam	Processed     8,000,000 records.  Elapsed time: 00:00:47s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:15:28	FastqToSam	Processed     9,000,000 records.  Elapsed time: 00:00:57s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:15:30	FastqToSam	Processed    10,000,000 records.  Elapsed time: 00:00:59s.  Time for last 1,000,000:    2s.  Last read position: */*
  INFO	2023-09-24 04:15:40	FastqToSam	Processed    11,000,000 records.  Elapsed time: 00:01:09s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:15:43	FastqToSam	Processed    12,000,000 records.  Elapsed time: 00:01:12s.  Time for last 1,000,000:    2s.  Last read position: */*
  INFO	2023-09-24 04:15:53	FastqToSam	Processed    13,000,000 records.  Elapsed time: 00:01:22s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:15:56	FastqToSam	Processed    14,000,000 records.  Elapsed time: 00:01:25s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:16:05	FastqToSam	Processed    15,000,000 records.  Elapsed time: 00:01:35s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:16:09	FastqToSam	Processed    16,000,000 records.  Elapsed time: 00:01:38s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:16:18	FastqToSam	Processed    17,000,000 records.  Elapsed time: 00:01:47s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:16:21	FastqToSam	Processed    18,000,000 records.  Elapsed time: 00:01:50s.  Time for last 1,000,000:    2s.  Last read position: */*
  INFO	2023-09-24 04:16:30	FastqToSam	Processed    19,000,000 records.  Elapsed time: 00:02:00s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:16:34	FastqToSam	Processed    20,000,000 records.  Elapsed time: 00:02:03s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:16:43	FastqToSam	Processed    21,000,000 records.  Elapsed time: 00:02:13s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:16:47	FastqToSam	Processed    22,000,000 records.  Elapsed time: 00:02:17s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:16:51	FastqToSam	Processed    23,000,000 records.  Elapsed time: 00:02:20s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:17:01	FastqToSam	Processed    24,000,000 records.  Elapsed time: 00:02:30s.  Time for last 1,000,000:   10s.  Last read position: */*
  INFO	2023-09-24 04:17:05	FastqToSam	Processed    25,000,000 records.  Elapsed time: 00:02:34s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:17:14	FastqToSam	Processed    26,000,000 records.  Elapsed time: 00:02:43s.  Time for last 1,000,000:    9s.  Last read position: */*
  INFO	2023-09-24 04:17:18	FastqToSam	Processed    27,000,000 records.  Elapsed time: 00:02:47s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:17:30	FastqToSam	Processed    28,000,000 records.  Elapsed time: 00:02:59s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2023-09-24 04:17:33	FastqToSam	Processed    29,000,000 records.  Elapsed time: 00:03:02s.  Time for last 1,000,000:    3s.  Last read position: */*
  INFO	2023-09-24 04:17:42	FastqToSam	Processed    30,000,000 records.  Elapsed time: 00:03:11s.  Time for last 1,000,000:    8s.  Last read position: */*
  Using GATK jar /opt/gatk/gatk-package-4.4.0.0-local.jar
  Running:
      java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx64G -XX:ParallelGCThreads=1 -jar /opt/gatk/gatk-package-4.4.0.0-local.jar FastqToSam --TMP_DIR /tmp/ubuntu/nextNEOpi --MAX_RECORDS_IN_RAM 2097152 -F1 72379_QS2023-002_50GB_1.fq.gz -F2 72379_QS2023-002_50GB_2.fq.gz --READ_GROUP_NAME sample1_tumor --SAMPLE_NAME sample1_tumor --LIBRARY_NAME sample1_tumor --PLATFORM ILLUMINA -O sample1_tumor_DNA_unaligned.bam

Work dir:
  /home/ubuntu/nextNEOpi.1.4.0/work/48/c4158935efd3a22b4a39030ccccc3d

Is there any way to fix this error?
Computer Spec:

  • 16 CPUs
  • 64 GB RAM

Config:

withName:make_uBAM {
        beforeScript = "ulimit -n 4096 -m unlimited"
}
// Picard common options
maxRecordsInRam = "2097152"
maxRecordsInRamMerge = "2097152"
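
Exit status 247 is often reported in connection with a process running out of memory (not a certain diagnosis). Given the 64 GB machine described above and the -Xmx64G java options visible in the executed command, one thing that could be tested is leaving more headroom, using the parameter names from conf/params.config (the values below are only illustrative):

  // conf/params.config -- sketch only
  JAVA_Xmx             = "-Xmx32G"    // leave headroom for the OS and other processes on a 64 GB host
  maxRecordsInRam      = "1048576"
  maxRecordsInRamMerge = "1048576"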

Error in SetNmMdAndUqTags: Input must be coordinate-sorted

I attempted to use nextNEOpi with the test data. I got an error in the MarkDuplicates process, and it kept looping for a long time (3-4 days):

~> TaskHandler[id: 14; name: MarkDuplicates (test_sample : normal_DNA); status: NEW; exit: -; error: -; workDir: /home/ubuntu/nextNEOpi.1.4.0/work/bd/678fd86a7fdaa95335401fad874cdd]
Aug-26 03:27:54.564 [Task monitor] DEBUG n.processor.TaskPollingMonitor - !! executor local > tasks to be completed: 1 -- submitted tasks are shown below
~> TaskHandler[id: 13; name: MarkDuplicates (test_sample : tumor_DNA); status: RUNNING; exit: -; error: -; workDir: /home/ubuntu/nextNEOpi.1.4.0/work/ee/3588ef274fc34327ab2ee6a459d07a]

And I got this error inside the workDir of MarkDuplicates:

INFO:    Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred

sambamba 0.8.1
 by Artem Tarasov and Pjotr Prins (C) 2012-2021
    LDC 1.20.0 / DMD v2.090.1 / LLVM7.0.0 / bootstrap LDC - the LLVM D compiler (0.17.6)

finding positions of the duplicate reads in the file...
15:17:17.421 INFO  NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/opt/gatk/gatk-package-4.4.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so
[Tue Aug 22 15:17:17 UTC 2023] SetNmMdAndUqTags --INPUT /dev/stdin --OUTPUT test_sample_tumor_DNA_aligned_sort_mkdp.bam --TMP_DIR /tmp/ubuntu/nextNEOpi --VALIDATION_STRINGENCY LENIENT --MAX_RECORDS_IN_RAM 4194304 --CREATE_INDEX true --REFERENCE_SEQUENCE GRCh38.d1.vd1.fa --IS_BISULFITE_SEQUENCE false --SET_ONLY_UQ false --VERBOSITY INFO --QUIET false --COMPRESSION_LEVEL 2 --CREATE_MD5_FILE false --help false --version false --showHidden false --USE_JDK_DEFLATER false --USE_JDK_INFLATER false
[Tue Aug 22 15:17:17 UTC 2023] Executing as mambauser@ip-172-31-34-222 on Linux 5.19.0-1025-aws amd64; OpenJDK 64-Bit Server VM 17.0.7+7-Debian-1deb11u1; Deflater: Intel; Inflater: Intel; Provider GCS is available; Picard version: Version:4.4.0.0
  sorted 1159001 end pairs
     and 15 single ends (among them 0 unmatched pairs)
  collecting indices of duplicate reads...   done in 1083 ms
  found 74860 duplicates
collected list of positions in 0 min 18 sec
marking duplicates...
collected list of positions in 0 min 32 sec
samtools sort: couldn't allocate memory for bam_mem
[Tue Aug 22 15:17:45 UTC 2023] picard.sam.SetNmMdAndUqTags done. Elapsed time: 0.48 minutes.
Runtime.totalMemory()=335544320
To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp
htsjdk.samtools.SAMException: Input must be coordinate-sorted for this program to run. Found: unsorted
	at picard.sam.SetNmMdAndUqTags.doWork(SetNmMdAndUqTags.java:125)
	at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:289)
	at org.broadinstitute.hellbender.cmdline.PicardCommandLineProgramExecutor.instanceMain(PicardCommandLineProgramExecutor.java:37)
	at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:160)
	at org.broadinstitute.hellbender.Main.mainEntry(Main.java:203)
	at org.broadinstitute.hellbender.Main.main(Main.java:289)
Using GATK jar /opt/gatk/gatk-package-4.4.0.0-local.jar
Running:
    java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx64G -jar /opt/gatk/gatk-package-4.4.0.0-local.jar SetNmMdAndUqTags --TMP_DIR /tmp/ubuntu/nextNEOpi -R GRCh38.d1.vd1.fa -I /dev/stdin -O test_sample_tumor_DNA_aligned_sort_mkdp.bam --CREATE_INDEX true --MAX_RECORDS_IN_RAM 4194304 --VALIDATION_STRINGENCY LENIENT

Is this a problem with the computer's memory, so that I have to upgrade it? Or is there another problem that I should fix?

Spec:
CPU: 4 cores
RAM: 16 GB
Storage: 1 TB

Environment:
OS: Linux 5.19.0-1025-aws amd64
Java Version: OpenJDK 64-Bit Server VM 17.0.7+7-Debian-1deb11u1
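
The line samtools sort: couldn't allocate memory for bam_mem in the log above, together with the 16 GB of RAM listed here, suggests that the configured sort buffers are larger than the available memory (an observation, not a confirmed diagnosis). The relevant knobs appear in conf/params.config quoted earlier on this page; a sketch of scaling them down for a 16 GB host (values are only illustrative):

  // conf/params.config -- sketch only
  JAVA_Xmx       = "-Xmx8G"
  STperThreadMem = "1G"    // samtools per-thread sort memory
  SB_sort_mem    = "4G"    // sambamba sort buffer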

Slow process in get_vhla step

Hi

I am trying to run the pipeline locally. However, the pipeline runs very slowly in the get_vhla step. This step has already taken more than 5 days. I wondered if this is a normal situation or if the pipeline fell into some weird loop... I attached the nf.log and .nextflow.log files. Thanks for your help~
nf.log
nextflow.log

Best,
Kun

Check if files exist, better error message than 'Cannot get property 'simpleName' on null object'

Dear nextNEOpi maintainers,

Here I submit a suggestion, as both I and a colleague independently ran into the same problem and both of us would enjoy a better error message.

Below I supply:

  • Short description of the problem
  • Suggested solution
  • Detailed description with reprex

I hope you'll agree that this would make nextNEOpi even better :-)

Thanks and cheers, Richel Bilderbeek

Short description of the problem

Line L4696 in nextNEOpi.nf ...

val yaraIdx from Channel.value(reference.YaraIndexDNA[0].simpleName)

... does not check if the array YaraIndexDNA has actual elements, which gives an uninformative error if the needed reference files do not exist:

Cannot get property 'simpleName' on null object

 -- Check script 'nextNEOpi.nf' at line: 4696 or see '.nextflow.log' file for more details

(from this GitHub Actions error log; you need to be logged in to GitHub before being able to see it)

Suggested solution

Make Line L4696 in nextNEOpi.nf check if all the files it expects are in place. Maybe that line is not the best place to do so; you are the experts here.
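
A minimal sketch of the kind of guard meant here, in the DSL1 style used by nextNEOpi.nf (the message text is only illustrative):

  // before the process that declares `val yaraIdx from Channel.value(reference.YaraIndexDNA[0].simpleName)`
  def yaraIdxDNA = reference.YaraIndexDNA
  if ( !yaraIdxDNA || !yaraIdxDNA[0] || !file(yaraIdxDNA[0]).exists() ) {
      exit 1, "ERROR: yara DNA index not found. Please check that the nextNEOpi resources archive was unpacked into resourcesBaseDir."
  }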

Detailed description with reprex

To reproduce this example, do (and all code is in this GitHub repository):

git clone https://github.com/richelbilderbeek/nextNEOpi_example_1
cd nextNEOpi_example_1
./scripts/run.sh

(which is the same as done in the GitHub Actions script)

As there is no small/testing data yet, I tried to create the reference folders myself, using the script create_fake_references.sh, which only copies a file called hg38.len to the right folder.

The error caused by this approach is, of course, that the actual reference data is not created. And here I would expect a proper error message.

One could argue that 'one should not do that', but my colleague hit the same error while trying to get nextNEOpi running. I am unsure what he did, but he was not creating a reprex like I did. Also for him, I would expect a proper error message.

Installation error with mamba

Hi @riederd,
Thanks for your great end-to-end tool.

I encountered an installation problem using the command below:
mamba env create -f assets/nextNEOpi.yml

And the error message is like:
(error message attached as a screenshot)
Would you please give me some advice for this? Really looking forward to hearing from you and thanks in advance.

All the best,
He

Error executing process > 'gatherMutect2VCFs (sample2)'

Hello again, now the pipeline is getting this error:

executor >  local (821)
[-        ] process > merge_fastq                    -
[44/a23509] process > RegionsBedToIntervalList (R... [100%] 1 of 1, cached: 1 ✔
[fe/1d4ae2] process > BaitsBedToIntervalList (Bai... [100%] 1 of 1, cached: 1 ✔
[5e/263dc4] process > preprocessIntervalList (pre... [100%] 1 of 1, cached: 1 ✔
[11/95224e] process > SplitIntervals (SplitInterv... [100%] 1 of 1 ✔
[aa/41e120] process > IntervalListToBed (BedFromI... [100%] 1 of 1, cached: 1 ✔
[15/e07cf2] process > ScatteredIntervalListToBed ... [100%] 40 of 40 ✔
[73/d08e65] process > FastQC (sample2 : normal_DNA)  [100%] 6 of 6, cached: 6 ✔
[72/921d8d] process > fastp (sample1 : normal_DNA)   [100%] 6 of 6, cached: 6 ✔
[76/a4856d] process > FastQC_trimmed (sample1 : n... [100%] 6 of 6, cached: 6 ✔
[db/6b4ce8] process > make_uBAM (sample1 : tumor_... [100%] 4 of 4 ✔
[7f/d6ef3b] process > Bwa (sample1 : normal_DNA)     [100%] 4 of 4 ✔
[a2/8dc9f3] process > merge_uBAM_BAM (sample1 : n... [100%] 4 of 4 ✔
[87/debcda] process > MarkDuplicates (sample1 : t... [100%] 4 of 4 ✔
[9e/8ca93a] process > alignmentMetrics (sample1 :... [100%] 4 of 4 ✔
[18/10cd02] process > scatterBaseRecalGATK4 (samp... [100%] 160 of 160 ✔
[15/d3c28d] process > gatherGATK4scsatteredBQSRta... [100%] 4 of 4 ✔
[61/824d8b] process > scatterGATK4applyBQSRS (sam... [100%] 160 of 160 ✔
[5b/97632f] process > GatherRecalBamFiles (sample... [100%] 4 of 4 ✔
[93/ecf78f] process > GetPileup (sample1 : tumor_... [100%] 4 of 4 ✔
[1e/4fdf9c] process > Mutect2 (sample1)              [100%] 80 of 80 ✔
[32/4f5f39] process > gatherMutect2VCFs (sample2)    [  0%] 0 of 2
[-        ] process > FilterMutect2                  -
[dd/bb6fd8] process > HaploTypeCaller (sample2)      [100%] 80 of 80 ✔
[4d/c359f2] process > CNNScoreVariants (sample2)     [100%] 80 of 80 ✔
[c6/485f54] process > MergeHaploTypeCallerGermlin... [100%] 2 of 2 ✔
[6e/c5352d] process > FilterGermlineVariantTranch... [100%] 2 of 2 ✔
[fa/4e56ca] process > IndelRealignerIntervals (sa... [100%] 160 of 160 ✔
[a4/70f418] process > GatherRealignedBamFiles (sa... [  0%] 0 of 4
[-        ] process > VarscanSomaticScattered        -
[-        ] process > gatherVarscanVCFs              -
[-        ] process > ProcessVarscan                 -
[-        ] process > FilterVarscan                  -
[-        ] process > MergeAndRenameSamplesInVars... -
[69/2cda8b] process > MantaSomaticIndels (sample1)   [  0%] 0 of 2
[-        ] process > StrelkaSomatic                 -
[-        ] process > finalizeStrelkaVCF             -
[-        ] process > mkHCsomaticVCF                 -
[-        ] process > VepTab                         -
[-        ] process > mkCombinedVCF                  -
[-        ] process > VEPvcf                         -
[-        ] process > ReadBackedphasing              -
[-        ] process > AlleleCounter                  -
[-        ] process > ConvertAlleleCounts            -
[-        ] process > Ascat                          -
[-        ] process > SequenzaUtils                  -
[-        ] process > gatherSequenzaInput            -
[-        ] process > Sequenza                       -
[d8/ddb69d] process > make_CNVkit_access_file (mk... [100%] 1 of 1, cached: 1 ✔
[-        ] process > CNVkit                         -
[-        ] process > Clonality                      -
[-        ] process > MutationalBurden               -
[-        ] process > MutationalBurdenCoding         -
[c9/0a8c38] process > mhc_extract (sample1)          [100%] 2 of 2 ✔
[61/02d086] process > pre_map_hla (sample1)          [100%] 2 of 2 ✔
[d2/d9df9d] process > OptiType (sample1)             [100%] 2 of 2 ✔
[00/6f1a3c] process > pre_map_hla_RNA (sample1)      [100%] 2 of 2, cached: 2 ✔
[90/95cd23] process > OptiType_RNA (sample1)         [100%] 2 of 2, cached: 2 ✔
[55/8407cb] process > run_hla_hd (sample1)           [100%] 2 of 2 ✔
[2e/3948a1] process > get_vhla (sample1)             [100%] 2 of 2 ✔
[-        ] process > Neofuse                        -
[-        ] process > publish_NeoFuse                -
[-        ] process > add_geneID                     -
[-        ] process > gene_annotator                 -
[-        ] process > pVACseq                        -
[-        ] process > concat_pVACseq_files           -
[-        ] process > aggregated_reports             -
[-        ] process > pVACtools_generate_protein_seq -
[-        ] process > pepare_mixMHC2_seq             -
[-        ] process > mixMHC2pred                    -
[-        ] process > addCCF                         -
[-        ] process > make_epitopes_fasta            -
[-        ] process > blast_epitopes                 -
[-        ] process > add_blast_hits                 -
[-        ] process > csin                           -
[-        ] process > immunogenicity_scoring         -
[94/155f86] process > mixcr (sample1 : tumor_RNA)    [100%] 6 of 6 ✔
[-        ] process > collectSampleInfo              -
[cb/7c36bf] process > multiQC (sample1)              [100%] 2 of 2 ✔
Error executing process > 'gatherMutect2VCFs (sample2)'

Caused by:
  Process `gatherMutect2VCFs (sample2)` terminated with an error exit status (2)

Command executed:

  mkdir -p /scratch/temp/5859185
  
  gatk --java-options "-Xmx32G -XX:ParallelGCThreads=1" MergeVcfs \
      --TMP_DIR /scratch/temp/5859185 \
      -I sample2_0001-scattered.interval_list.vcf.gz -I sample2_0033-scattered.interval_list.vcf.gz -I sample2_0016-scattered.interval_list.vcf.gz -I sample2_0000-scattered.interval_list.vcf.gz -I sample2_0031-scattered.interval_list.vcf.gz -I sample2_0002-scattered.interval_list.vcf.gz -I sample2_0038-scattered.interval_list.vcf.gz -I sample2_0023-scattered.interval_list.vcf.gz -I sample2_0011-scattered.interval_list.vcf.gz -I sample2_0037-scattered.interval_list.vcf.gz -I sample2_0030-scattered.interval_list.vcf.gz -I sample2_0012-scattered.interval_list.vcf.gz -I sample2_0032-scattered.interval_list.vcf.gz -I sample2_0014-scattered.interval_list.vcf.gz -I sample2_0007-scattered.interval_list.vcf.gz -I sample2_0015-scattered.interval_list.vcf.gz -I sample2_0028-scattered.interval_list.vcf.gz -I sample2_0005-scattered.interval_list.vcf.gz -I sample2_0008-scattered.interval_list.vcf.gz -I sample2_0027-scattered.interval_list.vcf.gz -I sample2_0004-scattered.interval_list.vcf.gz -I sample2_0021-scattered.interval_list.vcf.gz -I sample2_0003-scattered.interval_list.vcf.gz -I sample2_0035-scattered.interval_list.vcf.gz -I sample2_0018-scattered.interval_list.vcf.gz -I sample2_0020-scattered.interval_list.vcf.gz -I sample2_0017-scattered.interval_list.vcf.gz -I sample2_0019-scattered.interval_list.vcf.gz -I sample2_0039-scattered.interval_list.vcf.gz -I sample2_0013-scattered.interval_list.vcf.gz -I sample2_0029-scattered.interval_list.vcf.gz -I sample2_0024-scattered.interval_list.vcf.gz -I sample2_0036-scattered.interval_list.vcf.gz -I sample2_0006-scattered.interval_list.vcf.gz -I sample2_0025-scattered.interval_list.vcf.gz -I sample2_0026-scattered.interval_list.vcf.gz -I sample2_0010-scattered.interval_list.vcf.gz -I sample2_0009-scattered.interval_list.vcf.gz -I sample2_0022-scattered.interval_list.vcf.gz -I sample2_0034-scattered.interval_list.vcf.gz \
      -O sample2_mutect2_raw.vcf.gz
  
  gatk MergeMutectStats \
      --tmp-dir /scratch/temp/5859185 \
      --stats sample2_0001-scattered.interval_list.vcf.gz.stats --stats sample2_0033-scattered.interval_list.vcf.gz.stats --stats sample2_0016-scattered.interval_list.vcf.gz.stats --stats sample2_0000-scattered.interval_list.vcf.gz.stats --stats sample2_0031-scattered.interval_list.vcf.gz.stats --stats sample2_0002-scattered.interval_list.vcf.gz.stats --stats sample2_0038-scattered.interval_list.vcf.gz.stats --stats sample2_0023-scattered.interval_list.vcf.gz.stats --stats sample2_0011-scattered.interval_list.vcf.gz.stats --stats sample2_0037-scattered.interval_list.vcf.gz.stats --stats sample2_0030-scattered.interval_list.vcf.gz.stats --stats sample2_0012-scattered.interval_list.vcf.gz.stats --stats sample2_0032-scattered.interval_list.vcf.gz.stats --stats sample2_0014-scattered.interval_list.vcf.gz.stats --stats sample2_0007-scattered.interval_list.vcf.gz.stats --stats sample2_0015-scattered.interval_list.vcf.gz.stats --stats sample2_0028-scattered.interval_list.vcf.gz.stats --stats sample2_0005-scattered.interval_list.vcf.gz.stats --stats sample2_0008-scattered.interval_list.vcf.gz.stats --stats sample2_0027-scattered.interval_list.vcf.gz.stats --stats sample2_0004-scattered.interval_list.vcf.gz.stats --stats sample2_0021-scattered.interval_list.vcf.gz.stats --stats sample2_0003-scattered.interval_list.vcf.gz.stats --stats sample2_0035-scattered.interval_list.vcf.gz.stats --stats sample2_0018-scattered.interval_list.vcf.gz.stats --stats sample2_0020-scattered.interval_list.vcf.gz.stats --stats sample2_0017-scattered.interval_list.vcf.gz.stats --stats sample2_0019-scattered.interval_list.vcf.gz.stats --stats sample2_0039-scattered.interval_list.vcf.gz.stats --stats sample2_0013-scattered.interval_list.vcf.gz.stats --stats sample2_0029-scattered.interval_list.vcf.gz.stats --stats sample2_0024-scattered.interval_list.vcf.gz.stats --stats sample2_0036-scattered.interval_list.vcf.gz.stats --stats sample2_0006-scattered.interval_list.vcf.gz.stats --stats sample2_0025-scattered.interval_list.vcf.gz.stats --stats sample2_0026-scattered.interval_list.vcf.gz.stats --stats sample2_0010-scattered.interval_list.vcf.gz.stats --stats sample2_0009-scattered.interval_list.vcf.gz.stats --stats sample2_0022-scattered.interval_list.vcf.gz.stats --stats sample2_0034-scattered.interval_list.vcf.gz.stats \
      -O sample2_mutect2_raw.vcf.gz.stats
  
  gatk LearnReadOrientationModel \
      --tmp-dir /scratch/temp/5859185 \
      -I sample2_0001-scattered.interval_list-f1r2.tar.gz -I sample2_0033-scattered.interval_list-f1r2.tar.gz -I sample2_0016-scattered.interval_list-f1r2.tar.gz -I sample2_0000-scattered.interval_list-f1r2.tar.gz -I sample2_0031-scattered.interval_list-f1r2.tar.gz -I sample2_0002-scattered.interval_list-f1r2.tar.gz -I sample2_0038-scattered.interval_list-f1r2.tar.gz -I sample2_0023-scattered.interval_list-f1r2.tar.gz -I sample2_0011-scattered.interval_list-f1r2.tar.gz -I sample2_0037-scattered.interval_list-f1r2.tar.gz -I sample2_0030-scattered.interval_list-f1r2.tar.gz -I sample2_0012-scattered.interval_list-f1r2.tar.gz -I sample2_0032-scattered.interval_list-f1r2.tar.gz -I sample2_0014-scattered.interval_list-f1r2.tar.gz -I sample2_0007-scattered.interval_list-f1r2.tar.gz -I sample2_0015-scattered.interval_list-f1r2.tar.gz -I sample2_0028-scattered.interval_list-f1r2.tar.gz -I sample2_0005-scattered.interval_list-f1r2.tar.gz -I sample2_0008-scattered.interval_list-f1r2.tar.gz -I sample2_0027-scattered.interval_list-f1r2.tar.gz -I sample2_0004-scattered.interval_list-f1r2.tar.gz -I sample2_0021-scattered.interval_list-f1r2.tar.gz -I sample2_0003-scattered.interval_list-f1r2.tar.gz -I sample2_0035-scattered.interval_list-f1r2.tar.gz -I sample2_0018-scattered.interval_list-f1r2.tar.gz -I sample2_0020-scattered.interval_list-f1r2.tar.gz -I sample2_0017-scattered.interval_list-f1r2.tar.gz -I sample2_0019-scattered.interval_list-f1r2.tar.gz -I sample2_0039-scattered.interval_list-f1r2.tar.gz -I sample2_0013-scattered.interval_list-f1r2.tar.gz -I sample2_0029-scattered.interval_list-f1r2.tar.gz -I sample2_0024-scattered.interval_list-f1r2.tar.gz -I sample2_0036-scattered.interval_list-f1r2.tar.gz -I sample2_0006-scattered.interval_list-f1r2.tar.gz -I sample2_0025-scattered.interval_list-f1r2.tar.gz -I sample2_0026-scattered.interval_list-f1r2.tar.gz -I sample2_0010-scattered.interval_list-f1r2.tar.gz -I sample2_0009-scattered.interval_list-f1r2.tar.gz -I sample2_0022-scattered.interval_list-f1r2.tar.gz -I sample2_0034-scattered.interval_list-f1r2.tar.gz \
      -O sample2_read-orientation-model.tar.gz

Command exit status:
  2

Command output:
  Tool returned:
  0
  Tool returned:
  SUCCESS

Command error:
  14:38:58.653 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.654 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.677 INFO  IOUtils - Extracting data from archive: file://sample2_0002-scattered.interval_list-f1r2.tar.gz
  14:38:58.709 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.710 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.710 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.728 INFO  IOUtils - Extracting data from archive: file://sample2_0038-scattered.interval_list-f1r2.tar.gz
  14:38:58.744 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.745 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.745 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.785 INFO  IOUtils - Extracting data from archive: file://sample2_0023-scattered.interval_list-f1r2.tar.gz
  14:38:58.800 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.801 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.802 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.820 INFO  IOUtils - Extracting data from archive: file://sample2_0011-scattered.interval_list-f1r2.tar.gz
  14:38:58.836 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.836 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.837 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.870 INFO  IOUtils - Extracting data from archive: file://sample2_0037-scattered.interval_list-f1r2.tar.gz
  14:38:58.886 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.887 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.887 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.910 INFO  IOUtils - Extracting data from archive: file://sample2_0030-scattered.interval_list-f1r2.tar.gz
  14:38:58.918 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.919 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.919 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.936 INFO  IOUtils - Extracting data from archive: file://sample2_0012-scattered.interval_list-f1r2.tar.gz
  14:38:58.942 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.943 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.943 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:58.981 INFO  IOUtils - Extracting data from archive: file://sample2_0032-scattered.interval_list-f1r2.tar.gz
  14:38:58.990 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:58.991 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:58.991 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:59.014 INFO  IOUtils - Extracting data from archive: file://sample2_0014-scattered.interval_list-f1r2.tar.gz
  14:38:59.020 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_histogram
  14:38:59.021 INFO  IOUtils - Extracting file: ./sample2_tumor.ref_histogram
  14:38:59.022 INFO  IOUtils - Extracting file: ./sample2_tumor.alt_table
  14:38:59.033 INFO  LearnReadOrientationModel - Shutting down engine
  [September 28, 2023 at 2:38:59 PM UTC] org.broadinstitute.hellbender.tools.walkers.readorientation.LearnReadOrientationModel done. Elapsed time: 0.01 minutes.
  Runtime.totalMemory()=285212672
  ***********************************************************************
  
  A USER ERROR has occurred: Could not extract data from: file://sample2_0014-scattered.interval_list-f1r2.tar.gz
  
  ***********************************************************************
  Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
  Using GATK jar /opt/gatk/gatk-package-4.4.0.0-local.jar
  Running:
      java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /opt/gatk/gatk-package-4.4.0.0-local.jar LearnReadOrientationModel --tmp-dir /scratch/temp/5859185 -I sample2_0001-scattered.interval_list-f1r2.tar.gz -I sample2_0033-scattered.interval_list-f1r2.tar.gz -I sample2_0016-scattered.interval_list-f1r2.tar.gz -I sample2_0000-scattered.interval_list-f1r2.tar.gz -I sample2_0031-scattered.interval_list-f1r2.tar.gz -I sample2_0002-scattered.interval_list-f1r2.tar.gz -I sample2_0038-scattered.interval_list-f1r2.tar.gz -I sample2_0023-scattered.interval_list-f1r2.tar.gz -I sample2_0011-scattered.interval_list-f1r2.tar.gz -I sample2_0037-scattered.interval_list-f1r2.tar.gz -I sample2_0030-scattered.interval_list-f1r2.tar.gz -I sample2_0012-scattered.interval_list-f1r2.tar.gz -I sample2_0032-scattered.interval_list-f1r2.tar.gz -I sample2_0014-scattered.interval_list-f1r2.tar.gz -I sample2_0007-scattered.interval_list-f1r2.tar.gz -I sample2_0015-scattered.interval_list-f1r2.tar.gz -I sample2_0028-scattered.interval_list-f1r2.tar.gz -I sample2_0005-scattered.interval_list-f1r2.tar.gz -I sample2_0008-scattered.interval_list-f1r2.tar.gz -I sample2_0027-scattered.interval_list-f1r2.tar.gz -I sample2_0004-scattered.interval_list-f1r2.tar.gz -I sample2_0021-scattered.interval_list-f1r2.tar.gz -I sample2_0003-scattered.interval_list-f1r2.tar.gz -I sample2_0035-scattered.interval_list-f1r2.tar.gz -I sample2_0018-scattered.interval_list-f1r2.tar.gz -I sample2_0020-scattered.interval_list-f1r2.tar.gz -I sample2_0017-scattered.interval_list-f1r2.tar.gz -I sample2_0019-scattered.interval_list-f1r2.tar.gz -I sample2_0039-scattered.interval_list-f1r2.tar.gz -I sample2_0013-scattered.interval_list-f1r2.tar.gz -I sample2_0029-scattered.interval_list-f1r2.tar.gz -I sample2_0024-scattered.interval_list-f1r2.tar.gz -I sample2_0036-scattered.interval_list-f1r2.tar.gz -I sample2_0006-scattered.interval_list-f1r2.tar.gz -I sample2_0025-scattered.interval_list-f1r2.tar.gz -I sample2_0026-scattered.interval_list-f1r2.tar.gz -I sample2_0010-scattered.interval_list-f1r2.tar.gz -I sample2_0009-scattered.interval_list-f1r2.tar.gz -I sample2_0022-scattered.interval_list-f1r2.tar.gz -I sample2_0034-scattered.interval_list-f1r2.tar.gz -O sample2_read-orientation-model.tar.gz

Work dir:
  /QRISdata/Q6373/results/results_1/work/32/4f5f39ab6af08ba9cc90eed3361958

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
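
The failing step is LearnReadOrientationModel reporting that it could not extract data from one of the per-interval f1r2.tar.gz archives, which may indicate a truncated or corrupted output of the corresponding Mutect2 scatter task (an assumption, not a confirmed diagnosis). A quick way to check, and to have only that task recomputed, could be:

  cd /QRISdata/Q6373/results/results_1/work/32/4f5f39ab6af08ba9cc90eed3361958

  # a corrupt gzip/tar archive will error out here
  tar -tzf sample2_0014-scattered.interval_list-f1r2.tar.gz

  # the staged file is usually a symlink back into the Mutect2 task's work dir;
  # if the archive is corrupt, remove that task's work dir and relaunch with -resume
  ls -l sample2_0014-scattered.interval_list-f1r2.tar.gz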

Patch

Why is patch used in the pipeline? Would it not be easier for your team to provide a patched file with each release of the pipeline? It would remove a dependency that many users wouldn't normally have installed.

Thanks

Error executing process > 'installVEPcache (installVEPcache)'

Good evening,

thank you very much for sharing this tool, very useful.
However, I am stuck. I am trying to run it on an HPC (unix, LMOD) system, with these settings:
$PBS_O_WORKDIR/nextflow run $PBS_O_WORKDIR/nextNEOpi.nf \
    --batchFile $PBS_O_WORKDIR/batchFile_FASTQ.csv \
    -profile singularity,cluster \
    -config $PBS_O_WORKDIR/conf/params.config \
    --accept_license \
    --outputDir $PBS_O_WORKDIR/nextNEOpi_results \
    --tmpDir $PBS_O_WORKDIR/tmp \
    --HLAHD_DIR $PBS_O_WORKDIR/hlahd.1.5.0 \
    -resume

It starts, and after a few hours it terminates with this:

Error executing process > 'installVEPcache (installVEPcache)'

Caused by:
Process installVEPcache (installVEPcache) terminated with an error exit status (30)

Command executed:

mkdir -p /xxx/resources/databases/vep_cache
vep_install \
    -a cf \
    -s homo_sapiens \
    -y GRCh38 \
    -c /xxx/resources/databases/vep_cache \
    --CACHE_VERSION 105 \
    --CONVERT 2> vep_errors.txt &&
echo "OK" > .homo_sapiens_GRCh38_105_cache_ok.chck &&
cp -f .homo_sapiens_GRCh38_105_cache_ok.chck /mnt/panfs1/scratch/wsspaces/svalpione-Kim2018-0/WES/resources/databases/vep_cache/.homo_sapiens_GRCh38_105_cache_ok.chck

Command exit status:
30

Command output:
curl failed (000), trying to fetch using LWP::Simple
LWP::Simple failed (500), trying to fetch using HTTP::Tiny

Command wrapper:
curl failed (000), trying to fetch using LWP::Simple
LWP::Simple failed (500), trying to fetch using HTTP::Tiny

Work dir:
/xxx/work/cc/81435e84c23743ca08bca184d1ce9c

Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run

The server is connected to the Internet. I have tried to install VEP and populate vep_cache prior to launching the job, but it makes no difference.
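
For reference, pre-populating the cache that vep_install tries to fetch could look roughly like this (a sketch; it assumes Ensembl's indexed cache archive for release 105 and the vep_cache path from the command above):

  cd /xxx/resources/databases/vep_cache
  curl -O https://ftp.ensembl.org/pub/release-105/variation/indexed_vep_cache/homo_sapiens_vep_105_GRCh38.tar.gz
  tar xzf homo_sapiens_vep_105_GRCh38.tar.gz

Whether installVEPcache is then skipped may also depend on the .homo_sapiens_GRCh38_105_cache_ok.chck check file that the command above writes into the same directory.
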
Any suggestions, please?

Thank you a lot

S

nextNEOpi in DSL2/tool upgrades?

Hi,

Thanks again for all the support with running the pipeline - I hope this is one of the last of my questions. Do you have any plans to rewrite it in DSL2 and to upgrade the versions of the tools?

Best wishes,

Magda Antczak

Not able to pull the images from apps-01.i-med.ac.at

Hi,

I'm getting the following error when nextNEOpi tries to pull the image from apps-01.i-med.ac.at (the example is from Neofuse, but it doesn't happen only for Neofuse). Images from depot.galaxy were downloaded fine.

Error executing process > 'Neofuse (patient1)'

Caused by:
  Failed to pull singularity image
  command: singularity pull  --name apps-01.i-med.ac.at-images-singularity-NeoFuse_dev_0d1d4169.sif.pulling.1688026251334 https://apps-01.i-med.ac.at/images/singularity/NeoFuse_dev_0d1d4169.sif > /dev/null
  status : 255
  message:
    FATAL:   Error making http request: Head "https://apps-01.i-med.ac.at/images/singularity/NeoFuse_dev_0d1d4169.sif": x509: certificate signed by unknown authority
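
A workaround that is sometimes used for this kind of certificate problem (a sketch, at your own risk, since -k skips certificate validation) is to fetch the image manually into the Singularity cache directory under the file name Nextflow expects, which mirrors the URL shown in the message above:

  cd $NXF_SINGULARITY_CACHEDIR
  curl -k -o apps-01.i-med.ac.at-images-singularity-NeoFuse_dev_0d1d4169.sif \
      https://apps-01.i-med.ac.at/images/singularity/NeoFuse_dev_0d1d4169.sif

Updating the CA certificates on the host, or pulling on a machine where the certificate chain is trusted and copying the .sif over, avoids the insecure download.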

Provide test data

Dear nextNEOpi maintainers,

Thanks for nextNEOpi, it seems like a useful tool!

However, I would have enjoyed seeing some more simulated/public example data, so anyone can verify that everything works as it should. I see that there are some example files, but running them fails.

For example, when running ...

nextflow run nextNEOpi.nf --batchFile example_batchFile_FASTQ.csv \
  -profile singularity \
  -config conf/params.config \
  --exomeCaptureKit "Twist" \
  --trim_adapters true \
  --trim_adapters_RNAseq true \
  --mutect2ponFile 1000g_pon.hg38.vcf.gz \
  --use_sequenza_cnvs true \
  --TCR false \
  --accept_license

... then these files are needed as well:

  • reads_s1_t_1.fastq.gz
  • reads_s1_t_2.fastq.gz
  • reads_s1_n_1.fastq.gz
  • reads_s1_n_2.fastq.gz
  • reads_s1_r_1.fastq.gz
  • reads_s1_r_2.fastq.gz
  • data/sample2_hla.txt

It would be great to have simulated/public data added, among other things to enable Continuous Integration and/or to help users report bugs.

For me, this script really looks a bit too clumsy :-/

Thanks and cheers, Richel Bilderbeek

Filelock safeguard error when run mixcr

Hi,

After solving the patch error (issue #14) by installing patch on our computation node, the pipeline ran into a new error:

Caused by:
  Process `mixcr (Patient353_T1star : tumor_DNA)` terminated with an error exit status (24)

Command executed:

  mixcr analyze shotgun \
      --threads 40 \
      --species hs \
      --starting-material dna \
      --only-productive \
      Patient353_T1star_tumor_DNA_trimmed_R1.fastq.gz Patient353_T1star_tumor_DNA_trimmed_R2.fastq.gz \
      Patient353_T1star_tumor_DNA_mixcr

Command exit status:
  24

Command output:
  ERROR: File lock safeguard was triggered. Please report this error to [email protected].

Command wrapper:
  ERROR: File lock safeguard was triggered. Please report this error to [email protected].

Work dir:
  /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/c3/5a4b303cc544c8f790079d0754d082

I wondered if you have any idea how to solve this problem?

Best,
Kun

Error executing process > 'mixMHC2pred

Good afternoon,
thanks for sharing nextNEOpi, it's a very useful resource.
I am working on an HPC (LMOD), using Singularity, Nextflow version 22.04.4, nextNEOpi v1.3 & 1.3.2, HLA-HD v1.5.0, VEP v105 & 106.1, hg38.

It runs smoothly up to mixMHC2pred, where it gets stuck with the following error:

Error executing process > 'mixMHC2pred (PB-16-002)'

Caused by:
Process mixMHC2pred (PB-16-002) terminated with an error exit status (1)

Command executed:

/XXX/work/MixMHC2pred/MixMHC2pred_unix \
    -i PB-16-002_peptides.fasta \
    -o PB-16-002_mixMHC2pred.tsv \
    -a DRB1_15_01 DRB1_12_01 DRB3_01_01 DRB5_01_01

parse_mixMHC2pred.py \
    --vep_vcf PB-16-002_vep_somatic_gx.vcf.gz \
    --pep_fasta PB-16-002_peptides.fasta \
    --mixMHC2pred_result PB-16-002_mixMHC2pred.tsv \
    --out PB-16-002_mixMHC2pred_all.tsv \
    --sample_name PB-16-002_tumor \
    --normal_name PB-16-002_normal

awk \
    '{
    if ($0 ~ /#/) { print }
    else { if ($18 <= 2) { print } }
    }' PB-16-002_mixMHC2pred_all.tsv > PB-16-002_mixMHC2pred_filtered.tsv

Command exit status:
1

Command output:
Runing MixMHC2pred (v1.2) for peptide file: PB-16-002_peptides.fasta
Imported 27848 peptides. Computing now the scores from each peptide.
Finished the computations.

Command error:
Traceback (most recent call last):
File "/XXX/bin/parse_mixMHC2pred.py", line 181, in
out_file.write("\t".join(map(str, vcf_map[key].values())) + "\t" + line)
KeyError: 'ENSG00000285238_ENST00000644480.2_911-912GAGAA/G'

Work dir:
/XXX/work/89/b85145dc36f106bca16fa3f0c7007d

Tip: when you have fixed the problem you can continue the execution adding the option -resume to the run command line

I have been trying for weeks to understand where the problem is; my sysadmins think it could be a glitch in parse_mixMHC2pred.py, but I am not a Python person who could try to fix this myself. I would really benefit from some help, please.
Thanks in advance

SV

NeoFuse fails because output is not stored in MHC_I and MHC_II folders

NeoFuse fails on the step where it's supposed to change the names of the files, i.e.

mv patient1/NeoFuse/MHC_I/patient1_MHCI_filtered.tsv patient1/NeoFuse/MHC_I/patient1_NeoFuse_MHC_Class_I_filtered.tsv
mv patient1/NeoFuse/MHC_I/patient1_MHCI_unfiltered.tsv patient1/NeoFuse/MHC_I/patient1_NeoFuse_MHC_Class_I_unfiltered.tsv
mv patient1/NeoFuse/MHC_I/patient1_unsupported.txt patient1/NeoFuse/MHC_I/patient1_NeoFuse_MHC_Class_I_unsupported.txt
mv patient1/NeoFuse/MHC_II/patient1_MHCII_filtered.tsv patient1/NeoFuse/MHC_II/patient1_NeoFuse_MHC_Class_II_filtered.tsv
mv patient1/NeoFuse/MHC_II/patient1_MHCII_unfiltered.tsv patient1/NeoFuse/MHC_II/patient1_NeoFuse_MHC_Class_II_unfiltered.tsv
mv patient1/NeoFuse/MHC_II/patient1_unsupported.txt patient1/NeoFuse/MHC_II/patient1_NeoFuse_MHC_Class_II_unsupported.txt
mv patient1/NeoFuse/MHC_II/patient1_mixMHC2pred_conf.txt patient1/NeoFuse/MHC_II/patient1_NeoFuse_MHC_Class_II_mixMHC2pred_conf.txt

These files are not created. The NeoFuse folder contains only two files and no subfolders (which is consistent with what is described in the current NeoFuse manual: https://github.com/icbi-lab/NeoFuse#325-neofuse):

patient1_filtered.tsv
patient1_unfiltered.tsv

So the error I'm getting is:
mv: cannot stat 'patient1/NeoFuse/MHC_I/patient1_MHCI_filtered.tsv': No such file or directory
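
This looks like a mismatch between the NeoFuse output layout the pipeline expects (MHC_I/ and MHC_II/ subfolders) and the flat layout produced by the NeoFuse version that was actually pulled; the real fix is aligning the NeoFuse version with what nextNEOpi expects. As a stop-gap, a rename step that only touches files that exist could look like the sketch below (illustrative only, not the pipeline's actual code):

  # illustrative guard: only rename outputs that the installed NeoFuse version actually produced
  safe_mv() {
      if [ -f "$1" ]; then
          mv "$1" "$2"
      else
          echo "WARNING: $1 not found, skipping" >&2
      fi
  }
  safe_mv patient1/NeoFuse/MHC_I/patient1_MHCI_filtered.tsv \
          patient1/NeoFuse/MHC_I/patient1_NeoFuse_MHC_Class_I_filtered.tsv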

nextNEOpi fails

I'm trying to track down the source of an error in the nextNEOpi pipeline. It's not immediately obvious what the issue is (memory, a Singularity issue, something else?).

Any help would be greatly appreciated.

I'm encountering an error at:

Error executing process > 'make_uBAM (P016 : tumor_DNA)'

Caused by:
  Process `make_uBAM (P016 : tumor_DNA)` terminated with an error exit status (247)

Command executed:

  mkdir -p /tmp/16525322
  gatk --java-options "-Xmx256G -XX:ParallelGCThreads=1" FastqToSam \
      --TMP_DIR /tmp/16525322 \
      --MAX_RECORDS_IN_RAM 4194304 \
      -F1 nR217-L4-G2-P098-TATTGAAT-TGCGTCGG-READ1-Sequences.txt.gz -F2 nR217-L4-G2-P098-TATTGAAT-TGCGTCGG-READ2-Sequences.txt.gz \
      --READ_GROUP_NAME P016_tumor \
      --SAMPLE_NAME P016_tumor \
      --LIBRARY_NAME P016_tumor \
      --PLATFORM ILLUMINA \
      -O P016_tumor_DNA_unaligned.bam

Command exit status:
  247

Command output:
  (empty)

Command error:
  INFO	2022-10-22 00:47:35	FastqToSam	Processed     8,000,000 records.  Elapsed time: 00:01:50s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:48:08	FastqToSam	Processed     9,000,000 records.  Elapsed time: 00:02:23s.  Time for last 1,000,000:   32s.  Last read position: */*
  INFO	2022-10-22 00:48:21	FastqToSam	Processed    10,000,000 records.  Elapsed time: 00:02:36s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:48:32	FastqToSam	Processed    11,000,000 records.  Elapsed time: 00:02:47s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:48:44	FastqToSam	Processed    12,000,000 records.  Elapsed time: 00:02:59s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:49:08	FastqToSam	Processed    13,000,000 records.  Elapsed time: 00:03:22s.  Time for last 1,000,000:   23s.  Last read position: */*
  INFO	2022-10-22 00:49:18	FastqToSam	Processed    14,000,000 records.  Elapsed time: 00:03:32s.  Time for last 1,000,000:   10s.  Last read position: */*
  INFO	2022-10-22 00:49:31	FastqToSam	Processed    15,000,000 records.  Elapsed time: 00:03:46s.  Time for last 1,000,000:   13s.  Last read position: */*
  INFO	2022-10-22 00:49:43	FastqToSam	Processed    16,000,000 records.  Elapsed time: 00:03:58s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:50:05	FastqToSam	Processed    17,000,000 records.  Elapsed time: 00:04:20s.  Time for last 1,000,000:   22s.  Last read position: */*
  INFO	2022-10-22 00:50:18	FastqToSam	Processed    18,000,000 records.  Elapsed time: 00:04:33s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:50:31	FastqToSam	Processed    19,000,000 records.  Elapsed time: 00:04:46s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:51:04	FastqToSam	Processed    20,000,000 records.  Elapsed time: 00:05:19s.  Time for last 1,000,000:   33s.  Last read position: */*
  INFO	2022-10-22 00:51:27	FastqToSam	Processed    21,000,000 records.  Elapsed time: 00:05:42s.  Time for last 1,000,000:   22s.  Last read position: */*
  INFO	2022-10-22 00:51:39	FastqToSam	Processed    22,000,000 records.  Elapsed time: 00:05:53s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:51:54	FastqToSam	Processed    23,000,000 records.  Elapsed time: 00:06:09s.  Time for last 1,000,000:   15s.  Last read position: */*
  INFO	2022-10-22 00:52:06	FastqToSam	Processed    24,000,000 records.  Elapsed time: 00:06:21s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:52:18	FastqToSam	Processed    25,000,000 records.  Elapsed time: 00:06:33s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:52:43	FastqToSam	Processed    26,000,000 records.  Elapsed time: 00:06:58s.  Time for last 1,000,000:   24s.  Last read position: */*
  INFO	2022-10-22 00:52:55	FastqToSam	Processed    27,000,000 records.  Elapsed time: 00:07:10s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:53:11	FastqToSam	Processed    28,000,000 records.  Elapsed time: 00:07:26s.  Time for last 1,000,000:   15s.  Last read position: */*
  INFO	2022-10-22 00:53:24	FastqToSam	Processed    29,000,000 records.  Elapsed time: 00:07:38s.  Time for last 1,000,000:   12s.  Last read position: */*
  INFO	2022-10-22 00:53:57	FastqToSam	Processed    30,000,000 records.  Elapsed time: 00:08:12s.  Time for last 1,000,000:   33s.  Last read position: */*
  INFO	2022-10-22 00:54:16	FastqToSam	Processed    31,000,000 records.  Elapsed time: 00:08:31s.  Time for last 1,000,000:   18s.  Last read position: */*
  INFO	2022-10-22 00:54:31	FastqToSam	Processed    32,000,000 records.  Elapsed time: 00:08:46s.  Time for last 1,000,000:   15s.  Last read position: */*
  INFO	2022-10-22 00:54:47	FastqToSam	Processed    33,000,000 records.  Elapsed time: 00:09:02s.  Time for last 1,000,000:   16s.  Last read position: */*
  INFO	2022-10-22 00:55:15	FastqToSam	Processed    34,000,000 records.  Elapsed time: 00:09:29s.  Time for last 1,000,000:   27s.  Last read position: */*
  INFO	2022-10-22 00:55:26	FastqToSam	Processed    35,000,000 records.  Elapsed time: 00:09:40s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:55:37	FastqToSam	Processed    36,000,000 records.  Elapsed time: 00:09:51s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:55:48	FastqToSam	Processed    37,000,000 records.  Elapsed time: 00:10:03s.  Time for last 1,000,000:   11s.  Last read position: */*
  INFO	2022-10-22 00:56:13	FastqToSam	Processed    38,000,000 records.  Elapsed time: 00:10:28s.  Time for last 1,000,000:   25s.  Last read position: */*
  INFO	2022-10-22 00:56:24	FastqToSam	Processed    39,000,000 records.  Elapsed time: 00:10:39s.  Time for last 1,000,000:   10s.  Last read position: */*
  INFO	2022-10-22 00:56:35	FastqToSam	Processed    40,000,000 records.  Elapsed time: 00:10:50s.  Time for last 1,000,000:   10s.  Last read position: */*
  INFO	2022-10-22 00:56:46	FastqToSam	Processed    41,000,000 records.  Elapsed time: 00:11:00s.  Time for last 1,000,000:   10s.  Last read position: */*
  INFO	2022-10-22 00:57:06	FastqToSam	Processed    42,000,000 records.  Elapsed time: 00:11:21s.  Time for last 1,000,000:   20s.  Last read position: */*
  INFO	2022-10-22 00:57:20	FastqToSam	Processed    43,000,000 records.  Elapsed time: 00:11:35s.  Time for last 1,000,000:   13s.  Last read position: */*
  INFO	2022-10-22 00:57:25	FastqToSam	Processed    44,000,000 records.  Elapsed time: 00:11:39s.  Time for last 1,000,000:    4s.  Last read position: */*
  INFO	2022-10-22 00:57:29	FastqToSam	Processed    45,000,000 records.  Elapsed time: 00:11:43s.  Time for last 1,000,000:    4s.  Last read position: */*
  INFO	2022-10-22 00:57:33	FastqToSam	Processed    46,000,000 records.  Elapsed time: 00:11:48s.  Time for last 1,000,000:    4s.  Last read position: */*
  Using GATK jar /opt/conda/share/gatk4-4.2.6.1-1/gatk-package-4.2.6.1-local.jar
  Running:
      java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx256G -XX:ParallelGCThreads=1 -jar /opt/conda/share/gatk4-4.2.6.1-1/gatk-package-4.2.6.1-local.jar FastqToSam --TMP_DIR /tmp/16525322 --MAX_RECORDS_IN_RAM 4194304 -F1 nR217-L4-G2-P098-TATTGAAT-TGCGTCGG-READ1-Sequences.txt.gz -F2 nR217-L4-G2-P098-TATTGAAT-TGCGTCGG-READ2-Sequences.txt.gz --READ_GROUP_NAME P016_tumor --SAMPLE_NAME P016_tumor --LIBRARY_NAME P016_tumor --PLATFORM ILLUMINA -O P016_tumor_DNA_unaligned.bam

Work dir:
  /dfs5/bio/shachey/data/wes_2022/analysis/work/a9/828d4392552c7e4700f7e9ce5163b4

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`

My SLURM error log says:

slurmstepd: error: *** JOB 16525322 ON hpc3-20-22 CANCELLED AT 2022-10-21T18:55:47 ***
slurmstepd: error: Detected 2 oom-kill event(s) in StepId=16525322.batch. Some of your processes may have been killed by the cgroup out-of-memory handler.

Looking into .command.err in the work directory, I see:

FATAL:   could not open image /tmp/16525322/apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif: failed to retrieve path for /tmp/16525322/apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif: lstat /tmp/16525322: no such file or directory

My configuration and inputs are:

-------------------------------------------------------------------------
C O N F I G U R A T I O N

Command Line: 		 nextflow run /data/homezvol0/shachey/bin/nextNEOpi/nextNEOpi.nf --batchFile /dfs5/bio/shachey/data/wes_2022/analysis/lungCancer_batchFile_FastQ.csv -profile singularity,cluster -config /data/homezvol0/shachey/bin/nextNEOpi/conf/params.config --tmpDir /tmp/16525322 --outputDir /dfs5/bio/shachey/data/wes_2022/analysis/NextNEOpi_Results --accept-license
Working Directory: 	 /data/homezvol0/shachey/dfs5/data/wes_2022/analysis
Output Directory: 	 /dfs5/bio/shachey/data/wes_2022/analysis/NextNEOpi_Results

I N P U T

batch file: 		 /dfs5/bio/shachey/data/wes_2022/analysis/lungCancer_batchFile_FastQ.csv

Please check --help for further instruction
-------------------------------------------------------------------------
IEDB_dir: /dfs6/pub/share/nextNEOpi/resources/databases/iedb
MHCFLURRY_dir: /dfs6/pub/share/nextNEOpi/resources/databases/mhcflurry_data
tmpDir: /tmp/16525322
Pipeline Name                 : icbi/nextNEOpi
Pipeline Version              : 1.0
Batch file                    : /dfs5/bio/shachey/data/wes_2022/analysis/lungCancer_batchFile_FastQ.csv
Read length                   : 150
Exome capture kit             : sureSelectV6
Fasta Ref                     : /dfs6/pub/share/nextNEOpi/resources/references/hg38/gdc/GRCh38.d1.vd1/fasta/GRCh38.d1.vd1.fa
MillsGold                     : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/Mills_and_1000G_gold_standard.indels.hg38.vcf
hcSNPS1000G                   : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/1000G_phase1.snps.high_confidence.hg38.vcf.gz
HapMap                        : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/hapmap_3.3.hg38.vcf.gz
Cosmic                        : /dfs6/pub/share/nextNEOpi/resources/databases/cosmic/hg38/v84/CosmicCodingMuts.hg38.v84.vcf
DBSNP                         : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/Homo_sapiens_assembly38.dbsnp138.vcf
GnomAD                        : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/Mutect2/GetPileupSummaries/small_exac_common_3.hg38.vcf
GnomADfull                    : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/Mutect2/gnomAD/af-only-gnomad.hg38.vcf.gz
KnownIndels                   : /dfs6/pub/share/nextNEOpi/resources/databases/GATKresourceBundle/Homo_sapiens_assembly38.known_indels.vcf
BlastDB                       : /dfs6/pub/share/nextNEOpi/resources/references/blast/
priority variant Caller       : M2
Mutect 1 and 2 minAD          : 5
VarScan min_cov               : 10
VarScan min_cov_tumor         : 10
VarScan min_cov_normal        : 10
VarScan min_freq_for_hom      : 0.75
VarScan somatic_pvalue        : 0.99
VarScan somatic_somaticpvalue : 0.05
VarScan strand_filter         : 1
VarScan processSomatic_pvalue : 0.05
VarScan max_normal_freq       : 0.05
VarScan min_tumor_freq        : 0.1
VarScan min_map_q             : 10
VarScan min_base_q            : 20
VEP assembly                  : GRCh38
VEP species                   : homo_sapiens
VEP options                   : --everything
Number of scatters            : 40
Output dir                    : /dfs5/bio/shachey/data/wes_2022/analysis/NextNEOpi_Results
Working dir                   : /dfs5/bio/shachey/data/wes_2022/analysis/work
TMP dir                       : /tmp/16525322
Current home                  : /data/homezvol0/shachey
Current user                  : shachey
Current path                  : /data/homezvol0/shachey/dfs5/data/wes_2022/analysis
JAVA_Xmx                      : -Xmx256G
Picard maxRecordsInRam        : 4194304
Script dir                    : /data/homezvol0/shachey/bin/nextNEOpi
Config Profile                : singularity,cluster
-------------------------------------------------------------------------
WARNING: P016 sex not specified will infer from data
WARNING: P016 sex not specified will infer from data
Found /data/homezvol0/shachey/bin/hlahd.1.5.0/bin/hlahd.sh at: /data/homezvol0/shachey/bin/hlahd.1.5.0/bin/hlahd.sh
Found /data/homezvol0/shachey/bin/mixcr/mi.license at: /data/homezvol0/shachey/bin/mixcr/mi.license
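
Two things stand out in this report. First, the SLURM cgroup OOM kill together with JAVA_Xmx set to -Xmx256G and MAX_RECORDS_IN_RAM at 4194304: the JVM heap plus Picard's in-memory record buffer can easily exceed the memory granted to the job. Second, the FATAL in .command.err shows Singularity looking for the cached image under /tmp/16525322, a per-job tmpDir that no longer exists by the time a later task needs it. A possible adjustment is sketched below against conf/params.config; the parameter names are taken only from the run summary printed above and the path is an assumption, so please verify both against the shipped params.config:

  // conf/params.config -- sketch only
  params {
      JAVA_Xmx        = "-Xmx64G"            // keep the JVM heap well below the SLURM --mem limit
      maxRecordsInRam = 1000000              // smaller Picard in-memory buffer
      tmpDir          = "/some/persistent/tmp"  // assumed persistent path instead of a per-job /tmp/<jobid>
  }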

Target regions bed file for TESLA patients 4,8,9

Hi,
I was wondering if you would be able to advise which BED file you used to process patients 4, 8, and 9 from the TESLA dataset. The file provided with the dataset is not compatible with the GRCh38 reference that the pipeline uses by default (some regions are out of bounds). The file says Agilent SureSelect DNA - Human All Exon v.2 Kit, but it doesn't state which genome version it is meant for. It's no longer available on the Agilent website, and it's difficult to find a file like that on the Internet - I'm not sure which resources I could trust.
I would highly appreciate any help on this. I have also contacted Agilent and the Parker Institute, where the TESLA project was run, but I am still waiting for their responses.
Best wishes,
Magda
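
If the SureSelect v2 BED turns out to be hg19-based (which would explain regions falling out of bounds on GRCh38), one option is to lift it over to GRCh38 before passing it to the pipeline. A minimal sketch with UCSC liftOver is shown below; the assumption that the source assembly is hg19 and the placeholder input file name are not confirmed by the kit documentation:

  # sketch: lift a (presumed) hg19 capture-kit BED over to GRCh38/hg38
  wget https://hgdownload.soe.ucsc.edu/goldenPath/hg19/liftOver/hg19ToHg38.over.chain.gz
  liftOver sureselect_v2_hg19.bed hg19ToHg38.over.chain.gz \
      sureselect_v2_hg38.bed sureselect_v2_unmapped.bed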

Failure to run the pipeline via SLURM

Hi,

Sorry to bother you again. I found that

nextflow run /scratch/u/kfang/Software/nextNEOpi/nextNEOpi.nf -bg --batchFile test2.csv -config conf/params.config -profile singularity -resume --accept_license --MIXCR_lic /scratch/u/kfang/Software/nextNEOpi/resources/mi.license

works on the login node of our HPC cluster.

However, the pipeline failed when I tried to submit it via SLURM. The command I used:

sbatch nextflow.bash

The content of nextflow.bash:

#!/usr/bin/env bash
#
#!!! This is not officially supported by 10x
#
# =============================================================================
# Job Script
# =============================================================================
#
#SBATCH -J nextNEOpi-test2
#SBATCH --export=ALL
#SBATCH --nodes=1 --ntasks-per-node=40
#SBATCH --signal=2
#SBATCH --no-requeue
#SBATCH --time=100:00:00
#SBATCH --account=vjin
#SBATCH --partition=normal
### Alternatively: --ntasks=1 --cpus-per-task={NUM_THREADS}
###   Consult with your cluster administrators to find the combination that
###   works best for single-node, multi-threaded applications on your system.
#SBATCH --mem=300G
#SBATCH -o /scratch/u/kfang/slurm_sh/%x-%j.out
#SBATCH -e /scratch/u/kfang/slurm_sh/%x-%j.err
#SBATCH [email protected] ### OPTIONAL

module load nextflow
module load singularity

nextflow run /scratch/u/kfang/Software/nextNEOpi/nextNEOpi.nf --batchFile test2.csv -config conf/params.config -profile singularity -resume --accept_license --MIXCR_lic /scratch/u/kfang/Software/nextNEOpi/resources/mi.license

date 

The .nextflow.log

......
Oct-24 09:27:19.138 [main] INFO  nextflow.Nextflow - -------------------------------------------------------------------------
Oct-24 09:27:19.169 [main] INFO  nextflow.script.BaseScript - WARNING: Patient353_T1star sex not specified will infer from data
Oct-24 09:27:19.176 [main] INFO  nextflow.script.BaseScript - WARNING: Patient353_T1star sex not specified will infer from data
Oct-24 09:27:19.181 [main] INFO  nextflow.script.BaseScript - WARNING: Patient353_T1star sex not specified will infer from data
Oct-24 09:27:19.330 [main] INFO  nextflow.script.BaseScript - Found /scratch/u/kfang/Software/hlahd.1.5.0/bin/hlahd.sh at: /scratch/u/kfang/Software/hlahd.1.5.0/bin/hlahd.sh
Oct-24 09:27:19.332 [main] INFO  nextflow.script.BaseScript - Found /scratch/u/kfang/Software/nextNEOpi/resources/mi.license at: /scratch/u/kfang/Software/nextNEOpi/resources/mi.license
Oct-24 09:27:19.396 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.397 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.400 [main] DEBUG nextflow.executor.Executor - [warm up] executor > local
Oct-24 09:27:19.405 [main] DEBUG n.processor.LocalPollingMonitor - Creating local task monitor for executor 'local' > cpus=40; memory=300 GB; capacity=40; pollInterval=100ms; dumpInterval=5m
Oct-24 09:27:19.491 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name RegionsBedToIntervalList
Oct-24 09:27:19.493 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.493 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.502 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name BaitsBedToIntervalList
Oct-24 09:27:19.504 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.504 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.511 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name preprocessIntervalList
Oct-24 09:27:19.513 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.513 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.525 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name SplitIntervals
Oct-24 09:27:19.527 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.530 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.540 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name IntervalListToBed
Oct-24 09:27:19.544 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.545 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.580 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name ScatteredIntervalListToBed
Oct-24 09:27:19.582 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.582 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.594 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name FastQC
Oct-24 09:27:19.595 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:FastQC` matches process FastQC
Oct-24 09:27:19.596 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.596 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.609 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name fastp
Oct-24 09:27:19.610 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:fastp` matches process fastp
Oct-24 09:27:19.611 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.611 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.620 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name FastQC_trimmed
Oct-24 09:27:19.620 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:FastQC_trimmed` matches process FastQC_trimmed
Oct-24 09:27:19.622 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.622 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.639 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name make_uBAM
Oct-24 09:27:19.640 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:make_uBAM` matches process make_uBAM
Oct-24 09:27:19.640 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.640 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.649 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name Bwa
Oct-24 09:27:19.649 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:Bwa` matches process Bwa
Oct-24 09:27:19.650 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.650 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.664 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name merge_uBAM_BAM
Oct-24 09:27:19.664 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.664 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.669 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name MarkDuplicates
Oct-24 09:27:19.669 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:MarkDuplicates` matches process MarkDuplicates
Oct-24 09:27:19.669 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.669 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.678 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name alignmentMetrics
Oct-24 09:27:19.678 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:alignmentMetrics` matches process alignmentMetrics
Oct-24 09:27:19.678 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.678 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.685 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name scatterBaseRecalGATK4
Oct-24 09:27:19.686 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.686 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.694 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name gatherGATK4scsatteredBQSRtables
Oct-24 09:27:19.695 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.695 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.699 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name scatterGATK4applyBQSRS
Oct-24 09:27:19.699 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:scatterGATK4applyBQSRS` matches process scatterGATK4applyBQSRS
Oct-24 09:27:19.699 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.699 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.710 [Actor Thread 37] WARN  nextflow.container.SingularityCache - Singularity cache directory has not been defined -- Remote image will be stored in the path: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity -- Use env variable NXF_SINGULARITY_CACHEDIR to specify a different location
Oct-24 09:27:19.714 [Actor Thread 37] INFO  nextflow.container.SingularityCache - Pulling Singularity image https://apps-01.i-med.ac.at/images/singularity/nextNEOpi_1.3.2_18734d43.sif [cache /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif]
Oct-24 09:27:19.719 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name GatherRecalBamFiles
Oct-24 09:27:19.719 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:GatherRecalBamFiles` matches process GatherRecalBamFiles
Oct-24 09:27:19.719 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.719 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.723 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name GetPileup
Oct-24 09:27:19.723 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.723 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.728 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:GATK3` matches labels `GATK3` for process with name install_conda_GATK3
Oct-24 09:27:19.729 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.729 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.736 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name Mutect2
Oct-24 09:27:19.736 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:Mutect2` matches process Mutect2
Oct-24 09:27:19.737 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.737 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.741 [Actor Thread 30] INFO  nextflow.container.SingularityCache - Pulling Singularity image https://depot.galaxyproject.org/singularity/gatk:3.8--hdfd78af_11 [cache /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/depot.galaxyproject.org-singularity-gatk-3.8--hdfd78af_11.img]
Oct-24 09:27:19.742 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name gatherMutect2VCFs
Oct-24 09:27:19.742 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.742 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.748 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name FilterMutect2
Oct-24 09:27:19.749 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.749 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.753 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name HaploTypeCaller
Oct-24 09:27:19.754 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:HaploTypeCaller` matches process HaploTypeCaller
Oct-24 09:27:19.754 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.754 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.757 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name CNNScoreVariants
Oct-24 09:27:19.757 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:CNNScoreVariants` matches process CNNScoreVariants
Oct-24 09:27:19.758 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.758 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.761 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name MergeHaploTypeCallerGermlineVCF
Oct-24 09:27:19.761 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.761 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.764 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name FilterGermlineVariantTranches
Oct-24 09:27:19.765 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.765 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.771 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:GATK3` matches labels `GATK3` for process with name IndelRealignerIntervals
Oct-24 09:27:19.771 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:IndelRealignerIntervals` matches process IndelRealignerIntervals
Oct-24 09:27:19.771 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.771 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.772 [main] DEBUG nextflow.processor.TaskProcessor - Creating *combiner* operator for each param(s) at index(es): [4]
Oct-24 09:27:19.779 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name GatherRealignedBamFiles
Oct-24 09:27:19.779 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:GatherRealignedBamFiles` matches process GatherRealignedBamFiles
Oct-24 09:27:19.780 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.780 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.786 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name VarscanSomaticScattered
Oct-24 09:27:19.786 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:VarscanSomaticScattered` matches process VarscanSomaticScattered
Oct-24 09:27:19.787 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.787 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.791 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name gatherVarscanVCFs
Oct-24 09:27:19.792 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.792 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.797 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name ProcessVarscan
Oct-24 09:27:19.797 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.797 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.802 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name FilterVarscan
Oct-24 09:27:19.803 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.803 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.805 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name MergeAndRenameSamplesInVarscanVCF
Oct-24 09:27:19.806 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.806 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.807 [main] INFO  nextflow.Nextflow - INFO: Mutect1 not available, skipping....
Oct-24 09:27:19.812 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:Manta` matches labels `Manta` for process with name MantaSomaticIndels
Oct-24 09:27:19.812 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:MantaSomaticIndels` matches process MantaSomaticIndels
Oct-24 09:27:19.812 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.812 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.817 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:Strelka` matches labels `Strelka` for process with name StrelkaSomatic
Oct-24 09:27:19.817 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:StrelkaSomatic` matches process StrelkaSomatic
Oct-24 09:27:19.818 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.818 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.821 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name finalizeStrelkaVCF
Oct-24 09:27:19.821 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.821 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.828 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name mkHCsomaticVCF
Oct-24 09:27:19.829 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.829 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.846 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:VEP` matches labels `VEP` for process with name VepTab
Oct-24 09:27:19.847 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.847 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.850 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name mkCombinedVCF
Oct-24 09:27:19.850 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:mkCombinedVCF` matches process mkCombinedVCF
Oct-24 09:27:19.850 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.850 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.857 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:VEP` matches labels `VEP` for process with name VEPvcf
Oct-24 09:27:19.858 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.858 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.863 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:GATK3` matches labels `GATK3` for process with name ReadBackedphasing
Oct-24 09:27:19.863 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:ReadBackedphasing` matches process ReadBackedphasing
Oct-24 09:27:19.863 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.863 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.865 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:AlleleCounter` matches labels `AlleleCounter` for process with name AlleleCounter
Oct-24 09:27:19.866 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.866 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.871 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name ConvertAlleleCounts
Oct-24 09:27:19.871 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.871 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.874 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name Ascat
Oct-24 09:27:19.874 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.874 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.882 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name SequenzaUtils
Oct-24 09:27:19.882 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:SequenzaUtils` matches process SequenzaUtils
Oct-24 09:27:19.883 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.883 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.883 [main] DEBUG nextflow.processor.TaskProcessor - Creating *combiner* operator for each param(s) at index(es): [1]
Oct-24 09:27:19.886 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name gatherSequenzaInput
Oct-24 09:27:19.886 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:gatherSequenzaInput` matches process gatherSequenzaInput
Oct-24 09:27:19.887 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.887 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.889 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name Sequenza
Oct-24 09:27:19.890 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.890 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.896 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:CNVkit` matches labels `CNVkit` for process with name make_CNVkit_access_file
Oct-24 09:27:19.896 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.896 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.899 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:CNVkit` matches labels `CNVkit` for process with name CNVkit
Oct-24 09:27:19.899 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:CNVkit` matches process CNVkit
Oct-24 09:27:19.900 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.900 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.905 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name Clonality
Oct-24 09:27:19.906 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.906 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.910 [Actor Thread 26] INFO  nextflow.container.SingularityCache - Pulling Singularity image https://depot.galaxyproject.org/singularity/cnvkit:0.9.9--pyhdfd78af_0 [cache /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/depot.galaxyproject.org-singularity-cnvkit-0.9.9--pyhdfd78af_0.img]
Oct-24 09:27:19.911 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name MutationalBurden
Oct-24 09:27:19.911 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:MutationalBurden` matches process MutationalBurden
Oct-24 09:27:19.911 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.912 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.916 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name MutationalBurdenCoding
Oct-24 09:27:19.916 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:MutationalBurdenCoding` matches process MutationalBurdenCoding
Oct-24 09:27:19.916 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.916 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.919 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name mhc_extract
Oct-24 09:27:19.919 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:mhc_extract` matches process mhc_extract
Oct-24 09:27:19.919 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.919 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.922 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name pre_map_hla
Oct-24 09:27:19.923 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:pre_map_hla` matches process pre_map_hla
Oct-24 09:27:19.923 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.923 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.925 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name OptiType
Oct-24 09:27:19.926 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.926 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.929 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name pre_map_hla_RNA
Oct-24 09:27:19.929 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:pre_map_hla_RNA` matches process pre_map_hla_RNA
Oct-24 09:27:19.929 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.929 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.932 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name OptiType_RNA
Oct-24 09:27:19.932 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.932 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.936 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:HLAHD` matches labels `HLAHD` for process with name run_hla_hd
Oct-24 09:27:19.936 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:run_hla_hd` matches process run_hla_hd
Oct-24 09:27:19.936 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.936 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.948 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name get_vhla
Oct-24 09:27:19.949 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.949 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.959 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:Neofuse` matches process Neofuse
Oct-24 09:27:19.959 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.959 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.963 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.963 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.966 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name add_geneID
Oct-24 09:27:19.967 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.967 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.973 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:pVACtools` matches labels `pVACtools` for process with name gene_annotator
Oct-24 09:27:19.974 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.974 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.988 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:pVACtools` matches labels `pVACtools` for process with name pVACseq
Oct-24 09:27:19.988 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:pVACseq` matches process pVACseq
Oct-24 09:27:19.988 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.988 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:19.993 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:19.994 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.003 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:pVACtools` matches labels `pVACtools` for process with name aggregated_reports
Oct-24 09:27:20.003 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.003 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.013 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:pVACtools` matches labels `pVACtools` for process with name pVACtools_generate_protein_seq
Oct-24 09:27:20.014 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.014 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.024 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name pepare_mixMHC2_seq
Oct-24 09:27:20.025 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.025 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.028 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.028 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.034 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name mixMHC2pred
Oct-24 09:27:20.035 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.035 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.041 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name addCCF
Oct-24 09:27:20.043 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.043 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.057 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name make_epitopes_fasta
Oct-24 09:27:20.057 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.057 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.062 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:Blast` matches labels `Blast` for process with name blast_epitopes
Oct-24 09:27:20.062 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:blast_epitopes` matches process blast_epitopes
Oct-24 09:27:20.063 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.063 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.069 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name add_blast_hits
Oct-24 09:27:20.069 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.069 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.075 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name csin
Oct-24 09:27:20.075 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.075 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.076 [Actor Thread 27] DEBUG nextflow.util.CacheHelper - Unable to get file attributes file: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/.mixmhc2pred_install_ok.chck -- Cause: java.nio.file.NoSuchFileException: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/.mixmhc2pred_install_ok.chck
Oct-24 09:27:20.077 [Actor Thread 27] DEBUG nextflow.util.CacheHelper - Unable to get file attributes file: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/MixMHC2pred -- Cause: java.nio.file.NoSuchFileException: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/MixMHC2pred
Oct-24 09:27:20.082 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.082 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.088 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:IGS` matches labels `IGS` for process with name immunogenicity_scoring
Oct-24 09:27:20.089 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.089 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.098 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withName:mixcr` matches process mixcr
Oct-24 09:27:20.099 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.099 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.107 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name collectSampleInfo
Oct-24 09:27:20.107 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.107 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.114 [Actor Thread 22] DEBUG nextflow.util.CacheHelper - Unable to get file attributes file: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/.igs_install_ok.chck -- Cause: java.nio.file.NoSuchFileException: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/.igs_install_ok.chck
Oct-24 09:27:20.114 [Actor Thread 22] DEBUG nextflow.util.CacheHelper - Unable to get file attributes file: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS -- Cause: java.nio.file.NoSuchFileException: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS
Oct-24 09:27:20.116 [main] DEBUG nextflow.script.ProcessConfig - Config settings `withLabel:nextNEOpiENV` matches labels `nextNEOpiENV` for process with name multiQC
Oct-24 09:27:20.116 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Oct-24 09:27:20.116 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Oct-24 09:27:20.118 [main] DEBUG nextflow.Session - Workflow process names [dsl1]: link_IGS, mhc_extract, mkHCsomaticVCF, AlleleCounter, run_hla_hd, link_mixcr, Neofuse, GatherRecalBamFiles, add_blast_hits, check_PE, merge_uBAM_BAM, gatherSequenzaInput, concat_pVACseq_files, get_vhla, collectSampleInfo, StrelkaSomatic, add_geneID, gatherGATK4scsatteredBQSRtables, install_mixMHC2pred, pVACtools_generate_protein_seq, pepare_mixMHC2_seq, link_mixMHC2pred, ControlFREECviz, FilterMutect2, installVEPplugins, finalizeStrelkaVCF, SequenzaUtils, install_mixcr, fastp, mkCombinedVCF, CNVkit, IntervalListToBed, gene_annotator, make_uBAM, Mutect2, BaitsBedToIntervalList, ReadBackedphasing, IndelRealignerIntervals, Mpileup4ControFREEC, gatherVarscanVCFs, FilterVarscan, gatherMutect2VCFs, GetPileup, Clonality, scatterBaseRecalGATK4, install_IEDB, MantaSomaticIndels, OptiType, VarscanSomaticScattered, VEPvcf, Bwa, ProcessVarscan, Ascat, FastQC, addCCF, publish_NeoFuse, MutationalBurden, make_epitopes_fasta, scatterGATK4applyBQSRS, bam2fastq, install_IGS, OptiType_RNA, ControlFREEC, gatherMpileups, ScatteredIntervalListToBed, alignmentMetrics, aggregated_reports, CNNScoreVariants, FilterGermlineVariantTranches, csin, RegionsBedToIntervalList, preprocessIntervalList, installVEPcache, SplitIntervals, GatherRealignedBamFiles, MutationalBurdenCoding, pre_map_hla_RNA, VepTab, MarkDuplicates, multiQC, immunogenicity_scoring, merge_fastq, ConvertAlleleCounts, MergeHaploTypeCallerGermlineVCF, pre_map_hla, HaploTypeCaller, Sequenza, run_hla_hd_RNA, MergeAndRenameSamplesInVarscanVCF, make_CNVkit_access_file, blast_epitopes, install_conda_GATK3, pVACseq, mixcr, Mutect1scattered, FastQC_trimmed, gatherMutect1VCFs, mixMHC2pred
Oct-24 09:27:20.122 [main] DEBUG nextflow.script.ScriptRunner - > Await termination 
Oct-24 09:27:20.122 [main] DEBUG nextflow.Session - Session await
Oct-24 09:27:20.167 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-24 09:27:20.174 [Task submitter] INFO  nextflow.Session - [55/52f40d] Submitted process > install_mixMHC2pred (install mixMHC2pred)
Oct-24 09:27:20.194 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-24 09:27:20.195 [Task submitter] INFO  nextflow.Session - [d3/f339e4] Submitted process > install_IGS (install IGS)
Oct-24 09:27:21.355 [Actor Thread 30] DEBUG nextflow.container.SingularityCache - Singularity pull complete image=https://depot.galaxyproject.org/singularity/gatk:3.8--hdfd78af_11 path=/scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/depot.galaxyproject.org-singularity-gatk-3.8--hdfd78af_11.img
Oct-24 09:27:21.470 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-24 09:27:21.470 [Task submitter] INFO  nextflow.Session - [53/441b16] Submitted process > install_conda_GATK3 (install GATK3)
Oct-24 09:27:21.527 [Actor Thread 26] DEBUG nextflow.container.SingularityCache - Singularity pull complete image=https://depot.galaxyproject.org/singularity/cnvkit:0.9.9--pyhdfd78af_0 path=/scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/depot.galaxyproject.org-singularity-cnvkit-0.9.9--pyhdfd78af_0.img
Oct-24 09:27:21.558 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-24 09:27:21.559 [Task submitter] INFO  nextflow.Session - [a0/476d14] Submitted process > make_CNVkit_access_file (mkCNVkitaccess)
Oct-24 09:27:22.008 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 12; name: install_IGS (install IGS); status: COMPLETED; exit: 127; error: -; workDir: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/d3/f339e46faa15594316aff49b0ce08b]
Oct-24 09:27:22.038 [Task monitor] ERROR nextflow.processor.TaskProcessor - Error executing process > 'install_IGS (install IGS)'

Caused by:
  Process `install_IGS (install IGS)` terminated with an error exit status (127)

Command executed:

  mkdir -p /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS && \
  curl -sLk https://github.com/vincentlaboratories/neoag/raw/master/NeoAg_immunogenicity_predicition_GBM.R -o /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS/NeoAg_immunogenicity_predicition_GBM.R && \
  curl -sLk https://github.com/vincentlaboratories/neoag/raw/master/Final_gbm_model.rds -o /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS/Final_gbm_model.rds && \
  patch -p0 /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS/NeoAg_immunogenicity_predicition_GBM.R /scratch/u/kfang/Software/nextNEOpi/assets/NeoAg_immunogenicity_predicition_GBM.patch && \
  chmod +x /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/IGS/NeoAg_immunogenicity_predicition_GBM.R  && \
  echo "OK" > .igs_install_ok.chck && \
  cp -f .igs_install_ok.chck /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/.igs_install_ok.chck

Command exit status:
  127

Command output:
  (empty)

Command error:
  .command.sh: line 5: patch: command not found

Work dir:
  /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/d3/f339e46faa15594316aff49b0ce08b

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
Oct-24 09:27:22.069 [Task monitor] INFO  nextflow.Session - Execution cancelled -- Finishing pending tasks before exit
Oct-24 09:27:22.092 [main] DEBUG nextflow.Session - Session await > all process finished
Oct-24 09:27:22.122 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 9; name: install_conda_GATK3 (install GATK3); status: COMPLETED; exit: 0; error: -; workDir: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/53/441b16ba5977950d5448d7a6e1f911]
Oct-24 09:27:22.163 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 11; name: install_mixMHC2pred (install mixMHC2pred); status: COMPLETED; exit: 0; error: -; workDir: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/55/52f40d58eae2a4469f0bb28f90f70a]
Oct-24 09:27:24.356 [Actor Thread 37] DEBUG nextflow.container.SingularityCache - Singularity pull complete image=https://apps-01.i-med.ac.at/images/singularity/nextNEOpi_1.3.2_18734d43.sif path=/scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/singularity/apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif
Oct-24 09:27:45.317 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 10; name: make_CNVkit_access_file (mkCNVkitaccess); status: COMPLETED; exit: 0; error: -; workDir: /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/work/a0/476d141585af12e09203e1cb8cd556]
Oct-24 09:27:45.344 [main] DEBUG nextflow.Session - Session await > all barriers passed
Oct-24 09:27:45.438 [main] INFO  nextflow.Nextflow - [icbi/nextNEOpi] Pipeline Complete! You can find your results in /scratch/u/kfang/ChenHZ_lab/Neoantigen/test2/RESULTS
Oct-24 09:27:45.442 [main] DEBUG nextflow.trace.WorkflowStatsObserver - Workflow completed > WorkflowStats[succeededCount=3; failedCount=1; ignoredCount=0; cachedCount=0; pendingCount=8; submittedCount=0; runningCount=0; retriesCount=0; abortedCount=0; succeedDuration=24.6s; failedDuration=1.8s; cachedDuration=0ms;loadCpus=0; loadMemory=0; peakRunning=4; peakCpus=4; peakMemory=0; ]
Oct-24 09:27:45.442 [main] DEBUG nextflow.trace.TraceFileObserver - Flow completing -- flushing trace file
Oct-24 09:27:45.446 [main] DEBUG nextflow.trace.ReportObserver - Flow completing -- rendering html report
Oct-24 09:27:45.579 [main] DEBUG nextflow.trace.ReportObserver - Execution report summary data:
  [{"cpuUsage":null,"process":"install_IGS","mem":null,"memUsage":null,"timeUsage":null,"vmem":null,"reads":null,"cpu":null,"time":{"mean":1769,"min":1769,"q1":1769,"q2":1769,"q3":1769,"max":1769,"minLabel":"install_IGS (install IGS)","maxLabel":"install_IGS (install IGS)","q1Label":"install_IGS (install IGS)","q2Label":"install_IGS (install IGS)","q3Label":"install_IGS (install IGS)"},"writes":null},{"cpuUsage":{"mean":143.7,"min":143.7,"q1":143.7,"q2":143.7,"q3":143.7,"max":143.7,"minLabel":"install_conda_GATK3 (install GATK3)","maxLabel":"install_conda_GATK3 (install GATK3)","q1Label":"install_conda_GATK3 (install GATK3)","q2Label":"install_conda_GATK3 (install GATK3)","q3Label":"install_conda_GATK3 (install GATK3)"},"process":"install_conda_GATK3","mem":null,"memUsage":null,"timeUsage":null,"vmem":null,"reads":{"mean":66182,"min":66182,"q1":66182,"q2":66182,"q3":66182,"max":66182,"minLabel":"install_conda_GATK3 (install GATK3)","maxLabel":"install_conda_GATK3 (install GATK3)","q1Label":"install_conda_GATK3 (install GATK3)","q2Label":"install_conda_GATK3 (install GATK3)","q3Label":"install_conda_GATK3 (install GATK3)"},"cpu":{"mean":143.7,"min":143.7,"q1":143.7,"q2":143.7,"q3":143.7,"max":143.7,"minLabel":"install_conda_GATK3 (install GATK3)","maxLabel":"install_conda_GATK3 (install GATK3)","q1Label":"install_conda_GATK3 (install GATK3)","q2Label":"install_conda_GATK3 (install GATK3)","q3Label":"install_conda_GATK3 (install GATK3)"},"time":null,"writes":{"mean":227,"min":227,"q1":227,"q2":227,"q3":227,"max":227,"minLabel":"install_conda_GATK3 (install GATK3)","maxLabel":"install_conda_GATK3 (install GATK3)","q1Label":"install_conda_GATK3 (install GATK3)","q2Label":"install_conda_GATK3 (install GATK3)","q3Label":"install_conda_GATK3 (install GATK3)"}},{"cpuUsage":{"mean":19.3,"min":19.3,"q1":19.3,"q2":19.3,"q3":19.3,"max":19.3,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"process":"install_mixMHC2pred","mem":{"mean":1634304,"min":1634304,"q1":1634304,"q2":1634304,"q3":1634304,"max":1634304,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"memUsage":null,"timeUsage":null,"vmem":{"mean":116150272,"min":116150272,"q1":116150272,"q2":116150272,"q3":116150272,"max":116150272,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"reads":{"mean":10027852,"min":10027852,"q1":10027852,"q2":10027852,"q3":10027852,"max":10027852,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"cpu":{"mean":19.3,"min":19.3,"q1":19.3,"q2":19.3,"q3":19.3,"max":19.3,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install 
mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"time":{"mean":1620,"min":1620,"q1":1620,"q2":1620,"q3":1620,"max":1620,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"},"writes":{"mean":15253171,"min":15253171,"q1":15253171,"q2":15253171,"q3":15253171,"max":15253171,"minLabel":"install_mixMHC2pred (install mixMHC2pred)","maxLabel":"install_mixMHC2pred (install mixMHC2pred)","q1Label":"install_mixMHC2pred (install mixMHC2pred)","q2Label":"install_mixMHC2pred (install mixMHC2pred)","q3Label":"install_mixMHC2pred (install mixMHC2pred)"}},{"cpuUsage":{"mean":111.4,"min":111.4,"q1":111.4,"q2":111.4,"q3":111.4,"max":111.4,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"process":"make_CNVkit_access_file","mem":{"mean":119500800,"min":119500800,"q1":119500800,"q2":119500800,"q3":119500800,"max":119500800,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"memUsage":null,"timeUsage":null,"vmem":{"mean":5785579520,"min":5785579520,"q1":5785579520,"q2":5785579520,"q3":5785579520,"max":5785579520,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"reads":{"mean":3203800228,"min":3203800228,"q1":3203800228,"q2":3203800228,"q3":3203800228,"max":3203800228,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"cpu":{"mean":111.4,"min":111.4,"q1":111.4,"q2":111.4,"q3":111.4,"max":111.4,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"time":{"mean":23000,"min":23000,"q1":23000,"q2":23000,"q3":23000,"max":23000,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"},"writes":{"mean":450430,"min":450430,"q1":450430,"q2":450430,"q3":450430,"max":450430,"minLabel":"make_CNVkit_access_file (mkCNVkitaccess)","maxLabel":"make_CNVkit_access_file (mkCNVkitaccess)","q1Label":"make_CNVkit_access_file (mkCNVkitaccess)","q2Label":"make_CNVkit_access_file (mkCNVkitaccess)","q3Label":"make_CNVkit_access_file (mkCNVkitaccess)"}}]
Oct-24 09:27:46.062 [main] DEBUG nextflow.trace.TimelineObserver - Flow completing -- rendering html timeline
Oct-24 09:27:46.209 [main] WARN  nextflow.dag.GraphvizRenderer - To render the execution DAG in the required format it is required to install Graphviz -- See http://www.graphviz.org for more info.
Oct-24 09:27:46.357 [main] DEBUG nextflow.cache.CacheDB - Closing CacheDB done
Oct-24 09:27:46.413 [main] DEBUG nextflow.script.ScriptRunner - > Execution complete -- Goodbye
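Note that the failing install_IGS task in the log above exits with `patch: command not found` (exit status 127), so the `patch` utility has to be available to that task. A minimal check/fix sketch; the install commands are common-distro assumptions, not instructions from the pipeline itself:

command -v patch || echo "patch is not installed"
# possible ways to provide it (pick whatever matches your system; package names assumed):
#   conda install -c conda-forge patch
#   sudo apt-get install patch        # Debian/Ubuntu
#   sudo yum install patch            # RHEL/CentOS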

I notice that there is an option to use -profile cluster in the pipeline, but I am not sure how to use it with our SLURM system. I would appreciate any suggestions for running the pipeline on SLURM. Thanks for your time and help!

Best,
Kun
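A minimal sketch of a site-specific SLURM setup, assuming an extra config file passed with -c and Nextflow's slurm executor; the profile name, partition, and account strings below are placeholders, and the pipeline's shipped cluster profile may already cover parts of this:

// slurm.config -- hypothetical site config; partition and account are placeholders
profiles {
    slurm {
        process {
            executor       = 'slurm'                  // submit each task as a SLURM job
            queue          = 'your_partition'         // placeholder partition name
            clusterOptions = '--account=your_account' // placeholder sbatch options
        }
        executor {
            queueSize = 50                            // cap concurrently submitted jobs
        }
    }
}

Such a profile could then be combined with the container profile, e.g. nextflow run nextNEOpi.nf ... -profile singularity,slurm -c slurm.config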

Path to MixCR is incorrect

Hi there,
I just noticed that the path to the current version of MixCR is incorrect and will throw a bad zip archive error.

This is the actual path:

https://github.com/milaboratory/mixcr/releases/download/v4.0.0/mixcr-4.0.0.zip

whereas conf/params.config currently contains:

https://github.com/milaboratory/mixcr/releases/download/v4.0.0/mixcr.zip
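If that is indeed the cause, a one-line correction in the user's copy of the config might look like the sketch below; the parameter name MIXCR_url is an assumption, so use whichever key currently holds the mixcr.zip URL in conf/params.config:

// conf/params.config -- corrected download URL (parameter name assumed)
MIXCR_url = "https://github.com/milaboratory/mixcr/releases/download/v4.0.0/mixcr-4.0.0.zip"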

Lacking VEP cache file Homo_sapiens.GRCh38.dna.toplevel.fa.gz

The VEP cache has been (successfully?) pulled during the installation process and spawned a bunch of files. However, on the next start the pipeline throws:
ERROR: Resource file does not exist: /media/alex/_data_ext/nextNEOpi/resources/databases/vep_cache/homo_sapiens/110_GRCh38/Homo_sapiens.GRCh38.dna.toplevel.fa.gz Please check the references resource file settings in conf/resources.config

vep_cache/Plugins/ and vep_cache/homo_sapiens/110_GRCh38 seem to be populated too

Thanks!
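A possible workaround, assuming the missing file is the standard Ensembl GRCh38 toplevel FASTA for release 110; the URL below follows the Ensembl release-110 FTP layout and should be verified, and bgzip/samtools (htslib) are only needed for the optional re-compression and indexing:

cd /media/alex/_data_ext/nextNEOpi/resources/databases/vep_cache/homo_sapiens/110_GRCh38/
wget https://ftp.ensembl.org/pub/release-110/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.toplevel.fa.gz
# optional: VEP reads a bgzip-compressed, faidx-indexed FASTA faster than plain gzip
gunzip Homo_sapiens.GRCh38.dna.toplevel.fa.gz
bgzip Homo_sapiens.GRCh38.dna.toplevel.fa
samtools faidx Homo_sapiens.GRCh38.dna.toplevel.fa.gz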

Can run on sensitive data cluster without internet access?

Dear nextNEOpi maintainer(s),

Thanks for nextNEOpi, the help (e.g. #36 ) and the documentation!

Regarding the documentation, the usage section mentions an HPC cluster profile. However, as far as I can see, such a profile will download the Singularity containers and/or conda packages. This may be a problem for HPC clusters that have no internet access (due to sensitive data and data protection laws) except for a file transfer folder.

Can nextNEOpi run on an HPC cluster that has no internet access? If so, which approach would you recommend to get it to do so?

Thanks and cheers, Richel

(note to self: nf-core pipelines do have this feature, see https://nf-co.re/docs/usage/offline#pipeline-code)

(another note to self, here a reply from Maxime Garcia:

I'm not familiar with said pipeline, and it's a DSL1 pipeline, so you'd have to use an older Nextflow version.
We do have some docs on how to run nf-core pipelines offline, and I did run sarek quite often on bianca myself, so it should be possible. But as this is not an nf-core pipeline, and still in DSL1, you'll need to get the containers yourself beforehand and transfer them to bianca via wharf.

)
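One way to stage the containers without internet access on the cluster, sketched below under the assumption that the images are downloaded on a connected machine and copied over via the file transfer folder; the cache path is a placeholder, the file names should match what Nextflow's Singularity cache reports in .nextflow.log, and the image URL shown is the one that appears in a log earlier on this page:

# on a machine WITH internet access
export NXF_SINGULARITY_CACHEDIR=/path/to/transfer_folder/singularity   # placeholder path
cd "$NXF_SINGULARITY_CACHEDIR"
wget -O apps-01.i-med.ac.at-images-singularity-nextNEOpi_1.3.2_18734d43.sif \
     https://apps-01.i-med.ac.at/images/singularity/nextNEOpi_1.3.2_18734d43.sif
# ...repeat for every other image URL the pipeline pulls (see .nextflow.log)...

# on the offline cluster, after copying the directory via the transfer folder
export NXF_SINGULARITY_CACHEDIR=/cluster/path/singularity

The other online resources the pipeline fetches at runtime (VEP cache, IEDB tools, mhcflurry data, mixMHC2pred, etc.) would presumably need to be staged the same way.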

how to install all required tools?

Thank you for providing such convenient software.
I saw in the readme it was written:

The pipeline will install almost all required tools via Singularity images or conda environments.

But I couldn't find instructions or a requirements.txt for installing the dependent tools. Do I need to install all of the dependencies myself with conda?

Thanks for your help!
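For reference, a typical invocation that lets the pipeline fetch everything itself is sketched below; batch_fastq.csv is a placeholder for your own sample sheet, and no manual tool installation should be needed when a container or conda profile is selected:

# the singularity (or conda) profile makes Nextflow pull the required images/environments on first run
nextflow run nextNEOpi.nf \
    --batchFile batch_fastq.csv \
    -config conf/params.config \
    -profile singularity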

pVACseq is deprecated and it is now called pVACtools

The pipeline ran fine all the way until step number 12, where I got this error:

Do you think it's because pVACseq is deprecated?

icbi/nextNEOpi execution completed unsuccessfully! The exit status of the task that caused the workflow execution to fail was: 1.

The full error message was:

Error executing process > 'pVACseq (sample2)'

Caused by:
  Process `pVACseq (sample2)` terminated with an error exit status (1)

Command executed:

  pvacseq run \
      --iedb-install-directory /opt/iedb \
      -t 10 \
      -p sample2_vep_phased.vcf.gz \
      -e1 8,9,10,11 \
      -e2 15,16,17,18,19,20,21,22,23,24,25 \
      --normal-sample-name sample2_normal \
      --tumor-purity 0.68 \
      \
      --netmhc-stab \
      --binding-threshold 500 --top-score-metric median --minimum-fold-change 0.0 --normal-cov 5 --tdna-cov 10 --trna-cov 10 --normal-vaf 0.02 --tdna-vaf 0.25 --trna-vaf 0.25 --expn-val 1 --maximum-transcript-support-level 1 \
      sample2_vep_somatic_gx.vcf.gz sample2_tumor HLA-B*27:02 NetMHCpan MHCflurry NetMHCIIpan ./

Unable to parse config file: '/a/b/nextNEOpi/conf/params.config'

I have installed nextNEOpi latest version from github and all its requirements. I am using Ubuntu 18.04.

  1. nextflow -version
    N E X T F L O W
    version 23.04.3 build 5875
    created 11-08-2023 18:37 UTC (21:37 ADT)
    cite doi:10.1038/nbt.3820
    http://nextflow.io
  2. java -version
    openjdk version "11.0.19" 2023-04-18
    OpenJDK Runtime Environment (build 11.0.19+7-post-Ubuntu-0ubuntu118.04.1)
    OpenJDK 64-Bit Server VM (build 11.0.19+7-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
  3. singularity --version
    2.4.2-dist

The command I run is as below:
NXF_VER=22.10.8 nextflow run ./nextNEOpi.nf \
    --batchFile /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/Data/NRC21R-082-03/Analysis/nextNEOpi/demo/batch_fastq.csv \
    -config conf/params.config \
    --outputDir /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/Data/NRC21R-082-03/Analysis/nextNEOpi/demo/testresult \
    --trim_adapters true \
    --trim_adapters_RNAseq true \
    --use_NetChop false \
    -profile singularity \
    -resume

It gives following error in the log file:

Sep-11 12:37:13.244 [main] DEBUG nextflow.cli.Launcher - $> nextflow run ./nextNEOpi.nf --batchFile /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/Data/NRC21R-082-03/Analysis/nextNEOpi/demo/batch_fastq.csv -config conf/params.config --outputDir /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/Data/NRC21R-082-03/Analysis/nextNEOpi/demo/testresult --trim_adapters true --trim_adapters_RNAseq true --use_NetChop false -profile singularity -resume
Sep-11 12:37:13.329 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.10.8
Sep-11 12:37:13.370 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; embedded=false; plugins-dir=/home/mamoon/.nextflow/plugins; core-plugins: [email protected],[email protected],[email protected],[email protected],[email protected],nf-google@1.4.6,[email protected],[email protected]
Sep-11 12:37:13.384 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Sep-11 12:37:13.386 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Sep-11 12:37:13.390 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
Sep-11 12:37:13.405 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Sep-11 12:37:13.430 [main] DEBUG nextflow.config.ConfigBuilder - User config file: /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/software/nextNEOpi/conf/params.config
Sep-11 12:37:13.431 [main] DEBUG nextflow.config.ConfigBuilder - Parsing config file: /media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/software/nextNEOpi/conf/params.config
Sep-11 12:37:13.464 [main] DEBUG nextflow.config.ConfigBuilder - Applying config profile: singularity
Sep-11 12:37:14.805 [main] ERROR nextflow.cli.Launcher - Unable to parse config file: '/media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/software/nextNEOpi/conf/params.config'

No signature of method: groovy.util.ConfigObject.div() is applicable for argument types: (Integer) values: [30806115]
Possible solutions: min(groovy.lang.Closure), is(java.lang.Object), drop(int), wait(), size(), any()

groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigObject.div() is applicable for argument types: (Integer) values: [30806115]
Possible solutions: min(groovy.lang.Closure), is(java.lang.Object), drop(int), wait(), size(), any()
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70)

Please let me know any suggestions to solve this issue.
Thanks a lot
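One frequent cause of this particular Groovy error is an unquoted path in an edited config file. The sketch below only illustrates the safe, quoted form; the parameter shown is an example, not necessarily the line that is actually wrong in this setup:

// any path-valued setting edited into conf/params.config (or a custom -c file) must be a
// quoted Groovy string; left unquoted, /media/mamoon/30806115-9b80-... gets parsed as an
// arithmetic expression, which matches the ConfigObject.div() message above
params {
    outputDir = "/media/mamoon/30806115-9b80-48b5-a937-40a7b4922f67/Data/NRC21R-082-03/Analysis/nextNEOpi/demo/testresult"
}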

CNNscoreVariant terminated with an error exit status (2)

I have installed nextNEOpi latest version from github and all its requirements. I am using Ubuntu 18.04.

  1. nextflow -version
    N E X T F L O W
    version 23.04.3 build 5875
    created 11-08-2023 18:37 UTC (21:37 ADT)
    cite doi:10.1038/nbt.3820
    http://nextflow.io/
  2. java -version
    openjdk version "11.0.19" 2023-04-18
    OpenJDK Runtime Environment (build 11.0.19+7-post-Ubuntu-0ubuntu118.04.1)
    OpenJDK 64-Bit Server VM (build 11.0.19+7-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
  3. singularity --version
    singularity-ce version 3.11.3

I got the error below, which I think is related to TensorFlow. Please help me troubleshoot.

Error executing process > 'CNNScoreVariants (test)'

Caused by:
Process `CNNScoreVariants (test)` terminated with an error exit status (2)

Command executed:

mkdir -p /tmp/mamoon/nextNEOpi

gatk CNNScoreVariants \
    --tmp-dir /tmp/mamoon/nextNEOpi \
    -R GRCh38.d1.vd1.fa \
    -I test_normal_DNA_recalibrated.bam \
    -V test_germline_0007-scattered.interval_list.vcf.gz \
    -tensor-type read_tensor \
    --inter-op-threads 2 \
    --intra-op-threads 2 \
    --transfer-batch-size 256 \
    --inference-batch-size 128 \
    -O test_germline_0007-scattered.interval_list.vcf_CNNScored.vcf.gz

Command exit status:
2

Command output:
(empty)

Command error:
10:22:37.702 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/opt/gatk/gatk-package-4.4.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so
10:22:37.832 INFO CNNScoreVariants - ------------------------------------------------------------
10:22:37.874 INFO CNNScoreVariants - The Genome Analysis Toolkit (GATK) v4.4.0.0
10:22:37.874 INFO CNNScoreVariants - For support and documentation go to https://software.broadinstitute.org/gatk/
10:22:37.875 INFO CNNScoreVariants - Executing as mambauser@mamoon-T7500 on Linux v5.4.0-105-generic amd64
10:22:37.875 INFO CNNScoreVariants - Java runtime: OpenJDK 64-Bit Server VM v17.0.7+7-Debian-1deb11u1
10:22:37.875 INFO CNNScoreVariants - Start Date/Time: September 18, 2023 at 10:22:37 AM UTC
10:22:37.875 INFO CNNScoreVariants - ------------------------------------------------------------
10:22:37.875 INFO CNNScoreVariants - ------------------------------------------------------------
10:22:37.876 INFO CNNScoreVariants - HTSJDK Version: 3.0.5
10:22:37.877 INFO CNNScoreVariants - Picard Version: 3.0.0
10:22:37.877 INFO CNNScoreVariants - Built for Spark Version: 3.3.1
10:22:37.877 INFO CNNScoreVariants - HTSJDK Defaults.COMPRESSION_LEVEL : 2
10:22:37.877 INFO CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
10:22:37.878 INFO CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
10:22:37.878 INFO CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
10:22:37.878 INFO CNNScoreVariants - Deflater: IntelDeflater
10:22:37.878 INFO CNNScoreVariants - Inflater: IntelInflater
10:22:37.879 INFO CNNScoreVariants - GCS max retries/reopens: 20
10:22:37.879 INFO CNNScoreVariants - Requester pays: disabled
10:22:37.879 INFO CNNScoreVariants - Initializing engine
10:22:38.427 INFO FeatureManager - Using codec VCFCodec to read file file://test_germline_0007-scattered.interval_list.vcf.gz
10:22:38.444 WARN IntelInflater - Zero Bytes Written : 0
10:22:38.496 WARN IntelInflater - Zero Bytes Written : 0
10:22:38.565 INFO CNNScoreVariants - Done initializing engine
10:22:38.566 INFO NativeLibraryLoader - Loading libgkl_utils.so from jar:file:/opt/gatk/gatk-package-4.4.0.0-local.jar!/com/intel/gkl/native/libgkl_utils.so
10:22:38.646 INFO CNNScoreVariants - Done scoring variants with CNN.
10:22:38.646 INFO CNNScoreVariants - Shutting down engine
[September 18, 2023 at 10:22:38 AM UTC] org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants done. Elapsed time: 0.02 minutes.
Runtime.totalMemory()=260046848


A USER ERROR has occurred: This tool requires AVX instruction set support by default due to its dependency on recent versions of the TensorFlow library.
If you have an older (pre-1.6) version of TensorFlow installed that does not require AVX you may attempt to re-run the tool with the disable-avx-check argument to bypass this check.
Note that such configurations are not officially supported.


Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
Using GATK jar /opt/gatk/gatk-package-4.4.0.0-local.jar
Running:
java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /opt/gatk/gatk-package-4.4.0.0-local.jar CNNScoreVariants --tmp-dir /tmp/mamoon/nextNEOpi -R GRCh38.d1.vd1.fa -I test_normal_DNA_recalibrated.bam -V test_germline_0007-scattered.interval_list.vcf.gz -tensor-type read_tensor --inter-op-threads 2 --intra-op-threads 2 --transfer-batch-size 256 --inference-batch-size 128 -O test_germline_0007-scattered.interval_list.vcf_CNNScored.vcf.gz

Thank you for your effort
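A quick way to confirm the root cause named in the error text is to check whether the machine's CPU exposes AVX at all; the workaround flag mentioned in the comment is the one the GATK message itself suggests and describes as unsupported:

# check whether the host CPU advertises the AVX instruction set required by the
# TensorFlow build used by GATK CNNScoreVariants
grep -qw avx /proc/cpuinfo && echo "AVX available" || echo "AVX not available"
# if AVX is genuinely missing, the GATK message points to --disable-avx-check as an
# unsupported last-resort bypass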

error message when run a test case

Dear NextNEOpi authors:

I followed the README, installed Nextflow version 22.10.8, set up the references in a custom location, and modified resourcesBaseDir in conf/params.config.

Tried the following command to run a test case

$ nextflow run nextNEOpi.nf --batchFile testdata_batchFile_FASTQ.csv --CNVkit false -profile singularity -config conf/params.config --accept_license --TCR false

at the end of the output, I am seeing:


Execution cancelled -- Finishing pending tasks before exit
[icbi/nextNEOpi] Pipeline Complete! You can find your results in /rsrch3/home/itops/ryao/nextNEOpi/results
Error executing process > 'install_IEDB (Install IEDB)'

Caused by:
Process `install_IEDB (Install IEDB)` terminated with an error exit status (3)

Command executed:

export TMPDIR=/tmp/ryao/nextNEOpi/

CWD=`pwd`
cd /opt/iedb/
rm -f IEDB_MHC_I-3.1.4.tar.gz
wget https://downloads.iedb.org/tools/mhci/3.1.4/IEDB_MHC_I-3.1.4.tar.gz
tar -xzvf IEDB_MHC_I-3.1.4.tar.gz
cd mhc_i
bash -c "./configure"
cd /opt/iedb/
rm -f IEDB_MHC_I-3.1.4.tar.gz

rm -f IEDB_MHC_II-3.1.8.tar.gz
wget https://downloads.iedb.org/tools/mhcii/3.1.8/IEDB_MHC_II-3.1.8.tar.gz
tar -xzvf IEDB_MHC_II-3.1.8.tar.gz

# ATTENTION: IEDB_MHC_II-3.1.8.tar.gz "python configure.py"
# returns an assertion error in the unittest needs
# to be fixed, skip unittests for now

cd mhc_ii

bash -c "python ./configure.py"

cd /opt/iedb/
rm IEDB_MHC_II-3.1.8.tar.gz

export MHCFLURRY_DATA_DIR=/opt/mhcflurry_data
mhcflurry-downloads fetch

cd $CWD
echo "OK" > .iedb_install_ok.chck

Command exit status:
3

Command output:
(empty)

Command error:
--2023-09-28 17:01:10-- https://downloads.iedb.org/tools/mhci/3.1.4/IEDB_MHC_I-3.1.4.tar.gz
Resolving downloads.iedb.org (downloads.iedb.org)... 8.37.117.143
Connecting to downloads.iedb.org (downloads.iedb.org)|8.37.117.143|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 345348484 (329M) [application/x-gzip]
IEDB_MHC_I-3.1.4.tar.gz: Permission denied

Cannot write to ‘IEDB_MHC_I-3.1.4.tar.gz’ (Permission denied).

Work dir:
/rsrch3/home/itops/ryao/nextNEOpi/work/6b/e0e98264330d8bc14d89ee7c13443f

Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run

--------- end of the command output

Would you please provide some help to understand this?

Thank you very much,
Rong Yao
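The "Permission denied" while writing into /opt/iedb usually points at the host directory that is bind-mounted there (the Singularity runOptions quoted further down this page bind params.databases.IEDB_dir to /opt/iedb). A minimal check; the path below is only an assumption for this installation:

# /opt/iedb inside the container is a bind mount of the host-side IEDB directory,
# so that directory must exist and be writable by the user running the pipeline
IEDB_DIR=/rsrch3/home/itops/ryao/nextNEOpi/resources/databases/iedb   # assumed path
mkdir -p "$IEDB_DIR"
[ -w "$IEDB_DIR" ] && echo "writable" || echo "not writable -- fix ownership or permissions"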

Error at install_IEDB process

Hi there,

I’ve been attempting to run nextNEOpi on some test data for a while now but have been running into various issues. I am trying to run it both on an HPC and on a cloud service, but at the moment I am having problems with the install_IEDB process. Installation of MHCII seems to be failing in both cases; I have attached the output from both scenarios.

nextneopi_cloud.txt
nextneopi_hpc.txt

The errors are quite similar, but on the HPC, there also seems to be a permission error. I get a similar error when I try to run it with conda (instead of singularity) on the HPC.

Some things I have tried are: installing tcsh and perl-ENV (as per https://downloads.iedb.org/tools/mhcii/3.1.6/README), using different versions of python (3.6,3.8), installing an older version of mhcii (3.1.5). Any help would be appreciated, and let me know if you need more information.

Operating system: CentOS7

No such file [...]/references/hg38/gdc/GRCh38.d1.vd1/fasta/hg38.len

Dear nextNEOpi maintainers,

I am trying to get nextNEOpi running for a minimal run, without experience in the field. I could not find example data (#33) or any testing scripts, so all I am left to do is post an issue :-)

In this GitHub repository I try to set up a minimal example (including a GitHub Actions script); a small shell sketch that reproduces these toy inputs is shown after the list below:

sampleName,reads1,reads2,sampleType,HLAfile,sex
sample1,reads_s1_t_1.fastq.gz,reads_s1_t_2.fastq.gz,tumor_DNA,,female
sample1,reads_s1_n_1.fastq.gz,reads_s1_n_2.fastq.gz,normal_DNA,,female
sample1,reads_s1_r_1.fastq.gz,reads_s1_r_2.fastq.gz,tumor_RNA,,female
@SEQ_ID
GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
+
!''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65
  • reads_s1_n_2.fastq.gz: same as reads_s1_n_1.fastq.gz
  • reads_s1_r_1.fastq.gz: same as reads_s1_n_1.fastq.gz
  • reads_s1_r_2.fastq.gz: same as reads_s1_n_1.fastq.gz
  • reads_s1_t_1.fastq.gz: same as reads_s1_n_1.fastq.gz
  • reads_s1_t_2.fastq.gz: same as reads_s1_n_1.fastq.gz
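The shell sketch referred to above, reproducing these toy inputs (the single FASTQ record shown, gzipped and copied to all six read files):

cat > reads_s1_n_1.fastq <<'EOF'
@SEQ_ID
GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
+
!''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65
EOF
gzip -c reads_s1_n_1.fastq > reads_s1_n_1.fastq.gz
for f in reads_s1_n_2 reads_s1_r_1 reads_s1_r_2 reads_s1_t_1 reads_s1_t_2; do
    cp reads_s1_n_1.fastq.gz "${f}.fastq.gz"
done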

Using GitHub Actions I get this error:

No such file: /home/runner/work/nextNEOpi_example_1/nextNEOpi_example_1/nextNEOpi/resources/references/hg38/gdc/GRCh38.d1.vd1/fasta/hg38.len

I understand that the name of this file is determined in conf/resources.config.

But where do I get that data?

I do understand it is a human reference genome, however, [path]/hg38/gdc/GRCh38.d1.vd1/fasta/hg38.len confuses me.

Is there a URL where I can wget it all?

Thanks and cheers, Richel Bilderbeek
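A possible stop-gap, assuming hg38.len is a plain two-column <contig><TAB><length> table (the chromosome-length format tools such as Control-FREEC consume): it can be derived from the index of the reference FASTA used elsewhere in the pipeline, e.g.:

# derive a <contig><TAB><length> table from the reference FASTA index
samtools faidx GRCh38.d1.vd1.fa
cut -f1,2 GRCh38.d1.vd1.fa.fai > hg38.len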

Cross-species compatibility

Hello, I am attempting to use nextNEOpi for my honours project analyzing some mouse data. Is it possible to use nextNEOpi on data from other species? If yes, what sort of parameters need to be changed? At this stage the error seems to be coming from GATK not being compatible with the mouse data. Any help appreciated, Sarah.

Problem with CNNScoreVariants

I just tried to run the workflow on a WGS dataset and got an error message from two of the CNNScore tasks (out of 40 total). I tried to verify the problem by running the failed task directly, both with the singularity image from nextNEOpi and with a local docker image of GATK, but both of them finished without any errors.

Any idea what might be happening here?
Error message:

Error executing process > 'CNNScoreVariants (P01)'

Caused by:
  Process `CNNScoreVariants (P01)` terminated with an error exit status (3)

Command executed:

  mkdir -p /gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/tmp

  gatk CNNScoreVariants \
      --tmp-dir /gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/tmp \
      -R GRCh38.d1.vd1.fa \
      -I P01_normal_DNA_recalibrated.bam \
      -V P01_germline_0013-scattered.interval_list.vcf.gz \
      -tensor-type read_tensor \
      --inter-op-threads 2 \
      --intra-op-threads 2 \
      --transfer-batch-size 256 \
      --inference-batch-size 128 \
      -O P01_germline_0013-scattered.interval_list.vcf_CNNScored.vcf.gz

Command exit status:
  3

Command output:
  (empty)

Command error:
          ... 11 more
  Caused by: org.broadinstitute.hellbender.exceptions.GATKException: Expected message of length 3 but only found 0 bytes
          at org.broadinstitute.hellbender.utils.runtime.StreamingProcessController.getBytesFromStream(StreamingProcessController.java:267)
          at org.broadinstitute.hellbender.utils.runtime.StreamingPro 2.27.1
  03:39:59.647 INFO  CNNScoreVariants - Built for Spark Version: 2.4.5
  03:39:59.647 INFO  CNNScoreVariants - HTSJDK Defaults.COMPRESSION_LEVEL : 2
  03:39:59.648 INFO  CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
  03:39:59.648 INFO  CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
  03:39:59.648 INFO  CNNScoreVariants - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
  03:39:59.648 INFO  CNNScoreVariants - Deflater: IntelDeflater
  03:39:59.649 INFO  CNNScoreVariants - Inflater: IntelInflater
  03:39:59.649 INFO  CNNScoreVariants - GCS max retries/reopens: 20
  03:39:59.649 INFO  CNNScoreVariants - Requester pays: disabled
   03:39:59.649 INFO  CNNScoreVariants - Initializing engine
  03:40:03.900 INFO  FeatureManager - Using codec VCFCodec to read file file://P01_germline_0013-scattered.interval_list.vcf.gz
  03:40:05.154 INFO  CNNScoreVariants - Done initializing engine
  03:40:05.157 INFO  NativeLibraryLoader - Loading libgkl_utils.so from jar:file:/opt/conda/share/gatk4-4.2.6.1-1/gatk-package-4.2.6.1-local.jar!/com/intel/gkl/native/libgkl_utils.so
  03:40:14.233 INFO  CNNScoreVariants - Using key:CNN_2D for CNN architecture:/gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/tmp/small_2d.16228056869151426233.json and weights:/gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/tmp/small_2d.5079293164478269062.hd5
  03:40:14.705 INFO  CNNScoreVariants - Done scoring variants with CNN.
  03:40:14.706 INFO  CNNScoreVariants - Shutting down engine
  [January 26, 2023 at 3:40:14 AM UTC] org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants done. Elapsed time: 0.29 minutes.
  Runtime.totalMemory()=3812622336
  org.broadinstitute.hellbender.exceptions.GATKException: Exception waiting for ack from Python: org.broadinstitute.hellbender.exceptions.GATKException: Expected message of length 3 but only found 0 bytes
        at org.broadinstitute.hellbender.utils.runtime.StreamingProcessController.waitForAck(StreamingProcessController.java:239)
        at org.broadinstitute.hellbender.utils.python.StreamingPythonScriptExecutor.waitForAck(StreamingPythonScriptExecutor.java:216)
        at org.broadinstitute.hellbender.utils.python.StreamingPythonScriptExecutor.sendSynchronousCommand(StreamingPythonScriptExecutor.java:183)
        at org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants.initializePythonArgsAndModel(CNNScoreVariants.java:557)
        at org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants.onTraversalStart(CNNScoreVariants.java:317)
        at org.broadinstitute.hellbender.engine.GATKTool.doWork(GATKTool.java:1083)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:140)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:192)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:211)
        at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:160)
        at org.broadinstitute.hellbender.Main.mainEntry(Main.java:203)
        at org.broadinstitute.hellbender.Main.main(Main.java:289)
  Caused by: java.util.concurrent.ExecutionException: org.broadinstitute.hellbender.exceptions.GATKException: Expected message of length 3 but only found 0 bytes
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at org.broadinstitute.hellbender.utils.runtime.StreamingProcessController.waitForAck(StreamingProcessController.java:234)
        ... 11 more
  Caused by: org.broadinstitute.hellbender.exceptions.GATKException: Expected message of length 3 but only found 0 bytes
        at org.broadinstitute.hellbender.utils.runtime.StreamingProcessController.getBytesFromStream(StreamingProcessController.java:267)
        at org.broadinstitute.hellbender.utils.runtime.StreamingProcessController.lambda$waitForAck$0(StreamingProcessController.java:214)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
  Using GATK jar /opt/conda/share/gatk4-4.2.6.1-1/gatk-package-4.2.6.1-local.jar
  Running:
      java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /opt/conda/share/gatk4-4.2.6.1-1/gatk-package-4.2.6.1-local.jar CNNScoreVariants --tmp-dir /gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/tmp -R GRCh38.d1.vd1.fa -I P01_normal_DNA_recalibrated.bam -V P01_germline_0013-scattered.interval_list.vcf.gz -tensor-type read_tensor --inter-op-threads 2 --intra-op-threads 2 --transfer-batch-size 256 --inference-batch-size 128 -O P01_germline_0013-scattered.interval_list.vcf_CNNScored.vcf.gz

Work dir:
  /gnome/harald/2022/neoantigens/analysis_results/nextneopi_WGS_hg38_nextflow/41/58b884e0a616d7e02cc9dd1a5d0d27

Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line

Also, when I tried to rerun the whole job (with -resume and without making any changes to any of the inputs), the process started at the beginning of the DNA alignment step. Since the log indicated that the Mutect2 step was finished, I assumed it would be able to use the bam files from the previous run?

-Harald

Random error with NetMHCII

Hi,
I would like to ask for help with the consequences of swapping containers for pVACtools. But first I want to explain why I'm doing this.

  1. I was running predictions with your original containers, but I was getting an error for MHC class II peptides while predicting binding affinity using NetMHCIIpan:
Making binding predictions on Allele DRB1*13:01 and Epitope Length 23 with Method NetMHCIIpan - File MHC_Class_II/tmp/patient1_tumor.NetMHCIIpan.DRB1*13:01.23.tsv_201-400 - Completed
 Making binding predictions on Allele DRB1*13:01 and Epitope Length 24 with Method NetMHCIIpan - File MHC_Class_II/tmp/patient1_tumor.NetMHCIIpan.DRB1*13:01.24.tsv_1-200 - Completed

Command error:
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1110.
 Use of uninitialized value $Score in numeric lt (<) at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1123.
 Use of uninitialized value $output_allele in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1241.
 Use of uninitialized value $reliability in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1241.
 Use of uninitialized value $score_BA in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1241.
 Use of uninitialized value $scores[10] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1287, <IN> line 15.
 Use of uninitialized value $scores[11] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1288, <IN> line 15.
 Use of uninitialized value $scores[12] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.0-executable/netmhciipan_4_0_executable/NetMHCIIpan-4.0.pl line 1289, <IN> line 15.
 list index out of range
 list index out of range
 CRITICAL:pymp:An exception occured in thread 7: (<class 'subprocess.CalledProcessError'>, Command '['/opt/conda/bin/python', '/opt/iedb/mhc_ii/mhc_II_binding.py', 'NetMHCIIpan', 'DRB1*13:01', 'MHC_Class_II/tmp/patient1_tumor.25.fa.split_401-600', '25']' returned non-zero exit status 1.).
 Traceback (most recent call last):
   File "/opt/conda/bin/pvacseq", line 8, in <module>
     sys.exit(main())
   File "/opt/conda/lib/python3.8/site-packages/pvactools/tools/pvacseq/main.py", line 116, in main
     args[0].func.main(args[1])
   File "/opt/conda/lib/python3.8/site-packages/pvactools/tools/pvacseq/run.py", line 158, in main
     pipeline.execute()
   File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/pipeline.py", line 473, in execute
     self.call_iedb(chunks)
   File "/opt/conda/lib/python3.8/site-packages/pvactools/lib/pipeline.py", line 380, in call_iedb
     p.print("Making binding predictions on Allele %s and Epitope Length %s with Method %s - File %s - Completed" % (a, epl, method, filename))
   File "/opt/conda/lib/python3.8/site-packages/pymp/__init__.py", line 148, in __exit__
     raise exc_t(exc_val)
 TypeError: __init__() missing 1 required positional argument: 'cmd'

I did some research and did not see this error reported by anyone else, and I could not reproduce it consistently! I thought maybe it is specific to some of the nodes on the HPC, or maybe it depends on the order of the predictions I made. Either way, before investigating further I thought I'd try newer versions of the mhc_i and mhc_ii tools.
2) I renamed the nextNEOpi_1.3_resources/databases/iedb folder and replaced the URLs in params.config with the following:

  IEDB_MHCI_url  = "https://downloads.iedb.org/tools/mhci/3.1.4/IEDB_MHC_I-3.1.4.tar.gz"
  IEDB_MHCII_url = "https://downloads.iedb.org/tools/mhcii/3.1.8/IEDB_MHC_II-3.1.8.tar.gz"

This way I thought I had tricked the pipeline into re-installing them. The new MHCI was installed fine but MHCII wasn't. The process downloaded the tar.gz file and unpacked it, but then while running configure.py the pipeline threw the following error:

/opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9060 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:28: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9090 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:18: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
../opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9153 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:33: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9200 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:38: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9246 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:13: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen('echo example').read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9247 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:23: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
F/opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9294 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:43: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/conda/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 9341 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./configure.py:48: ResourceWarning: unclosed file <_io.TextIOWrapper name=3 encoding='UTF-8'>
  result=os.popen(cmd).read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
.
======================================================================
FAIL: test_recommended (__main__.Test_test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./configure.py", line 24, in test_recommended
    self.assertEqual(result, 'allele\tseq_num\tstart\tend\tlength\tmethod\tpeptide\tconsensus_percentile_rank\tadjusted_consensus_percentile_rank\tcomblib_core\tcomblib_score\tcomblib_rank\tadjusted_comblib_rank\tsmm_align_core\tsmm_align_ic50\tsmm_align_rank\tadjusted_smm_align_rank\tnn_align_core\tnn_align_ic50\tnn_align_rank\tadjusted_nn_align_rank\tnetmhciipan_core\tnetmhciipan_ic50\tnetmhciipan_rank\tadjusted_netmhciipan_rank\tsturniolo_core\tsturniolo_score\tsturniolo_rank\tadjusted_sturniolo_rank\nHLA-DRB1*03:01\t1\t8\t22\t15\tSMM-NN-Sturniolo\tEGVSGATWVDLVLEG\t59.0\t59.00\t-\t-\t-\t-\tVSGATWVDL\t5932.0\t59.0\t59.00\tTWVDLVLEG\t3356.9\t50.0\t50.00\t-\t-\t-\t-\tVSGATWVDL\t-1.34\t88.0\t88.00\nHLA-DRB1*03:01\t1\t4\t18\t15\tSMM-NN-Sturniolo\tRDFLEGVSGATWVDL\t85.0\t85.00\t-\t-\t-\t-\tFLEGVSGAT\t8523.0\t69.0\t69.00\tVSGATWVDL\t12707.7\t85.0\t85.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t5\t19\t15\tSMM-NN-Sturniolo\tDFLEGVSGATWVDLV\t85.0\t85.00\t-\t-\t-\t-\tFLEGVSGAT\t18841.0\t85.0\t85.00\tVSGATWVDL\t10634.4\t81.0\t81.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t6\t20\t15\tSMM-NN-Sturniolo\tFLEGVSGATWVDLVL\t85.0\t85.00\t-\t-\t-\t-\tFLEGVSGAT\t18363.0\t85.0\t85.00\tVSGATWVDL\t8154.8\t74.0\t74.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t1\t15\t15\tSMM-NN-Sturniolo\tMSNRDFLEGVSGATW\t87.0\t87.00\t-\t-\t-\t-\tFLEGVSGAT\t9343.0\t71.0\t71.00\tDFLEGVSGA\t17372.9\t92.0\t92.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t2\t16\t15\tSMM-NN-Sturniolo\tSNRDFLEGVSGATWV\t87.0\t87.00\t-\t-\t-\t-\tFLEGVSGAT\t8986.0\t70.0\t70.00\tDFLEGVSGA\t18531.9\t93.0\t93.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t3\t17\t15\tSMM-NN-Sturniolo\tNRDFLEGVSGATWVD\t87.0\t87.00\t-\t-\t-\t-\tFLEGVSGAT\t9085.0\t70.0\t70.00\tDFLEGVSGA\t18458.2\t93.0\t93.00\t-\t-\t-\t-\tFLEGVSGAT\t-1.2\t87.0\t87.00\nHLA-DRB1*03:01\t1\t7\t21\t15\tSMM-NN-Sturniolo\tLEGVSGATWVDLVLE\t88.0\t88.00\t-\t-\t-\t-\tSGATWVDLV\t26621.0\t91.0\t91.00\tVSGATWVDL\t5473.3\t63.0\t63.00\t-\t-\t-\t-\tVSGATWVDL\t-1.34\t88.0\t88.00\n')
AssertionError: 'alle[28 chars]gth\tcore_peptide\tpeptide\tscore\tpercentile_[594 chars].0\n' != 'alle[28 chars]gth\tmethod\tpeptide\tconsensus_percentile_ran[1976 chars]00\n'
Diff is 2429 characters long. Set self.maxDiff to None to see it.

----------------------------------------------------------------------
Ran 9 tests in 5.695s

FAILED (failures=1)

  3. I tried to install the MHCII tools manually, but I ran into problems installing some Perl modules. I skipped this because I don't really like troubleshooting Perl problems. Instead, I reverted the IEDB installation folder and decided to use a newer pVACtools container that ships them.
  4. So I pulled the container from https://hub.docker.com/r/griffithlab/pvactools/tags and changed the Singularity run options, i.e. I removed the binding of the IEDB and MHCflurry folders. I changed this:

    runOptions = "--no-home --containall" + " -H " + params.singularityTmpMount + " -B " + params.singularityAssetsMount + " -B " + params.singularityTmpMount + " -B " + params.resourcesBaseDir + params.singularityHLAHDmount + " -B " + params.databases.IEDB_dir + ":/opt/iedb" + " -B " + params.databases.MHCFLURRY_dir + ":/opt/mhcflurry_data"

    into this:

    runOptions = "--no-home --containall" + " -H " + params.singularityTmpMount + " -B " + params.singularityAssetsMount + " -B " + params.singularityTmpMount + " -B " + params.resourcesBaseDir + params.singularityHLAHDmount

    And I changed the container in process.config:

    withName:pVACseq {
        container = '/scratch/project_mnt/S0091/mantczak/.nextflow/NXF_SINGULARITY_CACHEDIR/pvactools.sif'
        cpus = 10
        memory = '64G'
    }

This didn't help either though. I'm still getting a similar error:

  Making binding predictions on Allele DRB3*02:02 and Epitope Length 24 with Method NetMHCIIpan - File MHC_Class_II/tmp/patient1_tumor.NetMHCIIpan.DRB3*02:02.24.tsv_1-200 - Completed

Command error:
  Use of uninitialized value $Score in numeric ge (>=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1149.
  [the line above appears 22 times in the log]
  Use of uninitialized value $Score in numeric lt (<) at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1162.
  Use of uninitialized value $output_allele in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1288.
  Use of uninitialized value $peptide in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1288.
  Use of uninitialized value $reliability in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1288.
  Use of uninitialized value $score_BA in concatenation (.) or string at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1288.
  Use of uninitialized value $lengthRef in numeric ne (!=) at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1213, <IN> line 17.
  Argument "<=WB" isn't numeric in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1328, <IN> line 20.
  Use of uninitialized value $scores[10] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1334, <IN> line 20.
  Use of uninitialized value $scores[11] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1335, <IN> line 20.
  Use of uninitialized value $scores[12] in sprintf at /opt/iedb/mhc_ii/methods/netmhciipan-4.1-executable/netmhciipan_4_1_executable/NetMHCIIpan-4.1.pl line 1336, <IN> line 20.
  list index out of range
  CRITICAL:pymp:An exception occured in thread 2: (<class 'subprocess.CalledProcessError'>, Command '['/usr/local/bin/python', '/opt/iedb/mhc_ii/mhc_II_binding.py', 'NetMHCIIpan', 'DRB3*02:02', 'MHC_Class_II/tmp/patient1_tumor.25.fa.split_1-200', '25']' returned non-zero exit status 1.).
  Traceback (most recent call last):
    File "/usr/local/bin/pvacseq", line 8, in <module>
      sys.exit(main())
               ^^^^^^
    File "/usr/local/lib/python3.11/site-packages/pvactools/tools/pvacseq/main.py", line 123, in main
      args[0].func.main(args[1])
    File "/usr/local/lib/python3.11/site-packages/pvactools/tools/pvacseq/run.py", line 165, in main
      pipeline.execute()
    File "/usr/local/lib/python3.11/site-packages/pvactools/lib/pipeline.py", line 451, in execute
      self.call_iedb(chunks)
    File "/usr/local/lib/python3.11/site-packages/pvactools/lib/pipeline.py", line 348, in call_iedb
      with pymp.Parallel(self.n_threads) as p:
    File "/usr/local/lib/python3.11/site-packages/pymp/__init__.py", line 148, in __exit__
      raise exc_t(exc_val)
            ^^^^^^^^^^^^^^
  TypeError: CalledProcessError.__init__() missing 1 required positional argument: 'cmd'

I'll also reach out to the developers of NetMHCIIpan, but maybe you have some recommendations on how to work around this?
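
One thing I might try in the meantime is dropping the very long class II peptide lengths, since the failing calls are for lengths 24 and 25, either through pVACseq's --class-ii-epitope-length option or, if the pipeline exposes it, the corresponding parameter. I'm only guessing at the parameter name in the sketch below, so please correct me if it is called something else:

    // guess only: restrict the class II epitope lengths in params.config (or via a -c override);
    // the actual parameter name and value format in nextNEOpi may differ
    params {
        mhcii_epitope_len = "15,16,17,18"   // leave out the lengths >= 24 that trip the NetMHCIIpan-4.1 wrapper here
    }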
Best wishes,
Magda

Error executing process with NeoFuse

Hi,

I came across your pipeline not long ago and think it would be great for my research, so I am really keen to get it running.

I set the pipeline up without providing the HLA-HD file, as I first wanted to make sure I could get the pipeline running to predict MHC class I neoepitopes. The only thing I changed was in the process.config file, where I set the CPU usage for all processes to 40.
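
For reference, the change amounted to something like this (a simplified sketch; in practice I edited the cpus values inside the individual withName blocks in process.config):

    process {
        // sketch only: a global default of 40 CPUs; the shipped process.config
        // instead sets cpus per process inside withName blocks
        cpus = 40
    }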

I have attached the HTML report detailing the error that stopped the pipeline at the NeoFuse stage. I also had a look at the "sample1_MHCI_final.log" file referenced in the error, and at the ".command.sh" script in the working directory given in the attached PDF. The first line of the error was cut off when I converted the report from HTML to PDF, but it stated "Error executing process > 'Neofuse (sample1)'".

I was happy that it was running smoothly and that the preprocessing steps had completed successfully; however, an error then arose. I am not sure how to resolve it and would really appreciate any help.

Thank you in advance!

Nextflow Workflow Report.pdf
sample1_MHCI_final.log
