Comments (3)
This is because I've set a limit on how many records a single minimap2 run can report, currently set to -N 2000000, as seen in the code here.
So when you put in a big batch like 500k sequences, even if every sequence is homologous, you'd get an average of only 4 hits per sequence. This -N limit is necessary to prevent minimap2 from blowing up in targeted Iso-Seq cases while still maintaining a reasonable tradeoff with eventual pre-cluster size.
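The "4 hits per sequence" figure follows directly from dividing the reporting cap by the batch size. A quick sketch of that arithmetic (treating -N as a total per-run reporting budget shared across the batch, as the comment above describes; the values are the ones from this thread):

```python
# Back-of-the-envelope check of the reporting budget described above.
# Treats the -N cap as a total per-run budget shared across the batch,
# as the comment does; values are taken from this thread.
n_cap = 2_000_000     # minimap2 -N setting used by the pre-cluster step
batch_size = 500_000  # sequences fed into a single minimap2 run

avg_hits_per_seq = n_cap / batch_size
print(avg_hits_per_seq)  # 4.0
```

Shrinking the batch while holding -N fixed raises this per-sequence average, which is the tradeoff discussed below.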
The default batch size of 1000000 was selected to work well for the whole-transcriptome (high-diversity) case, though specifically for FL-read input. Your input is supposedly 71k HQ isoforms with redundancy already reduced, so the diversity would be even higher than in the usual case.
I'm curious - did you notice a significant run-time difference between batch sizes of 5000 and 500000?
--Liz
from cdna_cupcake.
Thanks. I see the reasoning for batching in relation to minimap2's -N.
However, for me the number of orphan reads is the real question.
First, why does the number of orphan reads differ between batch sizes?
Second, I can see that orphan reads are determined per batch, meaning that the smaller the batches, the more orphan reads you would expect. However, my 500k batch setting produces far more orphans than the 5k batch setting, which is counter-intuitive.
As a final step, I would add the orphans to the Cogent output, so it makes a huge difference whether I add 7k or 23k orphans to the final output.
- I assume that orphans are like singletons: a gene with only one transcript and no close family gene in the dataset.
Hi @HenrivdGeest ,
My intuition is actually the opposite: the smaller the batch, the fewer orphans, though there is probably a balance.
Because of the -N (reporting limit) in minimap2, when the batch is smaller, each hit will be reported and clustered together, which is why small batches result in fewer orphans.
However, when the batch size is too small, it's also possible that no clusters form at all, because by chance nothing in the batch is a hit.
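That balance can be illustrated with a deliberately simplified toy model (this is not cDNA_Cupcake's actual logic: here reads of the same hypothetical "gene" hit each other, and each batch, i.e. each minimap2 run, gets a fixed total budget of reportable hits):

```python
import random

def simulate_orphans(n_reads, n_genes, batch_size, hit_budget, seed=42):
    """Toy model: a read is orphaned if none of its same-gene hits get
    reported in its batch. Each batch (one simulated minimap2 run) has a
    fixed total budget of reportable hits. Purely illustrative."""
    rng = random.Random(seed)
    genes = [rng.randrange(n_genes) for _ in range(n_reads)]
    orphans = 0
    for start in range(0, n_reads, batch_size):
        batch = genes[start:start + batch_size]
        # Count reads per gene within this batch.
        counts = {}
        for g in batch:
            counts[g] = counts.get(g, 0) + 1
        budget = hit_budget
        reported = set()
        for g, c in counts.items():
            pairs = c * (c - 1) // 2  # candidate same-gene hit pairs
            if pairs and budget >= pairs:
                budget -= pairs
                reported.add(g)  # this gene's hits fit in the budget
        # Reads whose gene's hits were never reported stay orphans.
        orphans += sum(1 for g in batch if g not in reported)
    return orphans
```

In this model both failure modes exist: a tiny batch orphans reads whose same-gene mates landed in other batches, while a huge batch can exhaust the shared reporting budget. Where the crossover sits depends on read diversity, which is why real numbers like the ones below are needed.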
Below is an example of a Cogent input. All input is HQ isoforms: 641,653 in total. It confirms my hypothesis: when the batch size is too high (500k, while -N is set to 2000k, meaning on average only 4 hits are reported per read), the number of orphans is very high. Batch sizes between 50k and 200k don't look very different.
batchsize_50k/preCluster_out.orphans.fasta:17168
batchsize_100k/preCluster_out.orphans.fasta:19309
batchsize_200k/preCluster_out.orphans.fasta:16382
batchsize_500k/preCluster_out.orphans.fasta:71678
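Relative to the 641,653 input isoforms, those counts translate into orphan fractions (quick arithmetic on the numbers above):

```python
# Orphan fraction per batch size, using the counts listed above.
total = 641_653  # HQ isoforms in the input
orphan_counts = {"50k": 17_168, "100k": 19_309, "200k": 16_382, "500k": 71_678}

for size, n in orphan_counts.items():
    print(f"batch {size}: {n / total:.1%} orphans")
# 50k-200k stay around 2.5-3%, while 500k jumps to ~11%
```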