CS/informatics papers usually consist of six main components:
- title
- abstract
- introduction
- method
- experiments
- conclusion
I run through each paper three times:
First round: skim the title, abstract, and conclusion, and look at the important figures and tables in the Methods and Experiments sections. In ten minutes or so this tells you whether the paper suits your research direction.
Second round: once you've confirmed the paper is worth reading, go through the whole thing quickly. You don't need all the details; understand the important figures and tables, know what each part is doing, and circle the relevant literature. If the article is too difficult, read the cited literature first.
Third round: ask what problem the paper tackles, what method it uses to solve it, and how the experiments were done. Then close the article and recall what each section is about.
(Using Semantic Scholar, because of its clean API.)
How to build a citation badge with Semantic Scholar's API:
Concatenate the following three pieces with no spaces (the plus signs are separators, not part of the URL):
`https://img.shields.io/badge/dynamic/json?label=citation&query=citationCount&url=https%3A%2F%2Fapi.semanticscholar.org%2Fgraph%2Fv1%2Fpaper%2F` + the last slash-separated chunk of the paper's URL (its paper ID) + `%3Ffields%3DcitationCount`
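The concatenation recipe above can be sketched as a small helper. This is a minimal sketch: the function name `citation_badge_url` and the example paper ID are my own illustration, not part of the Semantic Scholar API.

```python
from urllib.parse import quote

def citation_badge_url(paper_url: str) -> str:
    """Build a shields.io dynamic-JSON badge URL showing a paper's
    citation count via the Semantic Scholar Graph API."""
    # The paper ID is the last slash-separated chunk of the paper's URL.
    paper_id = paper_url.rstrip("/").rsplit("/", 1)[-1]
    # Graph API endpoint that returns {"citationCount": ...}.
    api = (
        "https://api.semanticscholar.org/graph/v1/paper/"
        f"{paper_id}?fields=citationCount"
    )
    # shields.io expects the target URL percent-encoded inside its own query.
    return (
        "https://img.shields.io/badge/dynamic/json"
        "?label=citation&query=citationCount"
        f"&url={quote(api, safe='')}"
    )
```

Embedding the returned URL in a markdown image (`![citation](...)`) then renders a badge that updates as the citation count changes.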
The reading notes below are grouped by topic:
- What is Diffusion
- Diffusion Demo with code
- Prompting intro
- Contrastive learning and multi-modal
- Method call
Proposed by? | Year | Title | Date | citation |
---|---|---|---|---|
Shan | 2022 | Competence-based Multimodal Curriculum Learning for Medical Report Generation | Sep 06, 2022 | |
Read? | Year | Title | what do i think | citation |
---|---|---|---|---|
✅ | 2012 | AlexNet | First big win for deep learning | |
Read? | Year | Title | what do i think | citation |
---|---|---|---|---|
✅ | 2022 | CheXzero | A limited refinement of CLIP | |
✅ | 2020 | ConVirt | Medical-domain contrastive learning that inspired CLIP | |
✅ | 2020 | ViT | Brought the Transformer into the CV world | |
✅ | 2021 | CLIP | Unsupervised contrastive learning on a huge amount of data, building richer semantics | |
Read? | Year | Title | what do i think | citation |
---|---|---|---|---|
✅ | 2014 | GAN | First GAN |
Read? | Year | Title | what do i think | citation |
---|---|---|---|---|
✅ | 2017 | Transformer | Best for NLP, succeeding MLP, CNN, and RNN | |
✅ | 2022 | Do Prompt-Based Models Really Understand the Meaning of Their Prompts? | Prompt semantics are not what we assume | |
Read? | Year | Title | what do i think | citation |
---|---|---|---|---|
✅ | 2021 | AlphaFold 2 | Atomic-level 3D protein structure prediction | |