
Awesome LLM Reasoning

Curated collection of papers and resources on how to unlock the reasoning ability of LLMs and MLLMs.

🗂️ Table of Contents
  1. 🔤 Language Reasoning
  2. 🧠 Multimodal Reasoning
  3. Other Useful Resources
  4. Other Awesome Lists
  5. Contributing

Also check out Awesome-Controllable-Generation!

🔤 Language Reasoning

Large Language Models have revolutionized the NLP landscape, showing improved performance and sample efficiency over smaller models. However, increasing model size alone has not proved sufficient for high performance on challenging reasoning tasks, such as solving arithmetic or commonsense problems. We present a collection of papers and resources on how to unlock these abilities.
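
To make the core idea concrete before diving into the papers: chain-of-thought (CoT) prompting (Wei et al., 2022, listed under Technique below) simply adds worked-out intermediate reasoning steps to the few-shot exemplars in the prompt. The minimal sketch below illustrates this in plain Python; the exemplar text and the `build_prompt` helper are illustrative only, and the resulting prompt string can be sent to any LLM completion API of your choice.

```python
# Minimal sketch of few-shot chain-of-thought (CoT) prompting.
# The exemplars and the helper are illustrative, not taken from any library.

STANDARD_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n"
)

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_prompt(exemplar: str, question: str) -> str:
    """Prepend a worked exemplar to the new question."""
    return f"{exemplar}\nQ: {question}\nA:"

if __name__ == "__main__":
    question = (
        "The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
        "How many apples do they have?"
    )
    # Standard prompting asks the model for the answer directly; CoT prompting
    # nudges it to generate intermediate steps first, which tends to help on
    # multi-step arithmetic and commonsense problems.
    print(build_prompt(STANDARD_EXEMPLAR, question))
    print(build_prompt(COT_EXEMPLAR, question))
```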

Survey

  1. Reasoning with Language Model Prompting: A Survey. ACL 2023

    Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen. [Paper] [Code], 2022.12

  2. Towards Reasoning in Large Language Models: A Survey. ACL 2023 Findings

    Jie Huang, Kevin Chen-Chuan Chang. [Paper] [Code], 2022.12

  3. Puzzle Solving using Reasoning of Large Language Models: A Survey. Preprint

    Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou. [Paper] [Code], 2024.2

↑ Back to Top ↑

Analysis

  1. Can language models learn from explanations in context? EMNLP 2022

    Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill. [Paper], 2022.4

  2. Emergent Abilities of Large Language Models. TMLR 2022

    Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. [Paper] [Blog], 2022.6

  3. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. ACL 2023 Findings

    Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei. [Paper] [Code], 2022.10

  4. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. ACL 2023

    Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun. [Paper] [Code], 2022.12

  5. On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning. ACL 2023

    Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang. [Paper], 2022.12

  6. Dissociating language and thought in large language models: a cognitive perspective. ICBINB NeurIPS Workshop 2023

    Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko. [Paper], 2023.1

  7. Large Language Models Can Be Easily Distracted by Irrelevant Context. ICML 2023

    Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou. [Paper], 2023.1

  8. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. AACL 2023

    Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung. [Paper], 2023.2

  9. Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. NeurIPS 2023

    Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman. [Paper] [Code], 2023.5

  10. Faith and Fate: Limits of Transformers on Compositionality. NeurIPS 2023

    Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi. [Paper], 2023.5

  11. Measuring Faithfulness in Chain-of-Thought Reasoning. Preprint

    Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez. [Paper], 2023.7

  12. Large Language Models Cannot Self-Correct Reasoning Yet. Preprint

    Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou. [Paper], 2023.10

  13. The Impact of Reasoning Step Length on Large Language Models. Preprint

    Mingyu Jin, Qinkai Yu, Dong Shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du. [Paper], 2024.1

↑ Back to Top ↑

Technique

Reasoning in Large Language Models - An Emergent Ability

  1. Chain of Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou. [Paper] [Blog], 2022.1

  2. Self-consistency improves chain of thought reasoning in language models. ICLR 2023

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou. [Paper], 2022.3

  3. Iteratively Prompt Pre-trained Language Models for Chain of Thought. EMNLP 2022

    Boshi Wang, Xiang Deng, Huan Sun. [Paper] [Code]

  4. Least-to-most prompting enables complex reasoning in large language models. ICLR 2023

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi. [Paper], 2022.5

  5. Large Language Models are Zero-Shot Reasoners. NeurIPS 2022

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. [Paper], 2022.5

  6. Making Large Language Models Better Reasoners with Step-Aware Verifier. ACL 2023

    Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen. [Paper], 2022.6

  7. Large Language Models Still Can't Plan. NeurIPS 2022

    Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati. [Paper] [Code], 2022.6

  8. Solving Quantitative Reasoning Problems with Language Models. NeurIPS 2022

    Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra. [Paper] [Blog], 2022.6

  9. Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning. ICLR 2023

    Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan. [Project] [Paper] [Code], 2022.9

  10. Ask Me Anything: A simple strategy for prompting language models. ICLR 2023

    Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré. [Paper] [Code], 2022.10

  11. Language Models are Multilingual Chain-of-Thought Reasoners. ICLR 2023

    Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei. [Paper], 2022.10

  12. Automatic Chain of Thought Prompting in Large Language Models. ICLR 2023

    Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola. [Paper] [Code], 2022.10

  13. Mind's Eye: Grounded language model reasoning through simulation. ICLR 2023

    Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai. [Paper], 2022.10

  14. Language Models of Code are Few-Shot Commonsense Learners. EMNLP 2022

    Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig. [Paper] [Code], 2022.10

  15. Large Language Models Can Self-Improve. Preprint

    Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han. [Paper], 2022.10

  16. Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. EMNLP 2022

    Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang. [Paper] [Code], 2022.10

  17. Solving Math Word Problems via Cooperative Reasoning induced Language Models. ACL 2023

    Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, Yujiu Yang. [Paper] [Code], 2022.10

  18. PAL: Program-aided Language Models. ICML 2023

    Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig. [Project] [Paper] [Code], 2022.11

  19. Unsupervised Explanation Generation via Correct Instantiations. AAAI 2023

    Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong. [Paper], 2022.11

  20. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. TMLR 2023

    Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen. [Paper] [Code], 2022.11

  21. Complementary Explanations for Effective In-Context Learning. ACL 2023 Findings

    Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru. [Paper], 2022.11

  22. Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model. EMNLP 2023 Findings

    Parishad BehnamGhader, Santiago Miret, Siva Reddy. [Paper] [Code], 2022.12

  23. Large Language Models are Better Reasoners with Self-Verification. EMNLP 2023 Findings

    Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao. [Paper] [Code], 2022.12

  24. Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions. ACL 2023

    Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal. [Paper] [Code], 2022.12

  25. Language Models as Inductive Reasoners. Preprint

    Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei. [Paper], 2022.12

  26. LAMBADA: Backward Chaining for Automated Reasoning in Natural Language. ACL 2023

    Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran. [Paper], 2022.12

  27. Rethinking with Retrieval: Faithful Large Language Model Inference. Preprint

    Hangfeng He, Hongming Zhang, Dan Roth. [Paper], 2023.1

  28. Faithful Chain-of-Thought Reasoning. IJCNLP-AACL 2023

    Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch. [Paper], 2023.1

  29. Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models. ICML 2023

    Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen. [Paper], 2023.2

  30. Active Prompting with Chain-of-Thought for Large Language Models. Preprint

    Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang. [Paper] [Code], 2023.2

  31. Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data. EMNLP 2023 Findings

    KaShun Shum, Shizhe Diao, Tong Zhang. [Paper] [Code], 2023.2

  32. ART: Automatic multi-step reasoning and tool-use for large language models. Preprint

    Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro. [Paper], 2023.3

  33. REFINER: Reasoning Feedback on Intermediate Representations. Preprint

    Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings. [Project] [Paper] [Code], 2023.4

  34. SatLM: Satisfiability-Aided Language Models Using Declarative Prompting. NeurIPS 2023

    Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett. [Paper] [Code], 2023.5

  35. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. NeurIPS 2023

    Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. [Paper] [Code], 2023.5

  36. Reasoning Implicit Sentiment with Chain-of-Thought Prompting. ACL 2023

    Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua. [Paper] [Code], 2023.5

  37. Reasoning with Language Model is Planning with World Model. EMNLP 2023

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu. [Paper], 2023.5

  38. Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models. ACL 2023 Findings

    Soochan Lee, Gunhee Kim. [Paper] [Code] [Poster], 2023.6

  39. Question Decomposition Improves the Faithfulness of Model-Generated Reasoning. Preprint

    Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez. [Paper] [Code], 2023.7

  40. Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding. ENLSP NeurIPS Workshop 2023

    Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, Yu Wang. [Paper], 2023.7

  41. Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models. Preprint

    Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen. [Paper], 2023.8

  42. Chain-of-Verification Reduces Hallucination in Large Language Models. Preprint

    Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, Jason Weston. [Paper], 2023.9

  43. Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic. COLING 2024

    Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, Stefan Wermter. [Paper] [Code], 2023.9

  44. Enable Language Models to Implicitly Learn Self-Improvement From Data. Preprint

    Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji. [Paper], 2023.10

  45. Improving Large Language Model Fine-tuning for Solving Math Problems. Preprint

    Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, Peter J. Liu. [Paper], 2023.10

  46. Teaching Language Models to Self-Improve through Interactive Demonstrations. Preprint

    Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, Zhou Yu. [Paper], 2023.10

  47. Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning. EMNLP 2023 Findings

    Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang. [Paper] [Code], 2023.10

  48. Boosting LLM Reasoning: Push the Limits of Few-shot Learning with Reinforced In-Context Pruning. Preprint

    Xijie Huang, Li Lyna Zhang, Kwang-Ting Cheng, Mao Yang. [Paper], 2023.12

  49. Efficient Tool Use with Chain-of-Abstraction Reasoning. Preprint

    Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang. [Paper], 2024.1

  50. Self-playing Adversarial Language Game Enhances LLM Reasoning. Preprint

    Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du. [Paper], 2024.4

↑ Back to Top ↑

Scaling Smaller Language Models to Reason

  1. Scaling Instruction-Finetuned Language Models. Preprint

    Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. [Paper], 2022.10

  2. Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions. ACL 2023 Findings

    Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan. [Paper], 2022.12

  3. Teaching Small Language Models to Reason. ACL 2023 Short Papers

    Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn. [Paper], 2022.12

  4. Large Language Models Are Reasoning Teachers. ACL 2023

    Namgyu Ho, Laura Schmid, Se-Young Yun. [Paper] [Code], 2022.12

  5. Specializing Smaller Language Models towards Multi-Step Reasoning. ICML 2023

    Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot. [Paper], 2023.1

  6. Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step. ACL 2023

    Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi. [Paper] [Code], 2023.6

  7. Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic. ICML 2023

    Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa. [Paper] [Code], 2023.8

  8. Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning. Preprint

    Debjit Paul, Robert West, Antoine Bosselut, Boi Faltings. [Paper] [Code], 2024.2

↑ Back to Top ↑

Benchmark

Benchmarks grouped by reasoning ability:

  • Arithmetic: GSM8K / SVAMP / ASDiv / AQuA / MAWPS / AddSub / MultiArith / SingleEq / SingleOp / Lila
  • Commonsense: CommonsenseQA / StrategyQA / ARC / BoolQ / HotpotQA / OpenBookQA / PIQA
  • Symbolic: CoinFlip / LastLetterConcatenation / ReverseList
  • Logical: ReClor / LogiQA / ProofWriter / FLD / FOLIO
  • Other: ARB / BIG-bench / AGIEval / ALERT / CONDAQA / SCAN / WikiWhy

Note: Although there is no official version of the Symbolic Reasoning benchmarks, you can generate your own here!
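
Since these symbolic tasks are fully procedural, a few lines of code are enough to build an evaluation set of your own. The sketch below only illustrates the task formats (last-letter concatenation and coin flip); the function names and the small name pools are made up for the example and do not reproduce the exact datasets used in the papers above.

```python
import random

# Illustrative generators for two of the symbolic reasoning tasks listed above.
# These mimic the task formats; they are not the official datasets.

NAMES = ["Ada Lovelace", "Alan Turing", "Grace Hopper", "Claude Shannon"]
PEOPLE = ["Alice", "Bob", "Carol", "Dave"]

def make_last_letter_example() -> dict:
    """Concatenate the last letter of each word in a randomly chosen name."""
    words = random.choice(NAMES).split()
    answer = "".join(word[-1] for word in words)
    question = (
        f'Take the last letters of the words in "{" ".join(words)}" '
        "and concatenate them."
    )
    return {"question": question, "answer": answer}

def make_coin_flip_example(num_people: int = 3) -> dict:
    """A coin starts heads up; each person either flips it or leaves it."""
    heads_up = True
    steps = []
    for person in random.sample(PEOPLE, num_people):
        flips = random.choice([True, False])
        heads_up = heads_up != flips  # a flip toggles the state
        steps.append(f"{person} {'flips' if flips else 'does not flip'} the coin.")
    question = "A coin is heads up. " + " ".join(steps) + " Is the coin still heads up?"
    return {"question": question, "answer": "yes" if heads_up else "no"}

if __name__ == "__main__":
    print(make_last_letter_example())
    print(make_coin_flip_example())
```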

↑ Back to Top ↑

🧠 Multimodal Reasoning

Consider how difficult it would be to study from a book that lacks any figures, diagrams, or tables. We enhance our learning ability when we combine different data modalities, such as vision, language, and audio [1]. We present a collection of papers and resources on how to unlock these abilities under multimodal settings.

Technique

End-to-end Models

  1. Flamingo: a Visual Language Model for Few-Shot Learning. NeurIPS 2022

    Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan. [Blog] [Paper], 2022.4

  2. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. ICML 2023

    Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. [Paper] [Code], 2023.1

  3. Language Is Not All You Need: Aligning Perception with Language Models. NeurIPS 2023

    Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei. [Paper], 2023.2

  4. Prismer: A Vision-Language Model with An Ensemble of Experts. TMLR 2024

    Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar. [Project] [Paper] [Code] [Demo], 2023.3

  5. PaLM-E: An Embodied Multimodal Language Model. ICML 2023

    Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence. [Project] [Paper], 2023.3

  6. GPT-4 Technical Report. Technical Report

    OpenAI. [Blog] [Paper], 2023.3

  7. Visual Instruction Tuning. NeurIPS 2023

    Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee. [Project] [Paper] [Code] [Demo], 2023.4

  8. MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models. Preprint

    Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny. [Project] [Paper] [Code], 2023.4

  9. Otter: A Multi-Modal Model with In-Context Instruction Tuning. Technical Report

    Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, Ziwei Liu. [Paper] [Code], 2023.5

  10. VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks. Technical Report

    [Paper] [Code] [Demo], 2023.5

  11. Kosmos-2: Grounding Multimodal Large Language Models to the World. Preprint

    Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei. [Paper], 2023.6

  12. BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs. Preprint

    Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi Kang. [Paper] [Code], 2023.7

  13. Med-Flamingo: a Multimodal Medical Few-shot Learner. Preprint

    Michael Moor, Qian Huang, Shirley Wu, Michihiro Yasunaga, Cyril Zakka, Yash Dalmia, Eduardo Pontes Reis, Pranav Rajpurkar, Jure Leskovec. [Paper] [Code], 2023.7

  14. Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities. Preprint

    Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou. [Paper] [Code], 2023.8

  15. Kosmos-2.5: A Multimodal Literate Model. Preprint

    Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, Shaoxiang Wu, Guoxin Wang, Cha Zhang, Furu Wei. [Paper], 2023.9

  16. Improved Baselines with Visual Instruction Tuning. Preprint

    Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee. [Project] [Paper] [Code], 2023.10

  17. G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model. Preprint

    Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, Lingpeng Kong. [Paper], 2023.12

  18. Gemini: A Family of Highly Capable Multimodal Models. Preprint

    Gemini Team, Google. [Paper], 2023.12

  19. Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models. Preprint

    Yuqing Wang, Yun Zhao. [Paper], 2023.12

  20. SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities. Preprint

    Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas Guibas, Fei Xia. [Project] [Paper], 2024.1

↑ Back to Top ↑

Prompting & In-context Learning

  1. Multimodal Few-Shot Learning with Frozen Language Models. NeurIPS 2021

    [Paper], 2021.6

  2. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. ICLR 2023

    Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Pete Florence. [Project] [Paper] [Code], 2022.4

  3. Multimodal Chain-of-Thought Reasoning in Language Models. Preprint

    Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola. [Paper] [Code], 2023.2

  4. Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models. Preprint

    Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan. [Paper] [Code], 2023.3

  5. MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action. Preprint

    Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang. [Project] [Paper] [Code] [Demo], 2023.3

  6. Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings. Preprint

    Daniel Rose, Vaishnavi Himakunthala, Andy Ouyang, Ryan He, Alex Mei, Yujie Lu, Michael Saxon, Chinmay Sonar, Diba Mirza, William Yang Wang. [Paper] [Code], 2023.5

  7. Link-Context Learning for Multimodal LLMs. Preprint

    Yan Tai, Weichen Fan, Zhao Zhang, Feng Zhu, Rui Zhao, Ziwei Liu. [Paper] [Code], 2023.8

  8. Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding. Preprint

    Zilong Wang, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, Tomas Pfister. [Paper], 2024.1

↑ Back to Top ↑

Compositional & Symbolic Approach

  1. Inferring and Executing Programs for Visual Reasoning. ICCV 2017

    Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, Ross Girshick. [Project] [Paper] [Code], 2017.5

  2. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. NeurIPS 2018

    Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, Joshua B. Tenenbaum. [Project] [Paper] [Code], 2018.10

  3. Visual Programming: Compositional visual reasoning without training. CVPR 2023

    Tanmay Gupta, Aniruddha Kembhavi. [Project] [Paper] [Code], 2022.11

  4. ViperGPT: Visual Inference via Python Execution for Reasoning. ICCV 2023

    Dídac Surís, Sachit Menon, Carl Vondrick. [Project] [Paper] [Code], 2023.3

  5. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace. NeurIPS 2023

    Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang. [Paper] [Code], 2023.3

  6. Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models. NeurIPS 2023

    Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao. [Project] [Paper] [Code], 2023.4

  7. Woodpecker: Hallucination Correction for Multimodal Large Language Models. Preprint

    Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, Enhong Chen. [Paper] [Code], 2023.10

  8. MM-VID: Advancing Video Understanding with GPT-4V(ision). Preprint

    Kevin Lin, Faisal Ahmed, Linjie Li, Chung-Ching Lin, Ehsan Azarnasab, Zhengyuan Yang, Jianfeng Wang, Lin Liang, Zicheng Liu, Yumao Lu, Ce Liu, Lijuan Wang. [Project] [Paper] [Demo], 2023.10

↑ Back to Top ↑

Benchmark

  • SCIENCEQA Multimodal multiple choice questions with diverse science topics and annotations of their answers with corresponding lectures and explanations.
  • ARO Systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order.
  • OK-VQA Visual question answering that requires methods which can draw upon outside knowledge to answer questions.
  • A-OKVQA Knowledge-based visual question answering benchmark.
  • NExT-QA Video question answering (VideoQA) benchmark to advance video understanding from describing to explaining the temporal actions.
  • GQA Compositional questions over real-world images.
  • VQA Questions about images that require an understanding of vision, language and commonsense knowledge.
  • VQAv2 2nd iteration of the Visual Question Answering Dataset (VQA).
  • TAG Questions that require understanding the textual cues in an image.
  • Bongard-HOI Visual reasoning benchmark on compositional learning of human-object interactions (HOIs) from natural images.
  • ARC General artificial intelligence benchmark, targeted at artificially intelligent systems that aim to emulate a human-like form of general fluid intelligence.

↑ Back to Top ↑

Other Useful Resources

  • LLM Reasoners A library for advanced large language model reasoning.
  • Chain-of-Thought Hub Benchmarking LLM reasoning performance with chain-of-thought prompting.
  • ThoughtSource Central and open resource for data and tools related to chain-of-thought reasoning in large language models.
  • CoTEVer Chain of Thought Prompting Annotation Toolkit for Explanation Verification.
  • AgentChain Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks.
  • Cascades Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference, and more.
  • LogiTorch PyTorch-based library for logical reasoning on natural language.
  • Promptify Solve NLP problems with LLMs and easily generate prompts for different NLP tasks for popular generative models like GPT, PaLM, and more.
  • MiniChain Tiny library for large language models.
  • LlamaIndex Provides a central interface to connect your LLMs with external data.
  • EasyInstruct Easy-to-use package for instructing Large Language Models (LLMs) like GPT-3 in research experiments.
  • salesforce/LAVIS One-stop Library for Language-Vision Intelligence.

↑ Back to Top ↑

Other Awesome Lists

↑ Back to Top ↑

Contributing

  • Add a new paper or update an existing paper, thinking about which category the work should belong to.
  • Use the same format as existing entries to describe the work.
  • Add the abstract link of the paper (/abs/ format if it is an arXiv publication).

Don't worry if you do something wrong; it will be fixed for you!

Contributors

Star History

Star History Chart
