[ACL 2023] Reasoning with Language Model Prompting: A Survey

License: MIT License

prompt reasoning awesome-list chain-of-thought paper-list survey nlp datasets language-models natural-language-processing


Reasoning with Language Model Prompting Papers



🌟 Introduction

Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis and negotiation. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons behind the emergence of such reasoning abilities and highlight future research directions.


📜 Papers

Overview

  1. Reasoning with Language Model Prompting: A Survey.

    Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen. [abs], 2022.12

  2. Towards Reasoning in Large Language Models: A Survey.

    Jie Huang, Kevin Chen-Chuan Chang. [abs], 2022.12

  3. A Survey of Deep Learning for Mathematical Reasoning.

    Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, Kai-Wei Chang. [abs], 2022.12

  4. A Survey for In-context Learning.

    Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui. [abs], 2022.12

  5. Knowledge-enhanced Neural Machine Reasoning: A Review.

    Tanmoy Chowdhury, Chen Ling, Xuchao Zhang, Xujiang Zhao, Guangji Bai, Jian Pei, Haifeng Chen, Liang Zhao. [abs], 2023.2

  6. Augmented Language Models: a Survey.

    Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom. [abs], 2023.2

  7. The Life Cycle of Knowledge in Big Language Models: A Survey.

    Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun. [abs], 2023.3

  8. Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning.

    Renze Lou, Kai Zhang, Wenpeng Yin. [abs], 2023.3

  9. Logical Reasoning over Natural Language as Knowledge Representation: A Survey.

    Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, Erik Cambria. [abs], 2023.3

  10. Nature Language Reasoning, A Survey.

    Fei Yu, Hongbo Zhang, Benyou Wang. [abs], 2023.3

  11. A Survey of Large Language Models.

    Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen. [abs], 2023.3

  12. Tool Learning with Foundation Models.

    Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, Maosong Sun. [abs], 2023.4

  13. A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future.

    Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu. [abs], 2023.9

  14. A Survey of Reasoning with Foundation Models: Concepts, Methodologies, and Outlook.

    Jiankai Sun, Chuanyang Zheng, Enze Xie, Zhengying Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu, Mingyu Ding, Hongyang Li, Mengzhe Geng, Yue Wu, Wenhai Wang, Junsong Chen, Zhangyue Yin, Xiaozhe Ren, Jie Fu, Junxian He, Wu Yuan, Qi Liu, Xihui Liu, Yu Li, Hao Dong, Yu Cheng, Ming Zhang, Pheng Ann Heng, Jifeng Dai, Ping Luo, Jingdong Wang, Ji-Rong Wen, Xipeng Qiu, Yike Guo, Hui Xiong, Qun Liu, Zhenguo Li. [abs], 2023.12

Methods

Strategy Enhanced Reasoning

Prompt Engineering
Single-Stage
  1. Prompting Contrastive Explanations for Commonsense Reasoning Tasks.

    Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi. [abs], 2021.6

  2. Template Filling for Controllable Commonsense Reasoning.

    Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy. [abs], 2021.11

  3. Chain of Thought Prompting Elicits Reasoning in Large Language Models.

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou. [abs], 2022.1

  4. Large Language Models are Zero-Shot Reasoners.

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. [abs], 2022.5

  5. Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models.

    Ben Prystawski, Paul Thibodeau, Noah Goodman. [abs], 2022.9

  6. Complexity-based Prompting for Multi-step Reasoning.

    Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot. [abs], 2022.10

  7. Language Models are Multilingual Chain-of-thought Reasoners.

    Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei. [abs], 2022.10

  8. Automatic Chain of Thought Prompting in Large Language Models.

    Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola. [abs], 2022.10

  9. Large Language Models are few(1)-shot Table Reasoners.

    Wenhu Chen. [abs], 2022.10

  10. Teaching Algorithmic Reasoning via In-context Learning.

    Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, Hanie Sedghi. [abs], 2022.11

  11. Active Prompting with Chain-of-Thought for Large Language Models.

    Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang. [abs], 2023.2

  12. Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data.

    KaShun Shum, Shizhe Diao, Tong Zhang. [abs], 2023.2

  13. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.

    Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C Schmidt. [abs], 2023.2

  14. ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design.

    Jules White, Sam Hays, Quchen Fu, Jesse Spencer-Smith, Douglas C Schmidt. [abs], 2023.3

  15. Learning to Reason and Memorize with Self-Notes.

    Jack Lanchantin, Shubham Toshniwal, Jason Weston, Arthur Szlam, Sainbayar Sukhbaatar. [abs], 2023.5

  16. Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models.

    Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim. [abs], 2023.5

  17. Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models.

    Yao Yao, Zuchao Li, Hai Zhao. [abs], 2023.5

  18. Re-Reading Improves Reasoning in Language Models.

    Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou. [abs], 2023.9

  19. Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL.

    Hao Sun, Alihan Huyuk, Mihaela van der Schaar. [abs], 2023.9
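
To make the single-stage pattern above concrete, here is a minimal sketch of zero-shot chain-of-thought prompting in the style of Kojima et al. (entry 4). The call_llm helper is a hypothetical stand-in for whatever model API you use.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # First call: the trigger phrase elicits step-by-step reasoning.
    rationale = call_llm(f"Q: {question}\nA: Let's think step by step.")
    # Second call: extract a clean final answer from the rationale
    # (the two-prompt scheme used by Kojima et al.).
    return call_llm(
        f"Q: {question}\nA: Let's think step by step. {rationale}\n"
        "Therefore, the answer is"
    )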

Multi-Stage
  1. Iteratively Prompt Pre-trained Language Models for Chain of Thought.

    Boshi Wang, Xiang Deng, Huan Sun. [abs], 2022.3

  2. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning.

    Antonia Creswell, Murray Shanahan, Irina Higgins. [abs], 2022.5

  3. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi. [abs], 2022.5

  4. Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations.

    Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi. [abs], 2022.5

  5. Faithful Reasoning Using Large Language Models.

    Antonia Creswell, Murray Shanahan. [abs], 2022.8

  6. Compositional Semantic Parsing with Large Language Models.

    Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou. [abs], 2022.9

  7. Decomposed Prompting: A Modular Approach for Solving Complex Tasks.

    Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal. [abs], 2022.10

  8. Measuring and Narrowing the Compositionality Gap in Language Models.

    Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis. [abs], 2022.10

  9. Successive Prompting for Decomposing Complex Questions.

    Dheeru Dua, Shivanshu Gupta, Sameer Singh, Matt Gardner. [abs], 2022.12

  10. The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning.

    Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, Eric Xing. [abs], 2022.12

  11. LAMBADA: Backward Chaining for Automated Reasoning in Natural Language.

    Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran. [abs], 2022.12

  12. Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes.

    Justin Reppert, Ben Rachbach, Charlie George, Luke Stebbing, Jungwon Byun, Maggie Appleton, Andreas Stuhlmüller. [abs], 2023.1

  13. Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.

    Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang. [abs], 2023.5
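
The multi-stage methods above share a decompose-then-solve structure. Below is a minimal sketch loosely following Least-to-Most Prompting (entry 3); call_llm is again a hypothetical stand-in for a model API, and the line-based parsing of subquestions is an illustrative assumption.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def least_to_most(question: str) -> str:
    # Stage 1: reduce the problem to a sequence of simpler subquestions.
    plan = call_llm(
        f'To solve "{question}", list the subquestions that must be '
        "answered first, one per line."
    )
    subquestions = [line.strip() for line in plan.splitlines() if line.strip()]
    # Stage 2: answer the subquestions in order, appending each answer to
    # the context so later steps can build on earlier ones.
    context = question
    answer = ""
    for sub in subquestions:
        answer = call_llm(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"
    return answer  # the last subquestion resolves the original problem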

Process Optimization
Self-Optimization
  1. Reframing Human-AI Collaboration for Generating Free-Text Explanations.

    Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi. [abs], 2021.12

  2. The Unreliability of Explanations in Few-Shot In-Context Learning.

    Xi Ye, Greg Durrett. [abs], 2022.5

  3. Discriminator-Guided Multi-step Reasoning with Language Models.

    Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang. [abs], 2023.5

  4. RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought.

    Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, Heng Ji. [abs], 2023.5

Ensemble-Optimization
  1. Self-Consistency Improves Chain of Thought Reasoning in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou. [abs], 2022.3

  2. On the Advance of Making Language Models Better Reasoners.

    Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen. [abs], 2022.6

  3. Complexity-based Prompting for Multi-step Reasoning.

    Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot. [abs], 2022.10

  4. Large Language Models are reasoners with Self-Verification.

    Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao. [abs], 2022.12

  5. Answering Questions by Meta-Reasoning over Multiple Chains of Thought.

    Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant. [abs], 2023.4

  6. Tree of Thoughts: Deliberate Problem Solving with Large Language Models.

    Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. [abs], 2023.5

  7. Improving Factuality and Reasoning in Language Models through Multiagent Debate.

    Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch. [abs], 2023.5

  8. AutoMix: Automatically Mixing Language Models.

    Aman Madaan, Pranjal Aggarwal, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Mausam, Manaal Faruqui. [abs], 2023.9
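
At the core of ensemble-optimization is sampling several reasoning paths and aggregating their final answers, as in Self-Consistency (entry 1). A minimal sketch, with hypothetical sample_llm and extract_answer helpers:

from collections import Counter

def sample_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in: replace with a sampling model API call."""
    raise NotImplementedError

def extract_answer(rationale: str) -> str:
    """Hypothetical helper: parse the final answer out of a rationale."""
    raise NotImplementedError

def self_consistency(prompt: str, n_samples: int = 10) -> str:
    # Sample diverse reasoning paths, then marginalize them out by
    # majority vote over the final answers.
    answers = [extract_answer(sample_llm(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]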

Iterative-Optimization
  1. STaR: Bootstrapping Reasoning With Reasoning.

    Eric Zelikman, Yuhuai Wu, Noah D. Goodman. [abs], 2022.3

  2. Large Language Models Can Self-Improve.

    Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han. [abs], 2022.10

  3. Reflexion: An Autonomous Agent with Dynamic Memory and Self-reflection.

    Noah Shinn, Beck Labash, Ashwin Gopinath. [abs], 2023.3

  4. Self-Refine: Iterative Refinement with Self-Feedback.

    Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, Peter Clark. [abs], 2023.3

  5. REFINER: Reasoning Feedback on Intermediate Representations.

    Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings. [abs], 2023.4

  6. Reasoning with Language Model is Planning with World Model.

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu. [abs], 2023.5

  7. Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic.

    Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, Stefan Wermter. [abs] [code], 2024.2
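
Iterative-optimization methods revolve around a generate-feedback-refine loop. Below is a minimal sketch in the spirit of Self-Refine (entry 4); call_llm is a hypothetical stub and the stopping criterion is deliberately naive.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def self_refine(task: str, max_iters: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(max_iters):
        # The model critiques its own output ...
        feedback = call_llm(f"Task: {task}\nAnswer: {draft}\n"
                            "Give concise feedback on any errors.")
        if "no errors" in feedback.lower():  # naive stopping criterion
            break
        # ... and then revises the draft using that feedback.
        draft = call_llm(f"Task: {task}\nAnswer: {draft}\n"
                         f"Feedback: {feedback}\nRevised answer:")
    return draft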

External Engine
Physical Simulator
  1. Mind's Eye: Grounded Language Model Reasoning through Simulation.

    Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai. [abs], 2022.10

Code Interpreter
  1. Language Models of Code are Few-Shot Commonsense Learners.

    Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig. [abs], 2022.10

  2. PAL: Program-aided Language Models.

    Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig. [abs], 2022.11

  3. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.

    Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen. [abs], 2022.11

  4. Faithful Chain-of-Thought Reasoning.

    Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch. [abs], 2023.1

  5. Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning.

    Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li. [abs], 2023.1

  6. Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models.

    Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen. [abs], 2023.2

  7. MathPrompter: Mathematical Reasoning Using Large Language Models.

    Shima Imani, Liang Du, Harsh Shrivastava. [abs], 2023.3

  8. Automatic Model Selection with Large Language Models for Reasoning.

    Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, Qizhe Xie. [abs], 2023.5

  9. Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models.

    Yi Hu, Haotong Yang, Zhouchen Lin, Muhan Zhang. [abs], 2023.5

  10. The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code.

    Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao. [abs], 2023.5

  11. When Do Program-of-Thought Works for Reasoning?

    Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, Huajun Chen. [abs], 2023.12
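
Code-interpreter methods such as PAL (entry 2) and Program of Thoughts (entry 3) have the model write a program and obtain the answer by executing it rather than by generating free-form text. A minimal sketch, assuming a hypothetical call_llm stub; note that exec() on model output is unsafe outside a sandbox and appears here only to illustrate the idea.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def program_aided(question: str):
    # Ask the model for executable code instead of a natural-language answer.
    code = call_llm(
        f"# Q: {question}\n"
        "# Write Python that computes the answer and stores it in `answer`."
    )
    namespace = {}
    exec(code, namespace)  # offload the computation to the interpreter
    return namespace.get("answer")  # deterministic, unlike generated text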

Tool Learning
  1. Toolformer: Language Models Can Teach Themselves to Use Tools.

    Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom. [abs], 2023.2

  2. ART: Automatic multi-step reasoning and tool-use for large language models.

    Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro. [abs], 2023.3

  3. Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models.

    Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao. [abs], 2023.4

  4. CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing.

    Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen. [abs], 2023.5

  5. Making Language Models Better Tool Learners with Execution Feedback.

    Shuofei Qiao, Honghao Gui, Huajun Chen, Ningyu Zhang. [abs], 2023.5

  6. CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation.

    Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji. [abs], 2023.5

  7. ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models.

    Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen. [abs], 2023.5

  8. MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting.

    Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, Sadao Kurohashi. [abs], 2023.5

  9. ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings.

    Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu. [abs], 2023.5
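
Toolformer-style methods interleave tool calls with text generation. The sketch below shows only the runtime half: parsing and executing inline calls such as [Calculator(400/1400)]. The call syntax and the TOOLS registry are illustrative assumptions, not any paper's actual implementation.

import re

TOOLS = {
    # eval() restricted to arithmetic for this demo; still not safe in general.
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_tool_calls(text: str) -> str:
    def dispatch(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        return TOOLS[name](arg) if name in TOOLS else match.group(0)
    # Replace each [Tool(arg)] marker with the tool's output.
    return re.sub(r"\[(\w+)\(([^)]*)\)\]", dispatch, text)

print(run_tool_calls("400 out of 1400 is [Calculator(400/1400)]"))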

Knowledge Enhanced Reasoning

Implicit Knowledge
  1. Generated Knowledge Prompting for Commonsense Reasoning.

    Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi. [abs], 2021.10

  2. Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering.

    Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi. [abs], 2022.10

  3. Explanations from Large Language Models Make Small Reasoners Better.

    Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan. [abs], 2022.10

  4. PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales.

    Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren. [abs], 2022.11

  5. TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering.

    Yueqing Sun, Yu Zhang, Le Qi, Qi Shi. [abs], 2022.11

  6. Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions.

    Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan. [abs], 2022.12

  7. Teaching Small Language Models to Reason.

    Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn. [abs], 2022.12

  8. Large Language Models Are Reasoning Teachers.

    Namgyu Ho, Laura Schmid, Se-Young Yun. [abs], 2022.12

  9. Specializing Smaller Language Models towards Multi-Step Reasoning.

    Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot. [abs], 2023.1

  10. PaD: Program-aided Distillation Specializes Large Models in Reasoning.

    Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, Bowen Zhou. [abs], 2023.5
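
Many implicit-knowledge methods follow the two-step pattern of Generated Knowledge Prompting (entry 1): elicit knowledge statements from the model itself, then condition the answer on them. A minimal sketch with a hypothetical call_llm stub:

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def knowledge_prompting(question: str, n_statements: int = 3) -> str:
    # Step 1: elicit relevant knowledge from the model itself.
    knowledge = call_llm(
        f"Generate {n_statements} short knowledge statements relevant to: "
        f"{question}"
    )
    # Step 2: condition the final answer on the generated knowledge.
    return call_llm(f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:")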

Explicit Knowledge
  1. MemPrompt: Memory-assisted prompt editing to improve GPT-3 after deployment.

    Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang. [abs], 2022.1

  2. LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning.

    Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, Xiaodan Liang. [abs], 2022.5

  3. Selective Annotation Makes Language Models Better Few-Shot Learners.

    Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu. [abs], 2022.9

  4. Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.

    Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan. [abs], 2022.9

  5. Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions.

    Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal. [abs], 2022.12

  6. Rethinking with Retrieval: Faithful Large Language Model Inference.

    Hangfeng He, Hongming Zhang, Dan Roth. [abs], 2023.1

  7. Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework.

    Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing. [abs], 2023.5
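
Explicit-knowledge methods ground the reasoning chain in retrieved text. A minimal sketch of the retrieve-then-reason pattern (in the spirit of entries 5 and 6), with hypothetical retrieve and call_llm helpers:

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def retrieve(query: str, k: int = 3) -> list:
    """Hypothetical retriever: replace with BM25, dense retrieval, etc."""
    raise NotImplementedError

def retrieve_then_reason(question: str) -> str:
    # Ground the chain of thought in retrieved evidence so the
    # intermediate steps draw on external knowledge.
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return call_llm(
        f"Evidence:\n{context}\n\nQ: {question}\nA: Let's think step by step."
    )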

Others

  1. Language Model Cascades.

    David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton. [abs], 2022.7

  2. Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering.

    Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan. [abs], 2022.9

  3. Multimodal Analogical Reasoning over Knowledge Graphs.

    Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen. [abs], 2022.10

  4. Scaling Instruction-Finetuned Language Models.

    Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. [abs], 2022.10

  5. See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning.

    Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, Chuang Gan. [abs], 2023.1

  6. Multimodal Chain-of-Thought Reasoning in Language Models.

    Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola. [abs], 2023.2

  7. Language Is not All You Need: Aligning Perception with Language Models.

    Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei. [abs], 2023.2

  8. Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models.

    Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan. [abs], 2023.3

  9. ViperGPT: Visual Inference via Python Execution for Reasoning.

    Dídac Surís, Sachit Menon, Carl Vondrick. [abs], 2023.3

  10. MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action.

    Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang. [abs], 2023.3

  11. Boosting Theory-of-Mind Performance in Large Language Models via Prompting.

    Shima Rahimi Moghaddam, Christopher J. Honey. [abs], 2023.4

Analysis

  1. Can language models learn from explanations in context?

    Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill. [abs], 2022.4

  2. Emergent Abilities of Large Language Models.

    Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. [abs], 2022.6

  3. Language models show human-like content effects on reasoning.

    Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill. [abs], 2022.7

  4. Rationale-Augmented Ensembles in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou. [abs], 2022.7

  5. Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts.

    Joel Jang, Seonghyeon Ye, Minjoon Seo. [abs], 2022.9

  6. Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango.

    Aman Madaan, Amir Yazdanbakhsh. [abs], 2022.9

  7. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them.

    Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei. [abs], 2022.10

  8. Language Models are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-thought.

    Abulhair Saparov, He He. [abs], 2022.10

  9. Knowledge Unlearning for Mitigating Privacy Risks in Language Models.

    Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo. [abs], 2022.10

  10. Emergent Analogical Reasoning in Large Language Models.

    Taylor Webb, Keith J. Holyoak, Hongjing Lu. [abs], 2022.12

  11. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters.

    Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun. [abs], 2022.12

  12. On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning.

    Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang. [abs], 2022.12

  13. Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model.

    Parishad BehnamGhader, Santiago Miret, Siva Reddy. [abs], 2022.12

  14. Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.

    Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei. [abs], 2022.12

  15. Dissociating language and thought in large language models: a cognitive perspective.

    Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko. [abs], 2023.1

  16. Large Language Models Can Be Easily Distracted by Irrelevant Context.

    Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou. [abs], 2023.2

  17. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity.

    Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung. [abs], 2023.2

  18. ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models.

    Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, Ben He. [abs], 2023.3

  19. Why think step-by-step? Reasoning emerges from the locality of experience.

    Ben Prystawski, Noah D. Goodman. [abs], 2023.4

  20. Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic.

    Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa. [abs], 2023.8


🧰 Resources

Benchmarks and Tasks

Benchmarks grouped by reasoning skill:

  • Arithmetic Reasoning: GSM8K, SVAMP, ASDiv, AQuA-RAT, MAWPS, AddSub, MultiArith, SingleEq, SingleOp
  • Commonsense Reasoning: CommonsenseQA, StrategyQA, ARC, SayCan, BoolQA, HotpotQA, OpenBookQA, PIQA, WikiWhy
  • Symbolic Reasoning: Last Letter Concatenation, Coin Flip, Reverse List
  • Logical Reasoning: ProofWriter, EntailmentBank, RuleTaker, CLUTRR, FLD
  • Multimodal Reasoning: ScienceQA
  • Others: BIG-bench, SCAN, Chain-of-Thought Hub

Tools

  • ThoughtSource: A central, open resource for data and tools related to chain-of-thought reasoning in LLMs.
  • LangChain: A library designed to help developers build applications using LLMs combined with other sources of computation or knowledge.
  • LogiTorch: A PyTorch-based library for logical reasoning on natural language.
  • λprompt: A library for building full LLM-based prompt machines, including ones that self-edit to correct themselves and even self-write their own execution code.
  • Promptify: A prompt-engineering library that makes it easy to generate prompts for common NLP tasks with popular generative models such as GPT and PaLM.
  • MiniChain: A tiny library for coding with large language models that aims to implement the core prompt chaining functionality.
  • LlamaIndex: A project that provides a central interface to connect your LLMs with external data.
  • EasyInstruct: A package for instructing Large Language Models (LLMs) like GPT-3 in your research experiments. It is designed to be easy to use and easy to extend.
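
All of these libraries revolve around the same core pattern of prompt chaining, where the output of one model call feeds the next. A library-free sketch, with call_llm as a hypothetical stand-in for any provider's API:

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real model API call."""
    raise NotImplementedError

def chain(question: str) -> str:
    # Step 1: brainstorm facts relevant to the question.
    facts = call_llm(f"List facts relevant to answering: {question}")
    # Step 2: answer conditioned on the first step's output.
    return call_llm(f"Facts:\n{facts}\n\nUsing these facts, answer: {question}")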

🎉 Contributing

  • Add a new paper or update an existing paper, thinking about which category the work should belong to.
  • Use the same format as existing entries to describe the work.
  • Add the abstract link of the paper (/abs/ format if it is an arXiv publication).
  • A very brief explanation of why you think a paper should be added or updated is recommended.

Don't worry about getting something wrong; mistakes will be fixed for you. Just contribute and promote your awesome work here!


🚩Citation

If you find this survey useful for your research, please consider citing

@inproceedings{qiao-etal-2023-reasoning,
    title = "Reasoning with Language Model Prompting: A Survey",
    author = "Qiao, Shuofei  and
      Ou, Yixin  and
      Zhang, Ningyu  and
      Chen, Xiang  and
      Yao, Yunzhi  and
      Deng, Shumin  and
      Tan, Chuanqi  and
      Huang, Fei  and
      Chen, Huajun",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.294",
    pages = "5368--5393",
    abstract = "Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for emerging such reasoning abilities and highlight future research directions. Resources are available at https://github.com/zjunlp/Prompt4ReasoningPapers (updated periodically).",
}


prompt4reasoningpapers's Issues

Request to add a new survey

Hi, thanks for your contributions in collating reasoning prompting methods!
Recently, we released a survey on natural language reasoning, mainly from another perspective: the reasoning paradigm (end-to-end, forward, and backward).

Here are our survey and repository:
Nature Language Reasoning, A Survey
https://arxiv.org/pdf/2303.14725.pdf
https://github.com/FreedomIntelligence/ReasoningNLP

I believe our surveys and repositories can complement each other in helping people better understand reasoning!

Some new papers with logical reasoning

Hi,

Thanks for the great work! We are the team from Strong AI Lab, University of Auckland, New Zealand. Here are three papers about deductive logical reasoning and abductive logical reasoning. Please feel free to consider adding them to a future arXiv version of the paper.

Deductive Logical Reasoning

We construct logical-equivalence data augmentation for contrastive learning to improve language models' logical reasoning performance. We achieved #2 on the ReClor leaderboard (one of the hardest logical reasoning reading comprehension datasets, collected from LSAT and GMAT questions), and we also outperformed other baseline models on several logical reasoning reading comprehension and natural language inference tasks. Here are the details of the paper.

Our paper (Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu)
"Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text" [Paper link] [Source code] [Model weights] [Leaderboard].

Multi-Step Deductive Logical Reasoning

This paper from our lab has been published at IJCLR-NeSy 2022, a new conference that focuses specifically on learning and reasoning, with Prof. Zhi-Hua Zhou as one of the co-organizers. The paper focuses on multi-step deductive reasoning and proposes PARARULE-Plus, a larger and deeper multi-step deductive reasoning dataset over natural language that addresses the reasoning-depth imbalance of the RuleTaker dataset. Our PARARULE-Plus dataset has been collected and merged by LogiTorch.ai and OpenAI/Evals.

Our paper (Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu) "Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" has been accepted for presentation at the 2nd International Joint Conference on Learning & Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy-22) [Paper link] [Source code and dataset] [Presentation recording].

Abductive Logical Reasoning

This paper from our lab has been published in the Findings of ACL 2022. The paper focuses on abductive logical reasoning and proposes AbductionRules, a new abductive logical reasoning dataset over natural language designed to help transformers explain and generate the reason for a given observation. Our AbductionRules dataset has been collected by LogiTorch.ai.

Our paper (Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock) "AbductionRules: Training Transformers to Explain Unexpected Inputs" has been accepted for publication in the Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-22) [Paper link] [Source code].

Suggestion of related work

Dear repo authors,

Thanks for building the excellent repo, it is very helpful in tracking the recent advances in the area.

I was wondering if our recent work Prompt-OIRL (accepted at ICLR 2024), which uses offline inverse RL for prompt optimization in arithmetic reasoning tasks, can be included in the repo.

Please also find the OpenReview forum link for our paper.

Many thanks,
Hao

Shouldn't the CoT method be Multi-Stage?

The CoT paper (Chain of thought prompting elicits reasoning in large language models) should belong to the Multi-Stage category under Prompt Engineering, but I see it is currently classified under Single-Stage.
