zjunlp / prompt4reasoningpapers

[ACL 2023] Reasoning with Language Model Prompting: A Survey

License: MIT License

prompt reasoning awesome-list chain-of-thought paper-list survey nlp datasets language-models natural-language-processing

prompt4reasoningpapers's Issues

Asking for figure images

Dear authors,

Thank you for your excellent effort; it's really helpful for my research.

I was wondering if you have published the figure images anywhere. I would need them for my report and slides.

Best regards,
Thinh Pham

Suggestion of related work

Dear repo authors,

Thanks for building the excellent repo, it is very helpful in tracking the recent advances in the area.

I was wondering if our recent work Prompt-OIRL (accepted at ICLR 2024), which uses inverse RL for prompt optimization in arithmetic reasoning tasks, could be included in the repo.
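For context, here is a minimal sketch of the Prompt-OIRL idea as described above (the `featurize` helper, the candidate prompts, and the toy log are all illustrative assumptions, not the paper's implementation): fit a proxy reward model offline on logged (query, prompt, correct?) records, then select a prompt per query at inference time without extra LLM calls.

```python
# Sketch: offline, query-dependent prompt selection via a learned proxy reward.
from sklearn.linear_model import LogisticRegression

CANDIDATE_PROMPTS = [
    "Let's think step by step.",
    "First list the knowns, then solve.",
]

def featurize(query: str, prompt: str) -> list:
    # Toy hand-crafted features; a real system would use embeddings.
    return [len(query), len(prompt), float(any(c.isdigit() for c in query))]

logged = [  # (query, prompt, 1 if the prompted answer was correct else 0)
    ("What is 17 + 25?", CANDIDATE_PROMPTS[0], 1),
    ("What is 17 + 25?", CANDIDATE_PROMPTS[1], 0),
    ("A train goes 60 km in 1.5 h. What is its speed?", CANDIDATE_PROMPTS[0], 0),
    ("A train goes 60 km in 1.5 h. What is its speed?", CANDIDATE_PROMPTS[1], 1),
]

# Offline: fit the proxy reward model on the logged outcomes.
reward_model = LogisticRegression().fit(
    [featurize(q, p) for q, p, _ in logged],
    [label for _, _, label in logged],
)

def select_prompt(query: str) -> str:
    # Inference: pick the prompt with the highest predicted success probability.
    scores = [reward_model.predict_proba([featurize(query, p)])[0, 1]
              for p in CANDIDATE_PROMPTS]
    return CANDIDATE_PROMPTS[scores.index(max(scores))]

print(select_prompt("What is 8 + 13?"))
```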

Please also find the OpenReview forum link for our paper.

Many thanks,
Hao

Request to add a new survey

Hi, thanks for your contributions to collating reasoning prompting methods!
Recently, we released a survey on natural language reasoning, mainly from another perspective: the reasoning paradigm (end-to-end, forward, and backward).

Here are our survey and repository:
Natural Language Reasoning, A Survey
https://arxiv.org/pdf/2303.14725.pdf
https://github.com/FreedomIntelligence/ReasoningNLP

I believe our surveys and repositories complement each other and can help people better understand reasoning!

Some new papers with logical reasoning

Hi,

Thanks for the great work! We are the team from Strong AI Lab, University of Auckland, New Zealand. Here are three papers about deductive logical reasoning and abductive logical reasoning. Please feel free to consider adding them to a future arXiv version of the paper.

Deductive Logical Reasoning

We construct logical-equivalence data augmentation for contrastive learning to improve language models' logical reasoning performance. We achieved #2 on the ReClor leaderboard (one of the hardest logical reasoning reading comprehension datasets; the data was collected from the LSAT and GMAT), and we also achieved better performance than other baseline models on different logical reasoning reading comprehension tasks and natural language inference tasks. Here are the details of the paper; a toy sketch of the augmentation idea follows the citation.

Our paper (Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu)
"Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text" [Paper link] [Source code] [Model weights] [Leaderboard].

Multi-Step Deductive Logical Reasoning

This paper from our lab was published at IJCLR-NeSy 2022, a new conference that specifically focuses on learning and reasoning; Prof. Zhi-Hua Zhou is one of the co-organizers. The paper focuses on multi-step deductive reasoning and proposes a larger, deeper multi-step deductive reasoning dataset over natural language called PARARULE-Plus, which addresses the reasoning-depth imbalance of the RuleTaker dataset. Our PARARULE-Plus dataset has been collected and merged into LogiTorch.ai and OpenAI/Evals. A toy sketch of what reasoning depth means follows the citation below.

Our paper (Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu) "Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" has been accepted for presentation at the 2nd International Joint Conference on Learning & Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy-22) [Paper link] [Source code and dataset] [Presentation recording].
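To make the reasoning-depth notion concrete, here is a toy forward-chaining sketch (an illustration, not the PARARULE-Plus generator): the depth of a derived fact is one more than the deepest premise used to derive it, and PARARULE-Plus balances examples across such depths.

```python
# Toy forward chaining over natural-language facts and rules; the depth of
# each derived fact counts how many rule applications it required.

facts = {"Anne is kind"}
rules = [
    ({"Anne is kind"}, "Anne is nice"),      # fires at depth 1
    ({"Anne is nice"}, "Anne is smart"),     # fires at depth 2
    ({"Anne is smart"}, "Anne is wealthy"),  # fires at depth 3
]

depth = {fact: 0 for fact in facts}
changed = True
while changed:  # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            depth[conclusion] = max(depth[p] for p in premises) + 1
            changed = True

print(depth)  # 'Anne is wealthy' requires a 3-step chain
```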

Abductive Logical Reasoning

This paper from our lab was published in Findings of ACL 2022. It focuses on abductive logical reasoning and proposes a new abductive logical reasoning dataset over natural language called AbductionRules, which helps transformers explain and generate the reason for a given observation. Our AbductionRules dataset has been collected by LogiTorch.ai. A toy sketch of the abductive setting follows the citation below.

Our paper (Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock) "AbductionRules: Training Transformers to Explain Unexpected Inputs" has been accepted for publication in Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-22) [Paper link] [Source code].
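To make the abductive setting concrete, here is a toy sketch (an illustration, not the AbductionRules data): deduction runs rules forward from facts, whereas abduction runs them backward from an observation to the missing fact that would explain it.

```python
# Toy abduction: find every premise set whose rule concludes the observation.

rules = [
    ({"it rained"}, "the grass is wet"),
    ({"the sprinkler ran"}, "the grass is wet"),
]

def abduce(observation: str) -> list:
    # Return candidate explanations (premise sets) for the observation.
    return [premises for premises, conclusion in rules
            if conclusion == observation]

print(abduce("the grass is wet"))  # [{'it rained'}, {'the sprinkler ran'}]
```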

Shouldn't the CoT method be Multi-Stage?

The CoT paper (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models) should be a Multi-Stage method under Prompt Engineering, shouldn't it? I see it is currently categorized under Single-Stage.
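For readers following this taxonomy question, here is a hedged sketch of the usual distinction, using a hypothetical `llm(prompt)` stub (not a real API): few-shot CoT (Wei et al.) makes a single call whose exemplars contain rationales, whereas Zero-shot CoT (Kojima et al.) chains a reasoning call into a separate answer-extraction call.

```python
def llm(prompt: str) -> str:
    # Stand-in for an actual model API call.
    return "<model completion>"

# Single-stage: exemplar rationales and the new question go into ONE prompt.
single_stage = llm(
    "Q: Roger has 5 balls and buys 2 cans of 3 balls. How many balls?\n"
    "A: 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n"
    "Q: The cafeteria had 23 apples, used 20, and bought 6. How many?\nA:"
)

# Multi-stage: stage 1 elicits the reasoning, stage 2 extracts the answer.
question = "The cafeteria had 23 apples, used 20, and bought 6. How many?"
reasoning = llm(f"Q: {question}\nA: Let's think step by step.")
answer = llm(f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
             "Therefore, the answer (arabic numerals) is")
```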
