
Comments (4)

fate-ubw commented on July 17, 2024

I have met the same problem as you.
Change line 313 in self-rag/retrieval_lm/run_short_form.py. I think the author made a mistake in this code:

    def generate(prompt, evidences, max_new_tokens):
        # Wrapper used by the evaluation loop; model, the reflection-token id
        # maps (rel_tokens, ret_tokens, grd_tokens, ut_tokens), and args all
        # come from the enclosing scope in run_short_form.py.
        return call_model_rerank_w_scores_batch(prompt, evidences=evidences, model=model, max_new_tokens=max_new_tokens,
                                                rel_tokens=rel_tokens, ret_tokens=ret_tokens, grd_tokens=grd_tokens, ut_tokens=ut_tokens,
                                                threshold=args.threshold, use_seqscore=args.use_seqscore,
                                                w_rel=args.w_rel, w_sup=args.w_sup, w_use=args.w_use, mode=args.mode,
                                                closed=args.task in ["fever", "arc_c"])  # fever and arc_c are closed-set tasks
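
For context, a guess at what the original broken call looked like. This is a hypothetical reconstruction (the exact line is not quoted in this thread), based on the maintainer's later note that a beam-search argument was the culprit:

    # Hypothetical reconstruction of the bug (not the actual repo code):
    # the old call reportedly passed a stale beam-search argument, e.g.
    #     beam_width=args.beam_width,
    # which run_short_form.py's argparse setup never defined, so the script
    # crashed before generating. The corrected call quoted above drops it.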


carlosandrea commented on July 17, 2024

@fate-ubw I have done the same thing, but so far I am only able to run short_form in always_retrieve mode; the other modes throw errors.
Did you make it run?
I also have some issues reproducing the paper numbers: while the Self-RAG numbers are in line, I get some strange values for Llama-2 7B:
Very low value for PUB: 0
Very high value for ARC: 0.91


AkariAsai commented on July 17, 2024

Thank you so much for reporting! I was changing the codebase before releasing and it seems I forgot to fix the variable name. I will fix it.
@carlosandrea Would you mind sharing your exact evaluation command? I can help with debugging. I haven't seen that issue on my side, so more info would help me dig into it!


AkariAsai commented on July 17, 2024

I fixed the beam_search argument in the script. Thanks again for reporting the issue!
@carlosandrea Could you create a separate issue for the Llama-2 performance and include the command you used? One possible reason: in some previous issues, people got strange numbers when they used a script written for Self-RAG to run the baselines. Self-RAG embeds retrieved context differently from the other baselines, and some models show incredibly low performance when the context is not given in front of the prompt, as sketched below.
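
A minimal sketch of the two context placements being contrasted here. The exact templates are illustrative assumptions, not code copied from the repo:

    # Minimal sketch of the context-placement difference described above.
    # Both templates are illustrative assumptions, not the repo's actual code.

    def selfrag_style_prompt(question: str, passage: str) -> str:
        # Self-RAG interleaves the evidence *after* the instruction, wrapped in
        # its special <paragraph> tags, so its reflection tokens can condition
        # on the passage mid-generation.
        return (f"### Instruction:\n{question}\n\n### Response:\n"
                f"[Retrieval]<paragraph>{passage}</paragraph>")

    def baseline_style_prompt(question: str, passage: str) -> str:
        # Plain baselines such as Llama-2 7B expect the retrieved context in
        # front of the question; evaluating them with the Self-RAG layout
        # instead can crater their scores.
        return f"{passage}\n\n{question}"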

