Comments (6)

weilinie commented on August 17, 2024

If I understand correctly, you are referring to self-BLEU. I actually opened issue #27 on Texygen about the self-BLEU metric.


bojone commented on August 17, 2024

No, it is not self-BLEU.

The BLEU in your work is something like:

# each generated sentence s is scored against the whole test set
np.mean([
    bleu(references=the_whole_test_data, hypothesis=s)
    for s in the_whole_generated_data
])

It can serve as a metric of how realistic the generated samples are.

My idea is to calculate:

# each test sentence s is scored against the whole generated set
np.mean([
    bleu(references=the_whole_generated_data, hypothesis=s)
    for s in the_whole_test_data
])

as a metric of the diversity of the generated samples, where a high score means that all of the_whole_test_data can be found in the_whole_generated_data.
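
For concreteness, here is a minimal runnable sketch of both directions, using NLTK's sentence_bleu in place of the bleu() above; the helper name corpus_avg_bleu, the smoothing choice, and the toy sentences are illustrative assumptions, not code from RelGAN or Texygen:

import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def corpus_avg_bleu(references, hypotheses, n=2):
    # Average sentence-level BLEU-n of each hypothesis against all references.
    weights = tuple(1.0 / n for _ in range(n))
    return np.mean([
        sentence_bleu(references, hyp, weights=weights, smoothing_function=smooth)
        for hyp in hypotheses
    ])

test_data = [["a", "cat", "sits", "on", "the", "mat"],
             ["a", "dog", "runs", "in", "the", "park"]]
generated_data = [["a", "cat", "sits", "on", "the", "mat"],
                  ["a", "cat", "sits", "on", "the", "mat"]]

# Direction 1 (the first snippet): generated samples vs. the test set (realism).
print(corpus_avg_bleu(test_data, generated_data))
# Direction 2 (the proposal here): test samples vs. the generated set (diversity/coverage).
print(corpus_avg_bleu(generated_data, test_data))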


weilinie commented on August 17, 2024

Thanks for the explanation; now I see your point. I guess what you have proposed is basically the same as BLEU, since the function bleu() in our case actually calculates the mean of all the BLEU scores between each reference and hypothesis, and you just swap the order of the two for loops.


bojone commented on August 17, 2024

Approximately, the original metric checks whether the_whole_generated_data is a subset of the_whole_test_data, and my idea is to check whether the_whole_test_data is a subset of the_whole_generated_data.

If both scores are high, it means the_whole_generated_data ⊆ the_whole_test_data and the_whole_test_data ⊆ the_whole_generated_data, which indicates the_whole_test_data = the_whole_generated_data.


LanghingCryingFace commented on August 17, 2024

I have computed Self-BLEU while making sure that the test data and the reference data are the same. I guess issue #27 on Texygen does not happen for me, because I do not reuse the saved "references" in the SelfBleu class.

For COCO, I saved 1,000 sentences and computed Self-BLEU-2 at each epoch. After pretraining, Self-BLEU-2 was around 0.76. After adversarial training for about 10 epochs (3130 iterations), Self-BLEU-2 rose to about 0.85.
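
For what it's worth, here is a minimal leave-one-out sketch of Self-BLEU-2 over a list of tokenized sentences, assuming NLTK; rebuilding the reference list for every hypothesis avoids reusing saved references (the concern in Texygen issue #27). The function name self_bleu_2 is illustrative, and this is not the code that produced the numbers above:

import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu_2(sentences):
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(sentences):
        # References are all other sentences; a sentence is never scored against itself.
        refs = sentences[:i] + sentences[i + 1:]
        scores.append(sentence_bleu(refs, hyp, weights=(0.5, 0.5),
                                    smoothing_function=smooth))
    return float(np.mean(scores))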


weilinie commented on August 17, 2024

Hmm, this is interesting. Could you please share your code to calculate the self-BLEU score? Thanks!

