paie's People

Contributors

mayubo2333, zehao-wang

paie's Issues

cuda device

hello,
how can I change which GPU is used?
It doesn't work when I add `os.environ['CUDA_VISIBLE_DEVICES'] = "2"`, and I couldn't find where in the code the GPU is set.
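A likely cause (a general CUDA/PyTorch behavior, not specific to PAIE): `CUDA_VISIBLE_DEVICES` is only honored if it is set before the CUDA runtime initializes, i.e. before the first `import torch` in the entry script, or by exporting it in the shell before launching. A minimal sketch:

```python
import os

# Set the variable at the very top of the entry script, BEFORE importing
# torch; once the CUDA runtime has initialized, changing it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # expose only physical GPU 2

# After this, the single visible GPU is addressed as "cuda:0":
#   import torch
#   device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently, `CUDA_VISIBLE_DEVICES=2 python run.py ...` on the command line avoids touching the code at all.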

How to learn pseudo tokens in soft prompts?

It may be a silly question, but it has been on my mind. As mentioned in the paper, the pseudo tokens in soft prompts are learnable. However, I found that the training process of PAIE with soft prompts is almost identical to PAIE with manual prompts: they share the same model architecture, so during backpropagation the loss comes from the same model. How, then, are the pseudo tokens learned? In the paper you mention following two previous methods, but how are they implemented in the PAIE code?
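For context, the usual mechanism behind learnable pseudo tokens requires no architectural change at all, which may be why the two training loops look the same. A minimal sketch (an illustration of the general technique, not PAIE's actual code; all names and sizes are made up):

```python
import torch
import torch.nn as nn

# Reserve extra rows in the embedding table for the pseudo tokens and let
# the optimizer update them like any other parameter.
VOCAB_SIZE, N_PSEUDO, DIM = 100, 5, 16
embed = nn.Embedding(VOCAB_SIZE + N_PSEUDO, DIM)  # last N_PSEUDO rows = pseudo tokens

# A prompt mixing real token ids with pseudo-token ids (>= VOCAB_SIZE).
prompt_ids = torch.tensor([[3, 7, VOCAB_SIZE, VOCAB_SIZE + 1, 12]])

loss = embed(prompt_ids).sum()  # stand-in for the real training loss
loss.backward()

# Gradients flow into the pseudo-token rows, so the optimizer updates them
# exactly like ordinary word embeddings.
print(bool(embed.weight.grad[VOCAB_SIZE].abs().sum() > 0))
```

Because the pseudo tokens are just extra embedding rows, the "learning" happens automatically through the same loss and backward pass used for manual prompts.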

prompt full data

Why does the prompt template contain "prompt start" and "end" markers, which are not shown in the paper? Do they yield better results?

Question about sentence segmentation

Hello, I don't understand why W is divided by 2 in line 208 of the code below. I found that this causes the position of event_trigger to become negative, so I think W should not be divided by 2 in line 208. I hope to get your answer. Thanks!

if sent_length > W+1:
    if event_trigger['end'] <= W//2:  # trigger word is located at the front of the sents
        cut_text = full_text[:(W+1)]
    else:  # trigger word is located at the latter of the sents
        offset = sent_length - (W+1)
        min_s += offset
        max_e += offset
        event_trigger['start'] -= offset
        event_trigger['end'] -= offset
        event_trigger['offset'] = offset
        cut_text = full_text[-(W+1):]
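The reported behavior can be reproduced in isolation. The sketch below (with made-up values for `W`, `sent_length`, and the trigger; variable names follow the quoted snippet) shows that when the trigger ends after `W//2` but still lies inside the front window, the `else` branch fires and the shifted trigger start goes negative:

```python
W = 10
sent_length = 30
full_text = list(range(sent_length))    # stand-in token sequence
event_trigger = {'start': 6, 'end': 7}  # end > W//2, yet the trigger is near the front

if sent_length > W + 1:
    if event_trigger['end'] <= W // 2:  # front-of-sentence branch: 7 <= 5 is False
        cut_text = full_text[:W + 1]
    else:                               # tail branch fires instead
        offset = sent_length - (W + 1)  # 30 - 11 = 19
        event_trigger['start'] -= offset
        event_trigger['end'] -= offset
        cut_text = full_text[-(W + 1):]

print(event_trigger['start'])  # -13: the negative position the issue reports
```

With the comparison against `W` instead of `W//2`, this example would take the front branch and keep the trigger position non-negative, which matches the questioner's suggestion.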

ACE05 dataset

The ACE05 dataset does not seem to be freely downloadable from the LDC website. Could you please share this dataset? I hope to hear from you soon!

Question about evaluation on RAMS dataset

Hello, nice work, and thanks for sharing the code!

I have a question about the RAMS dataset. It looks like some arguments get dropped because of the window size. Were the dropped arguments included in the final test set? Thank you very much!

About result comparison

Hello author,

Thanks for sharing the code. I have one question about the result comparison with the baseline BART-Gen. Some results in Table 2 of your paper do not match those reported in the BART-Gen paper. For example, the Arg-I / Arg-C / Head-C numbers for WIKIEVENTS are 66.8 / 62.4 / 65.4, but I can't find exactly these numbers in either Table 5 or Table 6 of the BART-Gen paper. Did you reproduce these results yourself? If so, they seem very different from the numbers in the original paper.
No offense, I am just confused about this. Did I miss some experimental detail?

How to add a new dataset

Thanks for your code! I have some questions about how to add a new dataset. I think there are 3 steps:

  1. Place the data in the ./data folder.
  2. Add data-reading code to the DSET_processor class in the processor_base.py file, and modify some details such as the arg parser and build_processor.
  3. Add the description file to the ./data/dset_meta folder and the prompt file to the ./data/prompts folder.

Then I can run the code to train/infer on my own dataset. Did I misunderstand anything? Thank you!
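Step 2 above might look roughly like the following hypothetical reader (a sketch only: the field names "sentence", "trigger", and "arguments" are assumptions about a custom data format, not PAIE's actual schema):

```python
import json

def read_example(line):
    """Parse one JSONL line of a hypothetical custom dataset into the
    fields an event-argument-extraction processor typically needs."""
    record = json.loads(line)
    return {
        "text": record["sentence"],
        "trigger": record["trigger"],      # e.g. {"start": 1, "end": 2, "type": "Attack"}
        "arguments": record["arguments"],  # e.g. [{"role": "Attacker", "start": 0, "end": 1}]
    }

sample = ('{"sentence": "Bob attacked the base.", '
          '"trigger": {"start": 1, "end": 2, "type": "Attack"}, '
          '"arguments": [{"role": "Attacker", "start": 0, "end": 1}]}')
print(read_example(sample)["trigger"]["type"])  # Attack
```

The actual fields required depend on how DSET_processor consumes its examples, so the repository's existing readers for RAMS/WIKIEVENTS are the authoritative template to follow.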

About the Head-C metric

Hello, for this metric, is it enough that the argument's head span is correct, or must the role also be identified correctly?
I ask because the BART-Gen code seems to require head word span + role name.
