controlprefixes's Introduction

Hi there 👋

  • 🧑‍💻 I'm currently a Lead Deep Learning Engineer at Chattermill, previously a Research Engineer at Ontocord.ai

  • 🌍 I also carry out machine learning research for LAION (Stability AI) on the Ezra-1 UltraCluster and the LUMI and JUWELS supercomputers; I previously worked on BigScience and the BLOOM evaluation

  • 🎓 I did my Master's in Machine Learning & A.I. at Imperial College London, working on natural language generation

  • 📝 I’m an active contributor to machine learning libraries such as Hugging Face Transformers and Gem-benchmark

  • 💬 I sometimes give talks for the NLP study group, the most popular NLP community on meetup.com

  • 🔭 I’m currently working on mixture-of-experts models and the open-source chat assistant OpenAssistant

  • 📫 How to reach me: [email protected] or message me on LinkedIn

controlprefixes's People

Contributors

jordanclive50, jordiclive

controlprefixes's Issues

Inconsistent UnicodeEncodeError for each Config

Hi,
I am trying to run this project as described in the README.
I completed the installation and tried to run a config, but every config I have tried stops with a UnicodeEncodeError.
Each traceback is slightly different; e2e_clean is the only config that makes it to training, but it also crashes with a UnicodeEncodeError after epoch 0.

Here are a couple of example tracebacks.
For webnlg17:
Traceback (most recent call last):
  File "finetune.py", line 932, in <module>
    model = main(args)
  File "finetune.py", line 902, in main
    logger=logger,
  File "/workspace/ControlPrefixes-main/src/datatotext/lightning_base.py", line 634, in generic_train
    trainer.fit(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
    self.accelerator.start_training(self)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
    self._results = trainer.run_train()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 607, in run_train
    self.run_sanity_check(self.lightning_module)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_sanity_check
    _, eval_results = self.run_evaluation(max_batches=self.num_sanity_val_batches)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 742, in run_evaluation
    deprecated_eval_results = self.evaluation_loop.evaluation_epoch_end()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 189, in evaluation_epoch_end
    deprecated_results = self.__run_eval_epoch_end(self.num_dataloaders)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 227, in __run_eval_epoch_end
    eval_results = model.validation_epoch_end(eval_results)
  File "finetune.py", line 345, in validation_epoch_end
    convert_text(s) + "\n" for s in output_batch["target"]
UnicodeEncodeError: 'ascii' codec can't encode character '\xe1' in position 9: ordinal not in range(128)

For DART:

Traceback (most recent call last):
  File "finetune.py", line 932, in <module>
    model = main(args)
  File "finetune.py", line 902, in main
    logger=logger,
  File "/workspace/ControlPrefixes-main/src/datatotext/lightning_base.py", line 634, in generic_train
    trainer.fit(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
    self.call_setup_hook(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1066, in call_setup_hook
    model.setup(stage_name)
  File "/workspace/ControlPrefixes-main/src/datatotext/lightning_base.py", line 286, in setup
    "train", self.hparams.train_batch_size, shuffle=True
  File "finetune.py", line 610, in get_dataloader
    dataset = self.get_dataset(type_path)
  File "finetune.py", line 603, in get_dataset
    **self.dataset_kwargs,
  File "/workspace/ControlPrefixes-main/src/datatotext/utils.py", line 610, in __init__
    self.src_lens = self.get_char_lens(self.src_file)
  File "/workspace/ControlPrefixes-main/src/datatotext/utils.py", line 633, in get_char_lens
    return [len(x) for x in Path(data_file).open().readlines()]
  File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 6422: ordinal not in range(128)

Every config fails like this at some point; let me know if you need more information.
I tried moving the data around, unzipping it differently, and rolling pytorch-lightning back and forward between versions, but nothing works. Is there some undocumented data-processing step that needs to be done before training?
Thanks,
CH
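
A plausible workaround (my own guess, not an answer from the maintainers): both tracebacks show Python falling back to the ASCII codec, which points at an unset locale inside the container rather than at the data itself. Exporting a UTF-8 locale before training (export LANG=C.UTF-8) may be enough on its own; failing that, here is a minimal sketch of opening the data files with an explicit encoding, modelled on the get_char_lens shown in the DART traceback:

from pathlib import Path

def get_char_lens(data_file):
    # encoding="utf-8" overrides the locale-dependent default codec, which
    # the tracebacks show resolving to ASCII inside the container.
    return [len(x) for x in Path(data_file).open(encoding="utf-8").readlines()]

The same encoding argument would also be needed wherever finetune.py writes validation outputs (the convert_text line in validation_epoch_end), since the webnlg17 traceback fails while encoding output rather than decoding input.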

Approach for "unseen" data

Thank you for sharing the implementation details of the ControlPrefixes model. Could you also provide the code for the approach to unseen categories discussed in Section 6.2?

Missing Python files

I think you forgot to upload some Python files (e.g. utils_conditional3.py, utils_graph2text.py) to the webnlg folder. I could not run the code without these modules.

Data Pre-processing and Conditional Information

Hello @jordiclive.

First, thank you for your excellent work and for making it available for everyone to use.
I have some questions about the data pre-processing for the data-to-text task, in particular about how the conditional information is generated.

How are the *.source_cat.npy files generated? If I wanted to train a model with a different set of categories, how should I go about generating my own *.source_cat.npy files?

Thank you once more for making your work available and for any help you can provide.
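
While waiting for an answer, a guess at the format (purely an assumption based on the file name, not confirmed by the authors): a NumPy array of integer category IDs, one per line of the corresponding *.source file. A minimal sketch of producing such a file under that assumption:

import numpy as np

# Hypothetical example: one category label per source line. The label set,
# the label-to-ID mapping, and the one-ID-per-line layout are all guesses.
categories = ["Airport", "Astronaut", "Building", "Airport"]
cat2id = {c: i for i, c in enumerate(sorted(set(categories)))}
ids = np.array([cat2id[c] for c in categories], dtype=np.int64)
np.save("train.source_cat.npy", ids)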

Question: control prefixes without task prefix

Hi, I see control prefixes being used along with task prefixes in the code. Can I run control prefixes without the task prefix, and would that affect any part of the code? I am also wondering about the reasoning behind using both a hard-prompt task prefix ("translate graph to english", as mentioned in the paper) and a soft-prompt task prefix.

What is the data format?

Hello, when running the summarization task (XSum), there are two data files, .sports and .cats. What is their data format? Thanks.

Baseline comparison code for Prefix-Tuning?

Hi @jordiclive, wonderful work and repo! I'm a researcher interested in building on this repo, but I was wondering: is there a way to run standard prefix-tuning in your experiments to compare with ControlPrefixes?

Thank you, much appreciated!

Invalid syntax in finetune.py

Hi,
I am trying to run
python3 read_yaml.py configs/DART.yaml

This gives:

File "finetune.py", line 84
self.dataset_kwargs: dict = dict(
____________________^
SyntaxError: invalid syntax

I am on Python 3.6.9
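
For what it is worth (my own observation, not a confirmed diagnosis): PEP 526 annotated assignments like the one on line 84 are valid from Python 3.6 onward, so Python 3.6.9 should parse this line; a SyntaxError here usually means an older interpreter is actually being invoked. A minimal check and a pre-3.6 fallback, sketched under that assumption (the kwargs are placeholders for illustration):

import sys
print(sys.version)  # confirm which interpreter actually runs the script

class Example:
    def __init__(self):
        # PEP 526 annotated assignment (requires Python >= 3.6):
        self.dataset_kwargs: dict = dict(max_source_length=512)
        # Pre-3.6 fallback using a type comment instead of an annotation:
        # self.dataset_kwargs = dict(max_source_length=512)  # type: dict

Example()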

I cannot reproduce the finetuning procedure by following the README

Hello, I am trying to train on my own data in the ControlPrefixes manner.
While testing, I always encounter an error like the one below.

Validation sanity check: 0it [00:00, ?it/s]
Validation sanity check:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "cptest/src/datatotext/finetune.py", line 932, in <module>
    model = main(args)
  ...
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

What can I do to solve this problem? I have not modified anything in the src/ directory.
Thanks for looking into this!
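
A general CUDA-debugging suggestion (standard PyTorch advice, not a repo-specific fix): cuBLAS errors are raised asynchronously, so the call shown in the traceback is often not the real culprit. Re-running with CUDA_LAUNCH_BLOCKING=1 python finetune.py <args> makes the traceback point at the failing op, and reproducing a single batch on CPU turns silent GPU failures (typically shape mismatches or out-of-range embedding indices) into explicit Python errors, as in this sketch:

import torch

# Hypothetical toy batch; real shapes and vocab size come from the dataloader.
input_ids = torch.randint(0, 100, (2, 16))
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=32)
print(emb(input_ids).shape)  # on CPU, an index >= 100 raises a clear IndexError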
