
xlang-ai / instructor-embedding


[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings

License: Apache License 2.0

Python 100.00%
embeddings information-retrieval language-model text-classification text-clustering text-embedding text-evaluation text-semantic-similarity prompt-retrieval text-reranking

instructor-embedding's People

Contributors

ashokrajab, englhardt, harry-hash, hongjin-su, michael-quinlan, raravindds, silasmarvin, taoyds, tomaarsen, tuanacelik, venkatesh-pro, yuxi-liu-wired


instructor-embedding's Issues

maximum input length issue

I can see that the maximum input length is set to 512. How can I change that? Is a sequence length of more than 512 supported, and what is the maximum sequence length supported?
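
For reference, here is a minimal sketch of how the limit is usually adjusted through the sentence-transformers API. This assumes the hard-coded 512 in the package is patched out (see the "hard coding problem of max sequence length" issue below), and note the model was trained with 512-token inputs, so quality beyond that is not guaranteed:

from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
print(model.max_seq_length)   # 512 by default; longer inputs are truncated
model.max_seq_length = 1024   # only takes effect if the hard-coded limit is removed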

Multi-gpu evaluation

Hi there,

Is there a way to evaluate models with multiple GPUs? Running over large datasets on a single GPU is prohibitively slow.

Thanks,
Rui

Model fine-tuning for a classification task?

Hello! We have an automatic classification task for engineering construction safety cases, and we want to fine-tune your model on our dataset. However, we could not find instructions for this on your web page. Could you please tell me how to do that?

License?

Hi, there is no license in this repo, and I was wondering if you would consider adding an explicit license? (Hopefully apache or other friendly license :)) Thanks, and thanks for the awesome work!

Fine tuning: Some weights of the model checkpoint were not used when initializing T5EncoderModel

Thanks for open sourcing this model! I have attempted to fine tune instructor-large and instructor-xl.

I now get an error when I try to load the outputted model: "Some weights of the model checkpoint at ../up-l/ were not used when initializing T5EncoderModel". Similarly, the outputted model is missing key files such as modules.json, which I attempted to copy from your existing model since it should be the same architecture.

I modified medi-data.json to have my training data.

docs: instructions syntax

Here is what I have noted for myself.

The syntax for writing instructions is "Represent the domain text_type for task_objective: ", where:

  • domain is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
  • text_type is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
  • task_objective is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
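
To make the template concrete, here are a few illustrative instructions drawn from the examples used in this repo (the exact wording is free-form, not taken from a fixed list):

from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')

query_instruction = "Represent the Wikipedia question for retrieving supporting documents: "
doc_instruction = "Represent the Wikipedia document for retrieval: "
title_instruction = "Represent the Science title: "  # domain + text_type, no task_objective

embeddings = model.encode([[query_instruction, "where is the food stored in a yam plant"]])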

The questions:

  1. what is the exhaustive list of possible values for domain?
  2. what is the exhaustive list of possible values for text_type?
  3. what is the exhaustive list of possible values for task_objective?

Multilingual capabilities

Dear authors,

It was a delight to read your work on instruction fine-tuned embeddings.

Are there any plans for extending the capabilities to a multilingual setup?

Generating batch embeddings using Hugging Face datasets crashes my EC2 instance with the XL model

This only happens with the XL model; large and smaller seem to work fine.

Here's how I import it and verify that it's working:

from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-xl')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings[0, :5])

Then I batch process using Hugging Face datasets:

from datasets import Dataset
ds = Dataset.from_parquet('../data/mydata.parquet')

import torch
torch.backends.cuda.matmul.allow_tf32 = True

BATCH_SIZE = 4

instruction = "Represent the news article for clustering"
def encode_text(item):
    input = [[instruction, subitem] for subitem in item['text']]
    return {"embedding": model.encode(input, batch_size=BATCH_SIZE)}

ds = ds.map(encode_text, batched=True, batch_size=BATCH_SIZE, remove_columns='text')

This will either crash my ipykernel or, worse, take my entire EC2 instance offline. It seems like this shouldn't be happening: the model needs 5 GB of VRAM and my g5.xlarge instance has 24 GB.

Am I doing the batching correctly? This is the only way I could get it to work and make sense of it.
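
In case it helps narrow things down, here is an alternative sketch (continuing from the snippet above) that skips the nested batching inside datasets.map and lets encode handle batching over the whole column itself; the column name text is assumed as before:

texts = ds['text']
pairs = [[instruction, t] for t in texts]
embeddings = model.encode(pairs, batch_size=BATCH_SIZE, show_progress_bar=True)
ds = ds.add_column('embedding', embeddings.tolist())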

Thanks!

Set embedding vector dimension

Hi, how are you?
Is there a way to set the dimension of the returned vector, or is it always fixed at 768?
(I'm very new to this)
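
For reference, a minimal sketch for checking the output size. The dimension is fixed by the model architecture, so producing smaller vectors would need an extra projection step (e.g. PCA) on top, which is not part of this package:

from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
print(model.get_sentence_embedding_dimension())  # 768 for instructor-large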

Thanks,
Fran

Peculiar Cosine Similarity Values

Is there a reason the model seems only to output embeddings with cosine similarities all in a very narrow range?

(One way this can happen is if effectively only a very small subspace of the 768 dimensions is getting used)

I have tried a number of different tasks, with many different strings and types of strings, and find that the results are nearly always in a very narrow range from about +0.4 to +0.9. This is despite creating test sets that should generate lots of orthogonal embeddings and graded similarities from near to further. I have literally been unable to get a value under +0.4 for any two embeddings; I have only ever gotten above +0.9 when testing a vector with itself (as a test.)

I find this true for both base and XL models.

I find this even for the example given in the README. I created the following test code:

import sys

arguments = sys.argv
original = (arguments[1].lower() == "true")
q1 = (arguments[2].lower() == "true")
crosscheck = not original

#Below copy-pasted from https://github.com/HKUNLP/instructor-embedding except 
#  for if statements failing "original"

from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-base')

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


if (original or q1):
    query  = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
else:
    query  = [['Represent the Wikipedia question for retrieving supporting documents: ','what is the dominant economic theory in the United States?']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
          ['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
          ['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
if (crosscheck and not original):
    corpus.append(query[0]+[])
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
if (original):
    pass
else:
    print(similarities)

I then ran it three times: exactly as posted on the README, adding a printout of similarities (with the query vector with itself added as a test), and then using a different query created to score very highly with the first corpus item and printing out the results:

$ python3 instructor_example_2.py True True
load INSTRUCTOR_Transformer
max_seq_length 512
3
$ python3 instructor_example_2.py False True
load INSTRUCTOR_Transformer
max_seq_length 512
3
[[0.7325637 0.71300924 0.7206404 1. ]]
$ python3 instructor_example_2.py False False
load INSTRUCTOR_Transformer
max_seq_length 512
3
[[0.86386305 0.83299637 0.8046411 0.9999999 ]]

In high-dimensional spaces a cosine similarity of 0.7 is very significant; however, a question about a yam has nothing visible to do with capitalism or disparate impact.

Two other embedding models both returned much more intuitive results (nearly all 0 for the first yam query; graduated similarities for the capitalism query) that were both pretty close to one another.

I've found the same thing with other queries and corpuses; the output of similarities is always in a very narrow range.

KeyError: 'task_id'

Hi! I am trying to train instructor-embedding and ran into the error shown in the title. More specifically,

Traceback (most recent call last):
  File "train.py", line 577, in <module>
    main()
  File "train.py", line 450, in main
    print(f'one batch in task {old_train_examples_raw[idx1]["task_id"]} is skipped')
KeyError: 'task_id'

I have downloaded the data and put it right in the cache_dir. Here is my running script:

# train the model
model_name=hkunlp/instructor-base
sentence_model_name=sentence-transformers/gtr-t5-base
output_dir=outputs
data_dir=medi-data

python train.py \
    --model_name_or_path=${sentence_model_name} \
    --output_dir=${output_dir} \
    --cache_dir=${data_dir} \
    --max_source_length=512 \
    --num_train_epochs=10 \
    --save_steps=500 \
    --cl_temperature=0.01 \
    --warmup_ratio=0.1 \
    --learning_rate=2e-5 \
    --overwrite_output_dir
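
A quick way to check what the loaded training data actually contains (the path and file name below are assumptions based on the script above):

import json

with open('medi-data/medi-data.json') as f:
    examples = json.load(f)
print(len(examples))
print(sorted(examples[0].keys()))  # the traceback shows train.py expects a 'task_id' key in each example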

Can't load config for 'sentence-transformers/gtr-t5-large'

Running the following command:
python train.py --model_name_or_path sentence-transformers/gtr-t5-large --output_dir . --cache_dir medi-data/medi-data.json --max_source_length 512 --num_train_epochs 10 --save_steps 500 --cl_temperature 0.01 --warmup_ratio 0.1 --learning_rate 2e-5 --overwrite_output_dir

and receiving the following error repeatedly:

  File "/home/engineering/instructor-embedding/train.py", line 570, in <module>
    main()
  File "/home/engineering/instructor-embedding/train.py", line 423, in main
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/engineering/miniconda3/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 535, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "/home/engineering/miniconda3/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 705, in from_pretrained
    config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/engineering/miniconda3/lib/python3.10/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/engineering/miniconda3/lib/python3.10/site-packages/transformers/configuration_utils.py", line 641, in _get_config_dict
    raise EnvironmentError(
OSError: Can't load config for 'sentence-transformers/gtr-t5-large'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'sentence-transformers/gtr-t5-large' is the correct path to a directory containing a config.json file

I even tried copying the config.json (from https://huggingface.co/sentence-transformers/gtr-t5-large/blob/main/config.json) into a directory I created in sentence-transformers/gtr-t5-large, but I receive the same error.
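
As a sanity check, loading the config outside of train.py can help distinguish a network or cache problem from a shadowing local directory:

from transformers import AutoConfig

config = AutoConfig.from_pretrained('sentence-transformers/gtr-t5-large')
print(config.model_type)  # prints 't5' when the download from the Hub succeeds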

Details about the Training Configurations

Thanks for your very exciting work and the nicely released code and data. This is very impressive work in sentence representation learning!

I would like to follow this work and re-implement the training process, but I am not clear about the per-device batch size and total batch size used when tuning the large and XL models, and I am also a bit confused about the actual number of training steps for all the models. Could you please answer these questions? I would like to estimate how many V100 32 GB GPUs are required based on them :)

Other model bases?

Thanks for this repo! I noticed that INSTRUCTOR is initialised from a T5/GTR checkpoint.

In your experiments, how crucial is this base? If we were to try training, say, an MPNet or BERT model on this dataset, has this been explored?

Just wondering before I bite the bullet on this!

Error when testing on the MSMARCO dataset

I have tried to run the evaluation following the README. When I run:
python examples/evaluate_model.py --model_name hkunlp/instructor-large --output_dir msmarco.out --task_name MSMARCO --result_file msmarco.res
I get the following error:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.


Replication of Instructor

Hey, we are currently trying to replicate the Instructor model. Issue #14 already asks this, but please report the exact training setup for the models.

Also, I am interested in your model's training loss. I did not get your reported results by running the training for 100k steps, and it is not clear to me how you used just 40k steps while your paper says the model was trained on the MEDI dataset.

I would appreciate your help here :)

hard negative dataset

Is the dataset used in the hard-negative commit available for download somewhere?

I'm assuming the MEDI instructions are different from the originally released dataset, since the eval instructions are different (e.g. dropping "; Input:").

docs: using the model with sagemaker

Hi,

I am following this guide to deploy instructor-embedding on Amazon SageMaker.

https://www.philschmid.de/custom-inference-huggingface-sagemaker

I've created a model.tar.gz that contains a cached version of the model.

drwxr-xr-x root/root         0 2023-06-20 15:40 model/
-rw-r--r-- root/root      1477 2023-06-20 15:33 model/.gitattributes
drwxr-xr-x root/root         0 2023-06-20 15:40 model/.ipynb_checkpoints/
-rw-r--r-- root/root     66318 2023-06-20 15:33 model/.ipynb_checkpoints/README-checkpoint.md
-rw-r--r-- root/root       122 2023-06-20 15:33 model/.ipynb_checkpoints/config_sentence_transformers-checkpoint.json
drwxr-xr-x root/root         0 2023-06-20 15:33 model/1_Pooling/
-rw-r--r-- root/root       270 2023-06-20 15:33 model/1_Pooling/config.json
drwxr-xr-x root/root         0 2023-06-20 15:33 model/2_Dense/
-rw-r--r-- root/root       116 2023-06-20 15:33 model/2_Dense/config.json
-rw-r--r-- root/root   3146603 2023-06-20 15:33 model/2_Dense/pytorch_model.bin
-rw-r--r-- root/root     66318 2023-06-20 15:33 model/README.md
-rw-r--r-- root/root      1529 2023-06-20 15:33 model/config.json
-rw-r--r-- root/root       122 2023-06-20 15:33 model/config_sentence_transformers.json
-rw-r--r-- root/root       461 2023-06-20 15:33 model/modules.json
-rw-r--r-- root/root 1339823867 2023-06-20 15:33 model/pytorch_model.bin
-rw-r--r-- root/root         53 2023-06-20 15:33 model/sentence_bert_config.json
-rw-r--r-- root/root       2201 2023-06-20 15:33 model/special_tokens_map.json
-rw-r--r-- root/root     791656 2023-06-20 15:33 model/spiece.model
-rw-r--r-- root/root    2422360 2023-06-20 15:33 model/tokenizer.json
-rw-r--r-- root/root       2407 2023-06-20 15:33 model/tokenizer_config.json
-rw-r--r-- root/root       2177 2023-06-22 18:17 code/inference.py
-rw-r--r-- root/root         70 2023-06-22 18:17 code/requirements.txt

In inference.py I load the model from the model directory inside model.tar.gz.

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained(model_dir + "/model")
model = AutoModel.from_pretrained(model_dir + "/model")

The model type for this comes up as T5Model, and it does not have an encode method.

[INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: 'T5Model' object has no attribute 'encode' : 400",

Which method and syntax do I use to perform the embedding?
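
For what it's worth, here is a hedged sketch of an inference handler that loads the checkpoint through the InstructorEmbedding package instead of AutoModel, assuming InstructorEmbedding is listed in code/requirements.txt (model_fn/predict_fn are the standard SageMaker Hugging Face inference toolkit hooks):

from InstructorEmbedding import INSTRUCTOR

def model_fn(model_dir):
    # load the cached checkpoint from the extracted model.tar.gz
    return INSTRUCTOR(model_dir + "/model")

def predict_fn(data, model):
    # expects a payload like {"inputs": [["Represent the Science title:", "..."], ...]}
    pairs = data["inputs"]
    embeddings = model.encode(pairs)
    return {"embeddings": embeddings.tolist()}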

hard coding problem of max sequence length

Due to this line, even if I run the code below, max_seq_length stays fixed at 512.
model.max_seq_length = 1024

Is it the case that max_seq_length cannot be changed?
If not, it would be better to remove this hard-coded line.

Install fails, dependencies don't get installed

I installed this package via:

pip install InstructorEmbedding

Then I try to run the quickstart. However, this yields an error on import.

---------------------------------------------------------------------------

ModuleNotFoundError                       Traceback (most recent call last)

<ipython-input-2-479956c3d0e9> in <module>
----> 1 from InstructorEmbedding import INSTRUCTOR
      2 model = INSTRUCTOR('hkunlp/instructor-large')
      3 sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
      4 instruction = "Represent the Science title:"
      5 

1 frames

/usr/local/lib/python3.8/dist-packages/InstructorEmbedding/instructor.py in <module>
      7 from tqdm.autonotebook import trange
      8 from torch import Tensor, device
----> 9 from sentence_transformers import SentenceTransformer
     10 from sentence_transformers.models import Transformer
     11 from transformers import AutoConfig

ModuleNotFoundError: No module named 'sentence_transformers'

It seems this project doesn't install its dependencies.
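
Until the packaging is fixed, a straightforward workaround is to install the missing dependency manually before importing the package:

pip install sentence-transformers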

Input Length / Accuracy

Do you have any data on performance across a range of input lengths? I'm working on neural search, and I came across instructor-xl as a potential replacement for text-embedding-ada-002, which has a context window of 8,191 tokens. Can instructor-xl handle that length without degrading? Any longer?

Issue 12 touched on this but didn't provide many details.

My immediate use is cosine similarity for search but I also have a need for clustering and categorization. Any info you can provide regarding the context length in relation to these use-cases will be super helpful and appreciated.


For anyone else reading this trying to compare the model to ada, here's a bit of discussion: UKPLab/sentence-transformers#1897

and related benchmarking: https://huggingface.co/spaces/mteb/leaderboard

Install MTEB

Hi,
Great work! I'm trying to reproduce the evaluation results on MTEB, but an error occurs when I install evaluation/MTEB following this command: cd evaluation/MTEB; pip install -e .
OSError: [Errno 2] No such file or directory: '/tmp/tmpni4r6sx4/output.json

Can you give some advice to fix this error? Thank you!

When does the training stop?

Hello,

Was there a specific reason for ending the training at 40K steps?
What criteria or basis (i.e., which metric) did you use to determine the stopping point?

file downloads on package import

When first importing the package, I noticed a number of downloads happening.

from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings)

I am deploying the package in the environment where I don't have access to Internet.

Is there a way to download all the required files ahead of time and tell the INSTRUCTOR to use the files instead of downloading them at runtime?
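
One approach, assuming INSTRUCTOR follows the usual SentenceTransformer save/load behaviour: download the checkpoint once on a machine with internet access, save it to a directory, copy that directory to the offline environment, and point the constructor at the local path:

from InstructorEmbedding import INSTRUCTOR

# on a machine with internet access
model = INSTRUCTOR('hkunlp/instructor-large')
model.save('./instructor-large-local')  # writes weights, config and tokenizer files

# on the offline machine, after copying the directory over
model = INSTRUCTOR('./instructor-large-local')
embeddings = model.encode([["Represent the Science title:", "3D ActionSLAM: wearable person tracking in multi-floor environments"]])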

multilingual model

Thank you for this wonderful model. I have a question for you: Do you have any plans to develop a multilingual version of INSTRUCTOR?

Does embedding pooling include instructions?

In section 2.1 of your paper, you indicate

Given an input text x and a task instruction I_x, INSTRUCTOR encodes their concatenation Ix ⊕ x. We then generate a fixed-sized, task-specific embedding E_I (I_x, x) by applying mean pooling to the last hidden representations over the tokens in x

I interpret this to mean that you average only over x and ignore the instruction embeddings.

I see two different kinds of model definitions in this repo: INSTRUCTOR_Transformer and INSTRUCTOR. It looks like the first one averages over x only and the second one will average over the instruction and x. Any reason why the second model averages over the whole sequence instead of just x? It appears that INSTRUCTOR_Transformer is not used anywhere else in this repo.
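
For concreteness, here is a minimal sketch of instruction-masked mean pooling, i.e. averaging only over the tokens of x and zeroing out the instruction positions; it illustrates the idea described in the paper, not the exact implementation in this repo:

import torch

def masked_mean_pool(last_hidden_state, attention_mask, instruction_lengths):
    # last_hidden_state: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    # instruction_lengths: number of instruction tokens to exclude, per example
    mask = attention_mask.clone().float()
    for i, n in enumerate(instruction_lengths):
        mask[i, :n] = 0.0                       # drop the instruction prefix from the average
    mask = mask.unsqueeze(-1)                   # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts                      # (batch, hidden)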

Bib citation issue

Hi, thanks for making the model publicly available. 🙂

I wanted to cite it so I used the provided bibtex entry:

@inproceedings{INSTRUCTOR,
  title={One Embedder, Any Task: Instruction-Finetuned Text Embeddings},
  author={Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu},
  url={https://arxiv.org/abs/2212.09741},
  year={2022},
}

However, the Overleaf BibTeX installation complains that there are too many commas in the author list. My understanding is that BibTeX treats "and" as the author separator and "," as the separator between last name and first names.

Changing it to the following makes it work.

@inproceedings{INSTRUCTOR,
  title={One Embedder, Any Task: {I}nstruction-Finetuned Text Embeddings},
  author={Su, Hongjin and Shi, Weijia and Kasai, Jungo and Wang, Yizhong and Hu, Yushi and  Ostendorf, Mari and Yih, Wen-tau and Smith, Noah A. and  Zettlemoyer, Luke and Yu, Tao},
  url={https://arxiv.org/abs/2212.09741},
  year={2022},
}

GPU memory leak.

I am doing batch inference over a very large dataset, and I see that slowly over time I run out of memory (OOM), even though I am deleting all assigned variables. Here is the code:

import gc

import pandas as pd
import torch
from pyspark.sql.functions import col
from tqdm import tqdm

for batch_number in tqdm(batch_numbers):
  ids = []
  inputs = []
  for x in sampled_input_batched.filter(col('batch') == batch_number).collect():
    ids.append(x[0])
    inputs.append(x[1])

  output = instructor_model.encode(inputs, batch_size=BATCH_SIZE)

  temp_df = spark.createDataFrame(pd.DataFrame({
    'reviewid': ids,
    'instructor_embeddings': output
  }))

  temp_df.write.parquet(f"{inference_folder}/temp_inference_output_batch_{batch_number}.parquet")

  with torch.no_grad():
    del ids
    del inputs
    del output
    del temp_df
    torch.cuda.empty_cache()
    gc.collect()

But after each iteration of the loop there is some residual memory retained by the GPU. The model takes 5685 MB of memory, but after each loop this number increases slightly, so after enough loops I run OOM. Could you tell me where the memory leak may be?

How influential is the "domain" and "task_objective" of each instruction on new data?

Hey Team, thank you for the awesome project and for sharing it with the wider community.

I have a question regarding the optimal strategy for attaching instructions to text we plan on embedding with InstructOR. If I have a set of documents I wish to embed, specifically for information retrieval, what would be the best way to write the instructions for these new documents and queries?

Additional Context:

In the appendices of your paper, I see that retrieval instructions were frequently tailored to the training dataset in question (e.g. for the gooaq_pairs dataset, you refer to each document as a "Google answer for retrieval"). This makes sense for training, but for inference on new data, how much do the domain and task_objective influence the resulting embeddings? I see they are both optional parts of the unified template, but the examples you provide have domains and objectives that range from the general "science sentence" to the specific "Wikipedia question".

I figure using domains & task objectives leads to more specialized embeddings, but is there a curated list of domains/objectives somewhere to help guide end users?

Thanks for your time!

Trouble running training jobs

Hi HKUNLP,

First off, really awesome paper on leveraging instructions to improve embedding quality across domains and tasks.

I am trying to train a model by following the training directions. I downloaded the MTEB dataset, installed the requirements, and am running the training job, but I continue to run into this error:
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.

Any idea why this is happening?

Thanks!

show_progress_bar = True does not work properly in notebook

Hi,

Thank you for this model.
I'm running model.encode() in a Jupyter notebook in VS Code. It works fine at generating the embeddings; however, when I try using the progress bar (show_progress_bar=True), no progress bar shows up in the VS Code notebook cell output, and the cell also hangs (it does not finish running, even when the embeddings have been generated).

When I try interrupting the cell in VSC, it says: "Interrupting the kernel timed out. Do you want to restart the kernel instead? All variables will be lost." After restarting the kernel, the output shows up.

Another potentially relevant piece of context is that when I initialize the model, I get the following warning

TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
  from tqdm.autonotebook import trange

I'm not sure what's causing this behavior, but wondering if you have any thoughts/suggestions. Thanks!

Code embeddings

I would like to fine-tune this model for code embeddings. Have you tried this before? Any suggestions on how to proceed? Do we need hard negatives, or can we use in-batch negatives?

Data & training details

Hi, awesome work on text embeddings!

After reading your paper and the code, I have a few questions.

As stated in the paper (Section 2.3 data construction),

Following Ni et al. (2021), we use four negative pairs (hard or in-batch negatives) during the model finetuning process

But in the data downloaded from the link in your repo, each training instance from each task is accompanied by exactly 1 positive and 1 negative. Since some datasets from embedding-training-data do not contain negatives, I'm wondering how the negatives are sampled: randomly, or in the same way as for the super-NI datasets? Also, the data construction code for the 300 datasets from super-NI is missing.

In addition, I think the current checkpoint is different from the first released one, since it's trained with hard negatives, but details about how the hard negatives are sampled are missing...

Finally, according to the paper many tasks are subsampled to balance each dataset; would you mind sharing the full data for each data source with all the hard negatives? Thanks.

The time cost for training this instructor?

Hi, thanks for your great work. I do not see the training time cost for this model in the paper. Could you publish the training time for the different model sizes (base, large, xl)?
