When I set beam_size > 1 (num_beams=5) to generate sequences, a TypeError occurs: missing 1 required positional argument: 'token_type_ids'.
How can I generate utterances with beam_size > 1?
generated_2 = model.generate(
    input_ids=input_ids,
    token_type_ids=token_type_ids,
    attention_mask=attention_mask,
    num_beams=5,
    length_penalty=1.0,
    min_length=3,
    max_length=32,
    no_repeat_ngram_size=1,
    use_decoder2=True,
    per_input_ids=persona_input_ids,
)
BoB\xlibs\generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, max_length, pad_token_id, eos_token_id, use_decoder2, **model_kwargs)
966
967 while cur_len < max_length:
--> 968 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
969
970 outputs_1, outputs_2 = self(**model_inputs, return_dict=True)
TypeError: prepare_inputs_for_generation() missing 1 required positional argument: 'token_type_ids'
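A minimal sketch of what seems to be going wrong, with no transformers dependency (the class names `BrokenModel` and `FixedModel` are illustrative, not from the BoB code): `beam_search()` calls `self.prepare_inputs_for_generation(input_ids, **model_kwargs)`, so if the override declares `token_type_ids` as a required positional parameter and `model_kwargs` does not happen to contain it, the call fails with exactly this TypeError. One common workaround is to make `token_type_ids` optional and fall back to `model_kwargs`:

```python
class BrokenModel:
    # Mirrors the failing signature: token_type_ids is a required
    # positional argument, but beam_search() only forwards input_ids
    # plus whatever happens to be in **model_kwargs.
    def prepare_inputs_for_generation(self, input_ids, token_type_ids, **model_kwargs):
        return {"input_ids": input_ids,
                "token_type_ids": token_type_ids,
                **model_kwargs}


class FixedModel:
    # Workaround: default token_type_ids to None and recover it from
    # model_kwargs when present, so the generic beam-search call works.
    def prepare_inputs_for_generation(self, input_ids, token_type_ids=None, **model_kwargs):
        if token_type_ids is None:
            token_type_ids = model_kwargs.pop("token_type_ids", None)
        return {"input_ids": input_ids,
                "token_type_ids": token_type_ids,
                **model_kwargs}


# Reproduce the error: model_kwargs without token_type_ids.
try:
    BrokenModel().prepare_inputs_for_generation([1, 2, 3], attention_mask=[1, 1, 1])
except TypeError as e:
    print(e)  # ... missing 1 required positional argument: 'token_type_ids'

# The relaxed signature handles the same call without raising.
inputs = FixedModel().prepare_inputs_for_generation([1, 2, 3], attention_mask=[1, 1, 1])
print(inputs["token_type_ids"])
```

If the BoB generation utilities thread extra keyword arguments through `model_kwargs` (as stock HuggingFace `generate()` does), another option may be to ensure `token_type_ids` is expanded per beam alongside `input_ids` before beam search starts; whether that expansion happens here depends on the forked `generation_utils.py`, which I have not verified.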