If the batch size is 128, then q_embed holds the features of the 128 questions after the word-embedding layer, and d_embed holds the features of the 128 responses. After the concatenate operation, the result does not look like corresponding question–response pairs joined together, so how does the first layer perform a one-dimensional convolution on this input? I don't understand this well. Could you explain it?
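For reference, here is a minimal NumPy sketch of what the shapes look like if the two embedding tensors are concatenated along the sequence axis and then convolved. All sizes (sequence length 20, embedding dimension 50, 100 filters, kernel size 3) are hypothetical, chosen only to illustrate the shape arithmetic:

```python
import numpy as np

batch, seq_len, embed_dim = 128, 20, 50  # hypothetical sizes
q_embed = np.random.randn(batch, seq_len, embed_dim)
d_embed = np.random.randn(batch, seq_len, embed_dim)

# Concatenate along the sequence axis: each sample becomes one question
# followed by its response -> shape (128, 40, 50).
x = np.concatenate([q_embed, d_embed], axis=1)

# A 1-D convolution slides a window along the sequence axis and mixes
# all embedding dimensions at once. With kernel size 3 and n_filters
# filters, the weight tensor is (3, embed_dim, n_filters) and the
# output length is 40 - 3 + 1 = 38.
kernel_size, n_filters = 3, 100
w = np.random.randn(kernel_size, embed_dim, n_filters)
out_len = x.shape[1] - kernel_size + 1
y = np.stack(
    [np.tensordot(x[:, i:i + kernel_size, :], w, axes=([1, 2], [0, 1]))
     for i in range(out_len)],
    axis=1,
)
print(y.shape)  # (128, 38, 100)
```

So the convolution window does slide across the boundary between the two sentences; whether that is intended is exactly the design question being asked here.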
Hi, thank you for sharing the code!
However, there is still a problem with the dataset you provided. In corpus_preprocessed.txt the indexes are plain numbers, whereas in relation_train.txt the query indexes are Q plus a number (e.g. Q1). This causes the error:

File "arcii.py", line 40, in get_texts
    texts1.append(all_words[t_].split(' '))
KeyError: 'Q1'
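A possible workaround, assuming the mismatch really is only the Q prefix (the helper name below is illustrative, not part of the repo's API):

```python
def normalize_id(raw_id):
    # Map a relation-file id like 'Q1' onto the bare numeric key
    # used in corpus_preprocessed.txt; numeric ids pass through.
    return raw_id[1:] if raw_id.startswith('Q') else raw_id

print(normalize_id('Q1'))  # '1'
print(normalize_id('42'))  # '42'
```

Applying something like this to t_ before the all_words[t_] lookup in get_texts should avoid the KeyError, but regenerating the dataset with consistent ids would be the cleaner fix.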