Position embedding layers in Keras.
pip install keras-pos-embd
import keras
from keras_pos_embd import PositionEmbedding
model = keras.models.Sequential()
model.add(PositionEmbedding(
    input_shape=(None,),
    input_dim=10,     # The maximum absolute value of positions.
    output_dim=2,     # The dimension of embeddings.
    mask_zero=10000,  # The index that represents padding (because `0` will be used in relative positioning).
    name='Pos-Embd',
))
model.compile('adam', keras.losses.mae, {})
model.summary()
(Note that you don't need to enable `mask_zero` if you concatenate this layer with other layers, such as word embeddings, that already carry masks.)
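Conceptually, a trainable position embedding is just a lookup table indexed by position. A minimal pure-Python sketch (the table values below are dummies; in the real layer they are learned weights of shape `(input_dim, output_dim)`):

```python
# Sketch of a trainable position embedding lookup.
# The table stands in for the layer's learned weight matrix;
# its values here are arbitrary placeholders.
input_dim, output_dim = 10, 2
table = [[pos * 0.1, pos * 0.2] for pos in range(input_dim)]

def position_embedding(seq_len):
    # One embedding vector per position 0 .. seq_len - 1.
    return [table[pos] for pos in range(seq_len)]

vectors = position_embedding(3)  # three 2-dimensional vectors
```

During training, gradients flow back into the table, so each position index learns its own vector.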
The sine and cosine embedding has no trainable weights. The layer has three modes; in `expand` mode it works just like `PositionEmbedding`:
import keras
from keras_pos_embd import TrigPosEmbedding
model = keras.models.Sequential()
model.add(TrigPosEmbedding(
    input_shape=(None,),
    output_dim=30,  # The dimension of embeddings.
    mode=TrigPosEmbedding.MODE_EXPAND,  # Use `expand` mode
    name='Pos-Embd',
))
model.compile('adam', keras.losses.mae, {})
model.summary()
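In `expand` mode the layer maps raw position indices to fixed sin/cos features. A sketch of the standard sinusoidal formulation that this kind of layer follows (the exact channel ordering inside the real layer may differ):

```python
import math

def trig_pos_embedding(positions, output_dim):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / output_dim))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / output_dim))
    out = []
    for pos in positions:
        vec = []
        for i in range(0, output_dim, 2):
            angle = pos / (10000 ** (i / output_dim))
            vec.append(math.sin(angle))
            vec.append(math.cos(angle))
        out.append(vec[:output_dim])
    return out

emb = trig_pos_embedding([0, 1, 2], output_dim=30)
# position 0 encodes as sin(0)=0, cos(0)=1 in every sin/cos pair
```

Because the features are fixed functions of the position, the layer adds no parameters to the model.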
If you want to add this embedding to an existing embedding, there is no need for a separate position input; use `add` mode:
import keras
from keras_pos_embd import TrigPosEmbedding
model = keras.models.Sequential()
model.add(TrigPosEmbedding(
    input_shape=(None, 100),
    mode=TrigPosEmbedding.MODE_ADD,  # Use `add` mode (default)
    name='Pos-Embd',
))
model.compile('adam', keras.losses.mae, {})
model.summary()
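In `add` mode the sin/cos features are summed elementwise into the incoming embedding, so the output shape equals the input shape. A self-contained sketch of that behavior (using the standard sinusoidal formula and a small dimension of 4 for brevity; not the library's exact code):

```python
import math

def add_trig_pos(embeddings):
    # embeddings: list of per-position vectors, shape (seq_len, dim).
    # Adds fixed sin/cos position features elementwise, so the
    # returned list has exactly the same shape as the input.
    dim = len(embeddings[0])
    out = []
    for pos, vec in enumerate(embeddings):
        pe = []
        for i in range(0, dim, 2):
            angle = pos / (10000 ** (i / dim))
            pe.append(math.sin(angle))
            pe.append(math.cos(angle))
        out.append([v + p for v, p in zip(vec, pe[:dim])])
    return out

x = [[0.0] * 4 for _ in range(3)]  # three positions, dim 4
y = add_trig_pos(x)                # same shape: 3 x 4
```

Feeding zeros makes the output equal to the position encoding itself, which is a handy way to inspect what the mode contributes.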