erksch / fnet-pytorch
Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms. With checkpoints.
License: MIT License
Great job, mate! I will have a look at your implementation thoroughly a bit later; for now it seems outstanding to me.
Is there any way I can use this repo to pre-train FNet from scratch?
How can we mask/pad tokens for sequences of varied length?
When we apply the FFT along the sequence dimension (-2), simply zero-padding the sequences will skew the result.
torch.fft.fft(torch.fft.fft(hidden_states.float(), dim=-1), dim=-2).real
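For reference, the nested 1D FFTs above are equivalent to a single 2D FFT over the last two dimensions, and because the DFT is global over the sequence axis, values at zero-padded positions do influence every output position. A minimal sketch (toy shapes, not the repo's actual code):

```python
import torch

def fourier_mixing(hidden_states: torch.Tensor) -> torch.Tensor:
    # FNet token mixing: FFT over the hidden dim (-1), then the
    # sequence dim (-2), keeping only the real part.
    return torch.fft.fft(torch.fft.fft(hidden_states.float(), dim=-1), dim=-2).real

# Toy batch: 2 sequences, 9 tokens, hidden size 4 (illustrative sizes).
x = torch.randn(2, 9, 4)
mixed = fourier_mixing(x)

# The nested 1D FFTs equal one 2D FFT over the last two dims.
assert torch.allclose(mixed, torch.fft.fft2(x).real, atol=1e-5)

# Changing a single "padding" position changes *all* other output
# positions, which illustrates the skew the question is about.
x2 = x.clone()
x2[0, -1, :] = 0.0
mixed2 = fourier_mixing(x2)
```

This is why masking is not straightforward here: unlike attention, there is no per-token mask to apply inside the mixing op itself.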
Hey,
Wonderful translation!
I just implemented it myself, but this FourierMatmul is giving a dimension-mismatch error.
Can you please let me know what dimensions it expects?
My sample inputs:

import json
from fnet import FNetPretraining
from transformers import FNetTokenizer

with open('config.json', 'r') as f:
    config = json.load(f)

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
inputs = tokenizer(['Hello, my dog is so cute', 'Hello world'],
                   return_tensors='pt',
                   padding=True,
                   truncation=True, max_length=512)
# print(inputs)
# {'input_ids': tensor([[   4, 9665, 16680,  275, 3314,   65,  215, 6387,    5],
#                       [   4, 9665,  725,    5,    3,    3,    3,    3,    3]]),
#  'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
#                            [0, 0, 0, 0, 0, 0, 0, 0, 0]])}

input_ids = inputs['input_ids']
token_type_ids = inputs['token_type_ids']
obj1 = FNetPretraining(config=config)
obj1.forward(input_ids, token_type_ids)
import torch
import torch.nn as nn
from scipy import linalg

class FourierMMLayer(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dft_mat_seq = torch.tensor(linalg.dft(config['max_position_embeddings']))
        self.dft_mat_hidden = torch.tensor(linalg.dft(config['hidden_size']))

    def forward(self, hidden_states):
        hidden_states_complex = hidden_states.type(torch.complex128)
        return torch.einsum(
            "...ij,...jk,...ni->...nk",
            hidden_states_complex,
            self.dft_mat_hidden,
            self.dft_mat_seq
        ).real.type(torch.float32)
Error
Traceback (most recent call last):
File "inference.py", line 22, in <module>
obj1.forward(input_ids, token_type_ids)
File "/mnt/sda1/ml_models/fourier_net/fnet.py", line 124, in forward
self.encoder(input_ids, type_ids)
File "/mnt/sda1/luck/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/sda1/ml_models/fourier_net/fnet.py", line 113, in forward
sequence_output = self.encoder(embedding_output)
File "/mnt/sda1/luck/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/sda1/ml_models/fourier_net/fnet.py", line 94, in forward
hidden_states = layer_module(hidden_states)
File "/mnt/sda1/luck/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/sda1/ml_models/fourier_net/fnet.py", line 80, in forward
fft_output = self.fft(hidden_states)
File "/mnt/sda1/luck/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/sda1/ml_models/fourier_net/fnet.py", line 62, in forward
return torch.einsum(
File "/mnt/sda1/luck/lib/python3.8/site-packages/torch/functional.py", line 299, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2, 9, 768]->[2, 1, 1, 9, 768] [768, 768]->[1, 1, 768, 1, 768] [512, 512]->[1, 512, 1, 512, 1]
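The shapes in the error are the clue: hidden_states is [2, 9, 768], but dft_mat_seq was built from linalg.dft(config['max_position_embeddings']) as [512, 512], so the einsum cannot contract a sequence axis of 9 against one of 512. The matmul variant assumes every batch is padded to max_position_embeddings. One possible workaround (a sketch, not the repo's code) is to build the sequence DFT matrix for the actual sequence length at forward time; note that simply slicing the 512x512 matrix would be wrong, because a submatrix of a DFT matrix is not the DFT matrix of the smaller size:

```python
import torch
import torch.nn as nn
from scipy import linalg  # linalg.dft(n) returns the n x n DFT matrix

class FourierMMLayer(nn.Module):
    # Sketch of a shape-safe variant: the hidden-dim DFT is fixed at
    # init, the sequence-dim DFT is built for the runtime sequence length.
    def __init__(self, config):
        super().__init__()
        self.dft_mat_hidden = torch.tensor(linalg.dft(config['hidden_size']))

    def forward(self, hidden_states):
        seq_len = hidden_states.shape[-2]
        # Build (or cache, in real code) the matching sequence DFT matrix.
        dft_mat_seq = torch.tensor(linalg.dft(seq_len))
        x = hidden_states.type(torch.complex128)
        return torch.einsum(
            "...ij,jk,ni->...nk", x, self.dft_mat_hidden, dft_mat_seq
        ).real.type(torch.float32)

layer = FourierMMLayer({'hidden_size': 4})  # toy config, illustrative sizes
out = layer(torch.randn(2, 9, 4))           # no longer requires seq_len == 512
```

Alternatively, pad every batch to max_position_embeddings before the encoder, which is what the fixed-size DFT matrices assume; for variable lengths, the torch.fft.fft path avoids the issue entirely.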
Hi,
I am currently working on reproducing the results from the paper on the GLUE benchmark. However, my current results are very far from those in the paper. Have you already conducted experiments in this direction, or have you been able to reproduce the scores?
I have a running implementation compatible with Huggingface if you want to try it out:
https://github.com/paul-grundmann/transformers/blob/fnet/src/transformers/models/fnet/modeling_fnet.py
In my case, it seems that the model steadily learns on the masked language modeling task but does not improve on downstream tasks at all even after 200k pre-training steps.