Comments (6)
Hi @se4u ,
Thanks for your interest in our repo! Regarding your questions:
- You do need to generate the words one by one (as with any sequence model). However, you shouldn't need to re-compute the already generated tokens. For instance, to generate the sequence [x1, x2, x3, x4] with a normal Transformer, you need to pass x1, (x1,x2), (x1,x2,x3) to the model, which can be quadratic due to re-computation. For DEQ, however, since the output is an equilibrium point, you can keep whatever has been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words. Here is how this padding (or what I call "memory") is appended in code: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L134. You can also see what I mean in Figure 1(b) of our paper (the "history padding" part).
Basically, given a starting word (x0*=)x1, you first generate its equilibrium x1*=x2. Then, with x0*=x1 as padding and x1*=x2 as input, you generate x2*=x3. After that, (x0*,x1*)=(x1,x2) becomes the padding, x2*=x3 is the input, and you generate x3*=x4, and so on.
- The paper provides experiments on language modeling, which is causal (cf. the 1st paragraph of Sec. 2). In this task, z_i cannot depend on x_{i+1}. However, DEQ can also be applied in cases where z_i does depend on x_{i+1} (e.g., a machine translation encoder); the causality structure shouldn't matter. The training objective is exactly the same as for any deep architecture; we used the cross-entropy loss (i.e., linear + softmax + NLL loss) in training: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L408
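The one-word-at-a-time equilibrium solve with "history padding" can be sketched with a toy causal fixed-point layer. Note this is only an illustration of the idea, not the repo's actual model: the map `f` (a causal running mean plus tanh, kept contractive with a 0.5 factor), the scalar "embeddings", and the "decode" step that feeds z* back in as the next input are all hypothetical stand-ins for the real DEQ transformer layer.

```python
import numpy as np

def f(z, x):
    """Toy causal DEQ-style layer (hypothetical stand-in for f_theta):
    position t depends only on x[:t+1] and z[:t+1], so earlier
    equilibria remain valid as the sequence grows."""
    T = len(z)
    L = np.tril(np.ones((T, T)))
    L = L / L.sum(axis=1, keepdims=True)   # row-wise causal mean, independent of T
    return 0.5 * np.tanh(L @ z + x)        # 0.5 keeps the map contractive

def solve_newest(z_mem, x_seq, iters=100):
    """Solve for the equilibrium of only the newest position;
    z_mem is the 'history padding' of already-solved equilibria, held fixed."""
    z_t = 0.0
    for _ in range(iters):
        z_t = f(np.append(z_mem, z_t), x_seq)[-1]   # fixed-point iteration on one coord
    return z_t

x_seq = [0.5]            # starting word x1 (toy scalar "embedding")
z_mem = np.array([])
for step in range(3):
    z_star = solve_newest(z_mem, np.array(x_seq))
    z_mem = np.append(z_mem, z_star)   # append equilibrium to the history padding
    x_seq.append(z_star)               # toy "decode": next input comes from z*
```

Each step costs one single-position equilibrium solve against a growing but frozen history, rather than re-solving all positions from scratch.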
Let me know if these answered your questions :-)
> for DEQ, as the output is an equilibrium point, you can keep whatever that's been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words.
My apologies, but I am still a little confused; consider the equations for the trellis network.
This suggests that at time t we will have to recompute the equilibria for all the hidden states up to time t. That is:
- at t=1 we give a dummy token (x₀) as input, compute z*₁, and generate x₁ (greedily using softmax)
- at t=2 we give (x₀, x₁) as input, recompute z₁, and compute a new z₂ -- because there is no guarantee that the new equilibrium point for the hidden state will be the same -- and then compute x₂ using z₂. The recomputed z₁ is discarded.
So at time t we need to recompute the hidden-state sequence up to time t-1. Is this correct? Or did you mean that at t=2 you re-use the z₁ generated at t=1? That seems wrong to me, because the z₁ generated at t=1 need not be the same as the z*₁ generated at t=2: the equilibrium sequences may be arbitrarily different.
Thank you again for taking the time to answer these questions.
No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.
The way to think about this is to analyze it in the Transformer's QKV form: only t=2 appears in the query (Q); z1* is directly appended to the key/value (K/V) part.
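A minimal numerical check of this QKV view, with random weights standing in for a real attention layer (the matrices `Wq`, `Wk`, `Wv` and the toy states `z` are illustrative, not the repo's code): the output at the newest position, computed with only that position in the query and the earlier states entering through cached K/V, matches the last row of full causal self-attention.

```python
import numpy as np

def softmax(a):
    # numerically stable softmax over the last axis
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, T = 4, 3
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
z = rng.standard_normal((T, d))     # z[:T-1] play the role of cached z*

# full causal self-attention over all T positions
Q, K, V = z @ Wq, z @ Wk, z @ Wv
mask = np.triu(np.ones((T, T)), k=1) * -1e9   # forbid attending to the future
full = softmax(Q @ K.T / np.sqrt(d) + mask) @ V

# incremental: only the newest position forms the query;
# earlier z* enter only through the cached K/V ("history padding")
q_new = z[-1:] @ Wq
out_new = softmax(q_new @ K.T / np.sqrt(d)) @ V   # last row needs no mask
```

Because the causal mask never lets earlier positions see the new one, appending z1* to K/V leaves its own output untouched; only the new query's output has to be computed.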
> No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.
Got it 👍 Thanks a lot for clarifying this point.
However, I just wanted to note -- and I think you'll agree -- that the equilibrium state found this way will be different from the one found by the procedure I outlined above. That is, we could define a different procedure, say DEQ', which takes a sequence of tokens and produces a same-size sequence of hidden states in one shot.
It's hard to say; what I observed is that as long as the model is causal (e.g., a decoder Transformer, or a temporal ConvNet), the equilibrium states obtained at the sequence level and at the (one-by-one) token level are usually the same. It's hard to show that the equilibrium is unique, but empirically this is usually what I observed. It may be easier to see this if we consider DEQ not as a Broyden unrolling but as an infinite-depth network.
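This agreement can be checked numerically on a toy causal, contractive fixed-point map (a hypothetical stand-in for a causal DEQ layer, not the repo's model): solving all positions jointly and solving them one token at a time with earlier equilibria frozen land on the same point.

```python
import numpy as np

def f(z, x):
    """Toy causal, contractive update: position t sees only x[:t+1], z[:t+1]."""
    T = len(z)
    L = np.tril(np.ones((T, T)))
    L = L / L.sum(axis=1, keepdims=True)   # row-wise causal mean, independent of T
    return 0.5 * np.tanh(L @ z + x)        # 0.5 makes the iteration a contraction

x = np.array([0.7, -0.2, 0.4, 1.1])

# sequence level: solve all positions jointly by fixed-point iteration
z_seq = np.zeros_like(x)
for _ in range(200):
    z_seq = f(z_seq, x)

# token level: solve each new position with earlier equilibria held frozen
z_tok = np.array([])
for t in range(len(x)):
    z_t = 0.0
    for _ in range(200):
        z_t = f(np.append(z_tok, z_t), x[:t + 1])[-1]
    z_tok = np.append(z_tok, z_t)
```

Intuitively, causality makes the joint fixed-point system lower-triangular: position 0's equation involves only itself, position 1's involves positions 0 and 1, and so on, so solving forward one position at a time recovers the joint solution (assuming, as here, the equilibrium is unique).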
> as long as the model is causal
Ah, got it. I think this was the key part: if the whole model is causal, then z_i will never use information from x_{j > i}.