
Comments (6)

jerrybai1995 commented on July 21, 2024

Hi @se4u,

Thanks for your interest in our repo! Regarding your questions:

  1. You do need to generate the words one by one (as in any sequence model). However, you shouldn't need to re-compute the already generated tokens. For instance, to generate the sequence [x1, x2, x3, x4] with a normal Transformer, you need to pass x1, (x1,x2), (x1,x2,x3) to the model, which is quadratic in cost due to the re-computation. For DEQ, however, since the output is an equilibrium point, you can keep whatever has been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words. Here is how this padding (or what I call "memory") is appended in code: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L134. You can also see what I mean in Figure 1(b) of our paper (the "history padding" part).

Basically, given a starting word (x0*=)x1, you first generate its equilibrium x1*=x2. Then, with x0*=x1 as padding and x1*=x2 as input, you generate x2*=x3. After that, (x0*,x1*)=(x1,x2) will be the padding, x2*=x3 will be the input, and you shall generate x3*=x4, etc. (see the toy sketch after this list).

  2. The paper provides experiments on language modeling, which is causal (cf. the 1st paragraph of Sec. 2). In this task, z_i cannot depend on x_{i+1}. However, DEQ can also be applied in cases where z_i does depend on x_{i+1} (e.g., a machine translation encoder); the causality structure shouldn't matter. The training objective is exactly the same as for any deep architecture; we used the cross-entropy loss (so linear + softmax + NLL loss) in training: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L408
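To make the generation loop concrete, here is a minimal, self-contained sketch of the idea. Everything below is illustrative, not the repo's actual code: the toy layer stands in for f_theta, and plain fixed-point iteration stands in for the Broyden solver we actually use.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, vocab = 16, 100
embed = nn.Embedding(vocab, d)
W_z, W_x, W_mem = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
decoder = nn.Linear(d, vocab)   # linear + softmax/NLL is the training objective

def f_theta(z, x, mem):
    # Toy stand-in for the transformation: the new state mixes the input
    # injection x with the *frozen* history states (here simply averaged).
    h = torch.stack(mem).mean(0) if mem else torch.zeros_like(z)
    return torch.tanh(W_z(z) + W_x(x) + W_mem(h))

@torch.no_grad()
def generate(x0, n_steps, n_iter=50):
    tokens, mem = [x0], []
    for _ in range(n_steps):
        x = embed(tokens[-1])
        z = torch.zeros_like(x)
        for _ in range(n_iter):       # solve the equilibrium of ONE word only
            z = f_theta(z, x, mem)
        mem.append(z)                 # z* joins the "history padding" (memory)
        tokens.append(decoder(z).argmax(-1))
    return tokens

print(generate(torch.tensor(1), n_steps=4))
```

The point of the sketch is the `mem` list: past equilibria are never re-solved, only read.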

Let me know if these answered your questions :-)


se4u commented on July 21, 2024

> for DEQ, as the output is an equilibrium point, you can keep whatever has been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words.

My apologies, but I am still a little confused; consider the equations for the trellis network:
[image: TrellisNet update equations]
This suggests that at time t we will have to recompute the equilibria of all the hidden states up to time t, i.e.:

  1. at t=1 we give a dummy token (x₀) as input, compute z₁*, and generate x₁ (greedily, using the softmax);
  2. at t=2 we give (x₀, x₁) as input, recompute z₁* and compute a new z₂* -- because there is no guarantee that the new equilibrium point for the hidden state will be the same -- and then compute x₂ using z₂*. The recomputed z₁* is discarded.

So at time t we will need to recompute the hidden states up to time t-1. Is this correct? Or did you mean that at time t=2 you will re-use the z₁* found at time t=1? That seems wrong to me, because the z₁* found at t=1 need not be the same as the z₁* found at t=2; the equilibrium sequences may be arbitrarily different.
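Concretely, the procedure I have in mind looks like this toy sketch (plain fixed-point iteration in place of the real solver; the causal mixing layer is just an illustrative stand-in):

```python
import torch

def f_theta(z, x):
    # Stand-in causal update: state t only sees positions <= t through a
    # row-normalized lower-triangular mix (scaled so the iteration contracts).
    T = z.shape[0]
    mask = torch.tril(torch.ones(T, T)) / torch.arange(1.0, T + 1).view(-1, 1)
    return torch.tanh(0.5 * (mask @ z) + x)

def resolve_prefix(xs, n_iter=50):
    # Re-solve the equilibrium of the WHOLE prefix jointly at every step.
    z = torch.zeros_like(xs)
    for _ in range(n_iter):
        z = f_theta(z, xs)
    return z

# At t=2: pass (x0, x1) as input, recompute z1 from scratch, keep only z2.
xs = torch.randn(2, 8)
z2_star = resolve_prefix(xs)[-1]   # the recomputed z1 is discarded
```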

Thank you again for taking the time to answer these questions.


jerrybai1995 commented on July 21, 2024

No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.

The way to think about this is to analyze it in the Transformer's QKV form: only t=2 appears in the query (Q); z1* is directly appended to the key/value (K/V) part.
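In toy code (shapes and names illustrative, not the repo's module), one update at t=2 looks like this:

```python
import torch
import torch.nn.functional as F

d = 16
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
z1_star = torch.randn(1, d)   # equilibrium from t=1, held fixed
z2 = torch.zeros(1, d)        # the only unknown being solved for at t=2

q = z2 @ Wq                              # Q: only t=2
kv = torch.cat([z1_star, z2], dim=0)     # K/V: history padding + new state
k, v = kv @ Wk, kv @ Wv
z2_update = F.softmax(q @ k.t() / d**0.5, dim=-1) @ v   # updates z2 only
```

Since z1* never appears in the query, it receives no update; only z2 is iterated to equilibrium.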


se4u commented on July 21, 2024

> No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.

Got it 👍 Thanks a lot for clarifying this point.

However, I just wanted to note -- and I think you'll agree -- that the equilibrium state found this way will be different from the equilibrium state found by the procedure I outlined above. I.e., we could define a different procedure, say DEQ', which takes a sequence of tokens and produces a same-size sequence of hidden states in one shot.


jerrybai1995 commented on July 21, 2024

It's hard to say; what I observed is that as long as the model is causal (e.g., a decoder Transformer or a temporal ConvNet), the equilibrium states obtained at the sequence level and at the (one-by-one) token level are usually the same. It's hard to show that the equilibrium is unique, but empirically this is usually what I observed to be true. It may be easier to see this if we consider DEQ not as a Broyden unrolling but as an infinite-depth network.
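A quick toy check of this claim (plain fixed-point iteration with a causal lower-triangular mix; nothing here is the repo's code): solving the whole sequence jointly and solving token by token, with earlier equilibria frozen, land on the same states.

```python
import torch

torch.manual_seed(0)
T, d = 4, 8
W = 0.1 * torch.randn(d, d)     # small weights, so the iteration contracts
xs = torch.randn(T, d)
mask = torch.tril(torch.ones(T, T)) / torch.arange(1.0, T + 1).view(-1, 1)

def f(z, x, m):
    # Causal toy update: state t only sees states/inputs at positions <= t.
    return torch.tanh((m @ z) @ W + x)

# Sequence level: solve all T equilibria jointly.
z_seq = torch.zeros(T, d)
for _ in range(500):
    z_seq = f(z_seq, xs, mask)

# Token level: solve z_t one at a time, freezing z_{<t} at their equilibria.
z_tok = torch.zeros(T, d)
for t in range(T):
    for _ in range(500):
        z_tok[t] = f(z_tok[: t + 1], xs[: t + 1], mask[: t + 1, : t + 1])[-1]

print(torch.allclose(z_seq, z_tok, atol=1e-5))   # True: the equilibria agree
```

The intuition: by causality, row t of the joint fixed-point equation involves only z_{<=t}, so freezing the already-solved prefix leaves exactly the same equation for z_t.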


se4u commented on July 21, 2024

> as long as the model is causal

Ah, got it. I think this was the key part: if the whole model is causal, then z_i will never use information from x_{j > i}.

