A poorman mamba code · state-spaces/mamba · 12 comments · OPEN

state-spaces commented on August 27, 2024
A poorman mamba code

Comments (12)

tridao commented on August 27, 2024

> I tried comparing my code line by line with your code as well as @johnma2006's code, keeping all three files side by side, but there have been no real findings so far, except for the inverse-softplus initialization of delta, which only your code performs.
>
> I am a bit stuck. Please advise.

The delta initialization is important.
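
As a reference point, here is a rough sketch of that delta (dt) initialization in the spirit of the official implementation: sample dt log-uniformly, then store its inverse-softplus as the bias so that softplus recovers the intended dt at the start of training. The constants below are my recollection of the repo defaults and should be double-checked against the source.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed defaults (verify against the official repo)
d_inner = 256
dt_min, dt_max, dt_init_floor = 1e-3, 1e-1, 1e-4

# Sample dt log-uniformly in [dt_min, dt_max], with a small floor.
dt = torch.exp(
    torch.rand(d_inner) * (math.log(dt_max) - math.log(dt_min)) + math.log(dt_min)
).clamp(min=dt_init_floor)

# Inverse of softplus: inv_dt = dt + log(1 - exp(-dt)), so softplus(inv_dt) == dt.
inv_dt = dt + torch.log(-torch.expm1(-dt))

dt_bias = nn.Parameter(inv_dt)
assert torch.allclose(F.softplus(dt_bias), dt, atol=1e-5)
```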

buttercutter commented on August 27, 2024

My Mamba implementation seems to work now, with no negative training loss so far. I will do further checking and regression runs to see whether the issue persists.

albertfgu commented on August 27, 2024

Have you plugged in a standard Transformer first? It seems more likely that there's something wrong with the training pipeline than with any particular model.
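
For what it's worth, a minimal sketch of that kind of sanity check might look like the following; the module name and sizes are made up, and the point is only to drop a known-good model into the same training loop.

```python
import torch.nn as nn

class TransformerBaseline(nn.Module):
    """Hypothetical known-good baseline: if a plain Transformer encoder also
    produces a negative training loss in the same training loop, the bug is
    in the pipeline, not in the Mamba block."""

    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):                 # (batch, seq_len)
        h = self.encoder(self.embed(input_ids))   # (batch, seq_len, d_model)
        return self.lm_head(h)                    # (batch, seq_len, vocab_size)
```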

albertfgu commented on August 27, 2024

It looks like you reimplemented the model from scratch, so this is beyond the scope of our ability to help. Perhaps check line by line that your implementation matches ours?

johnma2006 commented on August 27, 2024

Hi, here is a suggestion for checking the correctness of your implementation (a rough sketch follows the list below):

  1. Load an instance of your implementation and the official implementation side-by-side.
  2. Transfer the official instance's weights into your instance.
  3. Make sure the forward is identical. If not, drill down into each submodule to see where the diffs are coming from.

Good luck!
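
A minimal sketch of this check might look like the following, assuming the official mamba_ssm package is installed, a CUDA device is available for its fused kernels, and `MyMamba` is a stand-in name for your re-implementation with matching parameter names and shapes.

```python
import torch
from mamba_ssm import Mamba       # official block
from my_mamba import MyMamba      # hypothetical: your from-scratch block

d_model = 64
official = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2).cuda().eval()
mine = MyMamba(d_model=d_model, d_state=16, d_conv=4, expand=2).cuda().eval()

# 2. Transfer the official weights; strict=True reports any name/shape mismatch.
mine.load_state_dict(official.state_dict(), strict=True)

# 3. Compare forward outputs on the same input.
x = torch.randn(2, 128, d_model, device="cuda")
with torch.no_grad():
    diff = (official(x) - mine(x)).abs().max().item()
print(f"max abs diff: {diff:.3e}")  # near zero => the two implementations match

# If the diff is large, repeat the comparison submodule by submodule
# (in_proj, conv1d, x_proj, dt_proj, selective scan, out_proj) to localize it.
```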

radarFudan commented on August 27, 2024

A comment on initialization and parameterization: they are super important, in the sense that without suitable initialization and parameterization, learning long-term memory with SSMs can be unstable and thus difficult. (https://arxiv.org/abs/2311.14495)

buttercutter commented on August 27, 2024

Thanks for the comments.

I have already incorporated the proper delta initialization into my Mamba code, but it has not helped with the training-loss convergence issue yet.

I need to think about this from other angles. 👀

@radarFudan: I noticed that StableSSM tries to constrain the growth rate of the gradient by constraining the eigenvalues. This approach seems to complement what clip_grad_norm() does. I will give StableSSM a go in my implementation and post further updates here, thanks!
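
For illustration only (this is not the exact StableSSM reparameterization, which should be taken from the paper), the distinction being drawn is roughly the following: a reparameterization keeps the recurrent eigenvalues in the stable region by construction, whereas clip_grad_norm_ only bounds the size of each update.

```python
import torch
import torch.nn as nn

class StableDiagonalSSMParam(nn.Module):
    """Hypothetical example: keep the continuous-time eigenvalues of a diagonal
    state matrix strictly negative by construction, for any value of the free
    parameter, so the recurrence cannot blow up."""

    def __init__(self, d_state):
        super().__init__()
        self.log_neg_a = nn.Parameter(torch.zeros(d_state))

    def A(self):
        return -torch.exp(self.log_neg_a)  # eigenvalues = -exp(.) < 0

# Gradient clipping, by contrast, acts on the update rather than the parameterization:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```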

albertfgu commented on August 27, 2024

The stable SSM initializations may or may not help; we've never tried them. But I don't think the theory applies directly to the selective SSM setting. There shouldn't be anything in particular that you need to do here, so either there's an issue in the implementation, or Mamba somehow interacts weirdly with your data, which would be interesting.

  1. Have you checked that your mamba function returns the same outputs as ours, as @johnma2006 suggested?
  2. Is there any reason you can't directly call the model from this repository? Is the purpose of your model expository or for research?

buttercutter commented on August 27, 2024

I plugged in a small BERT model and the training works fine, so I am not really sure what else is missing from my Mamba architecture module.

Please advise.

buttercutter commented on August 27, 2024

I tried comparing my code line by line with your code as well as @johnma2006's code, keeping all three files side by side, but there have been no real findings so far, except for the inverse-softplus initialization of delta, which only your code performs.

I am a bit stuck. Please advise.

albertfgu commented on August 27, 2024

Great! What did you change?

buttercutter commented on August 27, 2024

@albertfgu: One of the major changes is the output sizing, which has to be tied to vocab_size instead of d_model.

See all the other changes required to get rid of the negative training loss.

I will update the new code on GitHub instead of the gist now.
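
For concreteness, a minimal sketch of that output head, assuming a standard language-modeling setup (the names and sizes here are illustrative, not taken from the actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, vocab_size = 256, 50257            # illustrative sizes
lm_head = nn.Linear(d_model, vocab_size, bias=False)

hidden = torch.randn(2, 128, d_model)       # (batch, seq_len, d_model) from the Mamba backbone
logits = lm_head(hidden)                    # (batch, seq_len, vocab_size)
targets = torch.randint(0, vocab_size, (2, 128))

# Cross-entropy over vocab-sized logits is always >= 0; projecting to d_model
# instead of vocab_size mismatches the target space and can corrupt the loss.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```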
