sdoria / simpleselfattention

210 stars · 28 forks · 142 KB

A simpler version of the self-attention layer from SAGAN, and some image classification results.

License: Apache License 2.0

Languages: Jupyter Notebook 91.76%, Python 8.24%

simpleselfattention's People

Contributors: sdoria


simpleselfattention's Issues

Clarification on implementation for convergence

Hi Sebastian, thanks for the layer.

Can I clarify what you mean by:
"the network converges if SimpleSelfAttention is placed right after a convolution layer that uses batch norm"

Are you saying the order of operations should always be

Conv => BatchNorm => SelfAttention?

And should the activation come afterwards?
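
For reference, a minimal sketch of that ordering in PyTorch. The import path and the assumption that the SimpleSelfAttention constructor takes the channel count as its first argument are guesses based on this repo, and the channel count of 64 is arbitrary:

    import torch.nn as nn
    from SimpleSelfAttention import SimpleSelfAttention  # assumed import path

    # One block with the ordering asked about:
    # convolution -> batch norm -> self-attention -> activation.
    def conv_bn_sa_block(channels=64):
        return nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            SimpleSelfAttention(channels),  # right after the batch-normalized conv
            nn.ReLU(inplace=True),          # activation afterwards, as asked
        )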

SimpleSelfAttention and SE

@sdoria
Hi, thank you for your work, it is great.
I have a question about SimpleSelfAttention versus SE (SENet), since SE is an independent structure: which is more effective? Have you done any experiments?
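
For context on that comparison, here is a standard Squeeze-and-Excitation block (Hu et al., 2018) sketched in PyTorch. It is not from this repo; it is the independent channel-gating structure the question refers to, shown so the two mechanisms can be contrasted: SE reweights channels using a global spatial average, while SimpleSelfAttention mixes information across spatial positions.

    import torch.nn as nn

    class SEBlock(nn.Module):
        """Standard Squeeze-and-Excitation block, shown for comparison only."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
            self.fc = nn.Sequential(             # excitation: per-channel gate
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w  # reweight each channel of the input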

Not clear regarding gamma!!

    self.gamma = nn.Parameter(tensor([0.]))

    o = self.gamma * o + x

or

    o = self.gamma * torch.bmm(h, beta) + x

Hi @sdoria,

I have a problem understanding the code. If we set gamma to 0., wouldn't that always be zero?

Where is the attention? If gamma is zero, the complete self-attention term is zero, and the code just adds the input back and passes it to the linear layer.

I don't see the self-attention here.

Maybe I did not understand it clearly; would you mind clarifying that?

Thanks!
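
One point that may resolve the confusion: gamma is an nn.Parameter, so 0. is only its initial value; the optimizer updates it during training like any other weight. Starting at 0 makes the layer an identity mapping at initialization (the same trick SAGAN uses), and the network then learns how strongly to mix the attention output in. Below is a minimal sketch of just that gating pattern, with `o` standing in for the attention output computed earlier in the layer:

    import torch
    import torch.nn as nn

    class GatedResidual(nn.Module):
        """Sketch of the gamma gating: identity at init, attention mixed in later."""
        def __init__(self):
            super().__init__()
            # Learnable scalar initialized to 0, so the block starts as the identity.
            self.gamma = nn.Parameter(torch.tensor([0.]))

        def forward(self, x, o):
            # o is the self-attention output; gamma receives gradients
            # (d(output)/d(gamma) = o), so it moves away from 0 as training proceeds.
            return self.gamma * o + x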
