Comments (5)
This is not true for the current LSTM implementation, but it could be done for Recur. Is this something people use, or can we close this as a non-priority?
from flux.jl.
Our LSTM implementation might be general enough that you can just throw a convolution into it. That would be a nice bit of reuse.
Definitely something that people use. I need a quick implementation of this for an image denoising application in a Julia codebase, and would love to see it in Flux :). It seems like PyCall + PyTorch is my best bet for now (e.g. https://towardsdatascience.com/video-prediction-using-convlstm-with-pytorch-lightning-27b195fd21a2).
At the very least, I think it would be a good contribution to the model zoo. Whether ConvLSTM should be a core layer would require more discussion (do you also add ConvRNN? ConvGRU? Is it possible to generalize the cells so they work with both conv and matmul? etc.)
Following up on this, Flax does an interesting thing by using Dense layers instead of explicitly writing out the matmuls: https://github.com/google/flax/blob/main/flax/linen/recurrent.py#L113-L118. My understanding is that the ConvLSTM paper simply subs out those matmuls for convolutions, so it may be possible to parameterize cells in terms of sublayer(s) that compute results from `W` applied to `x` or `h`.
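The parameterization idea above can be sketched in a few lines of plain Julia (no Flux dependency; all names here are illustrative, not part of the Flux API). The cell takes a sublayer `W` that maps the concatenated input and hidden state to the four gate pre-activations; passing a matmul-based `W` gives a plain LSTM step, and swapping in a convolution-based sublayer would give a ConvLSTM step without rewriting the gating logic.

```julia
sigm(x) = 1 / (1 + exp(-x))

# One step of a generic LSTM cell. `W` is any callable that maps the
# concatenated (x, h) to a vector of length 4n of gate pre-activations.
function lstm_step(W, (h, c), x)
    n = length(h)
    z = W(vcat(x, h))                              # gate pre-activations
    i, f, g, o = z[1:n], z[n+1:2n], z[2n+1:3n], z[3n+1:4n]
    cnew = sigm.(f) .* c .+ sigm.(i) .* tanh.(g)   # cell-state update
    hnew = sigm.(o) .* tanh.(cnew)                 # new hidden state
    return (hnew, cnew), hnew
end

# Dense instantiation: `W` is an affine map, which is exactly what the
# Flax code linked above does with its Dense sublayers.
nin, nh = 5, 3
Wmat, b = randn(4nh, nin + nh), zeros(4nh)
dense_W(v) = Wmat * v .+ b

state, y = lstm_step(dense_W, (zeros(nh), zeros(nh)), randn(nin))
```

A conv-based version would use the same `lstm_step`, with `W` combining `x` and `h` along the channel dimension and producing `4n` output channels.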
Related Issues (20)
- example for using apple GPU with flux
- Dimensions check for `Conv` is incomplete, leading to confusing error
- 2x performance regression due to 5e80211c3302b5e7b79b4f670498f5a68af6659b
- Why is Flux.destructure type unstable?
- bad formatting for PairwiseFusion docstring
- Zero-sized arrays cannot be applied to Dense layers.
- Adding Simple Recurrent Unit as a recurrent layer
- Collecting PyTorch -> Flux migration notes
- tests are failing due to ComponentArrays
- deprecate Flux.params
- Significant time spent moving medium-size arrays to GPU, type instability
- ConvTranspose errors with symmetric non-constant pad
- SamePad() for even sized filters.
- Dense layers with shared parameters
- Implementation of `AdamW` differs from PyTorch
- `gpu` should warn if cuDNN is not installed
- Cannot take `gradient` of L2 regularization loss
- Create a flag to use Enzyme as the AD in training/etc.
- test Enzyme gradient for loss functions
- test Enzyme gpu support