Comments (10)
Right, batch normalization is not available yet. We started by focusing on language models, where group-norm is far more common than batch-norm. We've just started adding the vision bits, e.g. convolutions, to get stable-diffusion running. We'd like to add some actual vision models now, so batch-norm is likely to be added soonish (a week or two, I would say).
from candle.
Not sure if it will be enough for your use case, but I've just merged #508, which adds a batch normalization layer. It can be used in a similar way to nn::batch_norm_2d, with the limitation that it's only designed for inference and would not work for training (it doesn't track or learn the running stats). I've tested it on some examples against the PyTorch implementation and it seems reasonable, but let me know if you see anything weird with it.
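For context on what "inference only" means here: with frozen running statistics, batch norm reduces to a fixed per-channel affine transform, with no per-batch statistics computed and no state updated. A minimal sketch in plain Rust over a flat slice of activations (the names `BatchNormStats` and `forward` are illustrative, not candle's actual API):

```rust
// Illustrative sketch of inference-time batch normalization for a
// single channel. Not candle's API; field/function names are made up.
struct BatchNormStats {
    running_mean: f32,
    running_var: f32,
    gamma: f32, // learned scale
    beta: f32,  // learned shift
    eps: f32,
}

impl BatchNormStats {
    // At inference, normalization uses the frozen running statistics;
    // nothing is computed from the batch and nothing is updated.
    fn forward(&self, x: &[f32]) -> Vec<f32> {
        let inv_std = (self.running_var + self.eps).sqrt().recip();
        x.iter()
            .map(|&v| (v - self.running_mean) * inv_std * self.gamma + self.beta)
            .collect()
    }
}

fn main() {
    let bn = BatchNormStats {
        running_mean: 1.0,
        running_var: 4.0,
        gamma: 2.0,
        beta: 0.5,
        eps: 0.0,
    };
    // (x - 1) / 2 * 2 + 0.5 for each element.
    let y = bn.forward(&[1.0, 3.0, 5.0]);
    assert!((y[0] - 0.5).abs() < 1e-6);
    assert!((y[1] - 2.5).abs() < 1e-6);
    assert!((y[2] - 4.5).abs() < 1e-6);
    println!("{y:?}");
}
```

Since `inv_std * gamma` and `beta - running_mean * inv_std * gamma` are constants at inference time, the whole layer can even be folded into a preceding convolution's weights.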
I am training networks, so unfortunately this is not enough for my use case.
Interesting, what models do you actually care about?
I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion), so I was thinking we would only support batch-norm for inference, as it's a mess to get right for training, unlike group/layer norms. That said, I'm certainly happy to reconsider if there is much demand for it.
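To illustrate why the training path is messier than group/layer norm: in training mode the layer normalizes with the *current batch's* statistics, mutates a running average as a side effect, and gradients must flow through the batch mean and variance. A rough sketch of just the forward/state-update part in plain Rust, with illustrative names and a PyTorch-style momentum convention (this is an assumption for the example, not candle's design):

```rust
// Illustrative sketch of training-mode batch norm for one channel.
// Names and the momentum convention (new = (1-m)*old + m*batch,
// PyTorch-style) are assumptions, not candle's API.
struct BatchNorm {
    running_mean: f32,
    running_var: f32,
    momentum: f32, // e.g. 0.1
    eps: f32,
}

impl BatchNorm {
    // Training forward pass: normalize with batch statistics and update
    // the running statistics as a side effect. (A real implementation
    // also needs the backward pass through `mean` and `var`.)
    fn forward_train(&mut self, x: &[f32]) -> Vec<f32> {
        let n = x.len() as f32;
        let mean = x.iter().sum::<f32>() / n;
        let var = x.iter().map(|&v| (v - mean).powi(2)).sum::<f32>() / n;
        self.running_mean = (1.0 - self.momentum) * self.running_mean + self.momentum * mean;
        self.running_var = (1.0 - self.momentum) * self.running_var + self.momentum * var;
        let inv_std = (var + self.eps).sqrt().recip();
        x.iter().map(|&v| (v - mean) * inv_std).collect()
    }
}

fn main() {
    let mut bn = BatchNorm { running_mean: 0.0, running_var: 1.0, momentum: 0.1, eps: 0.0 };
    // Batch mean is 1.0, batch variance is 1.0.
    let y = bn.forward_train(&[0.0, 2.0]);
    assert!((bn.running_mean - 0.1).abs() < 1e-6);
    assert!((bn.running_var - 1.0).abs() < 1e-6);
    assert!((y[0] + 1.0).abs() < 1e-6);
    assert!((y[1] - 1.0).abs() < 1e-6);
    println!("{y:?}");
}
```

Group/layer norm avoid all of this state: they compute statistics per sample, so forward behaves identically in training and inference and there is nothing to track between batches.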
I am working with ResNets for AlphaZero / MuZero.
Has there been any progress on this front?
> Interesting, what models do you actually care about? I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion) and so I was thinking that we would only have batch-norm for inference as it's a mess to get right for training contrary to group/layer norms. That said, certainly happy to reconsider if there is much demand for it.
I'm using MobileNetV3, which needs trainable batchnorms, as well as other mobile-scale real-time classification convnets.
Not much progress, I'm afraid. @Awpteamoose do you have some MobileNetV3 or other model code that you could share? It would be very interesting to point to it as an external resource that uses candle. If I understand correctly, you're training these models? I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.
I was porting my implementation from dfdx (coreylowman/dfdx#794) and halfway through noticed that batchnorms aren't trainable, so I don't really have any code to share.
> I would have assumed that nowadays even mobile scale vision models have mostly switched to transformers like tinyvit etc.
I'm probably just out of date, as the field moves very fast, but the transformers I have looked at also require an order of magnitude more FLOPS. I'm doing inference on tiny single-core CPUs as part of massively parallelised video analysis, so even real-time is too slow for me.
@LaurentMazare This should be closed due to the merge of #1504