Comments (10)
If I run the GC in every other iteration instead of `reclaim`: 0.001 s or 0.05 s for the line (every other iteration - longer on iterations where the GC runs) and 9 s total, though 41% of that is GC.
If I run the GC every iteration, 0.002 s for the line consistently, but 11 s total and 67% GC time.
EDIT: Ah, sweet spot... `GC.gc(false)` in every other iteration: 0.001 s for the line and 0.8 s total, 3% GC time. And `CUDA.reclaim()` at the end of the run drops GPU memory to minimal.
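A minimal sketch of that schedule, assuming a generic explicit-optimiser Flux training loop (the model, batch generator, and loss here are placeholders rather than the code from this issue):

```julia
using CUDA, Flux

# Sketch of the schedule above: an incremental (non-full) GC pass every other
# iteration, and a single reclaim once the run is finished.
function train!(model, opt_state, make_batch, n_iters)
    for i in 1:n_iters
        x, y = make_batch() |> gpu                      # move the fresh batch to the GPU
        grads = Flux.gradient(m -> Flux.mse(m(x), y), model)
        Flux.update!(opt_state, model, grads[1])

        isodd(i) && GC.gc(false)                        # incremental collection, every other iteration
    end
    CUDA.reclaim()                                      # hand cached device memory back to the driver
end
```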
Closing as addressed by #2416; feel free to reopen if needed, though.
Do you have an MWE which only captures the CPU <-> GPU data movement? 95% of the code here is unrelated, so it'd be tricky to determine what the culprit might be.
I trimmed a bit, but I'm not sure I can minimize it much further? This is already quite reduced from the real network. If I replace the body of `loss_func` with a call to `randn()`, the profiler count for the line `training_data = imagegen_test(training_batch_size) |> gpu` in `test()` drops from 2700 to 84.
In that case, it's possible that the allocation or copying required for `gpu` is having to wait for a backlog of other CUDA functions or kernels. You could look into preallocating GPU buffers for the input data, using `CUDA.unsafe_free!`, or wrapping your batches in https://cuda.juliagpu.org/stable/usage/memory/#Batching-iterator. Otherwise, the only other suggestions I can make without a deep dive into the example code would be to do the usual GPU library memory handling tricks (e.g. `CUDA.reclaim()`) and to try to reduce memory allocations in your loss function's call stack.
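For instance, a rough sketch of the preallocation idea, reusing the `imagegen_test`/`training_batch_size` names from the snippet above (the buffer shape and the loop body are assumptions, not the issue's actual code):

```julia
using CUDA

# Allocate the device-side input buffer once, outside the training loop,
# and copy each freshly generated CPU batch into it instead of allocating
# a new CuArray every iteration.
input_buf = CuArray{Float32}(undef, 64, 64, 1, training_batch_size)  # placeholder shape

for i in 1:n_iters
    cpu_batch = imagegen_test(training_batch_size)  # CPU-side data generation
    copyto!(input_buf, cpu_batch)                   # reuse the same device allocation

    # ... compute the loss and gradients from input_buf ...

    # Short-lived device temporaries can be released eagerly once they're done:
    # CUDA.unsafe_free!(tmp)
end
```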
Hmm okay, thanks. Putting `CUDA.reclaim()` at the end of each training loop shrinks the time spent on that line, but it's almost entirely replaced by `reclaim()` time-wise. `unsafe_free!` on the training data has no effect.
I must not be understanding something about GPUs here. The total size of the training data generated in each iteration in this example is 2 MB. The weights and biases of the network are 3 MB. The optimizer state is 6 MB. How am I saturating 16 GB of VRAM and 16 GB of shared memory?
`reclaim` is quite expensive, so I'd only recommend running it every few iterations unless you're hitting OOMs (which is clearly not a problem here). Can you check if `gpu` is taking less time (not samples) now? If it's still taking around the same amount of time, then the problem may lie elsewhere. If it's taking significantly less time, we can get more into what might be happening (e.g. how the Julia GC sucks for GPU because it doesn't know about GPU memory pressure).
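For wall-clock time rather than profiler samples, something like `CUDA.@time` on just that line should work (it synchronizes the device and also reports GPU allocations); `imagegen_test` and `training_batch_size` are the names from the snippet above:

```julia
# Reports elapsed time plus CPU and GPU allocations for this one statement.
CUDA.@time training_data = imagegen_test(training_batch_size) |> gpu
```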
Watching Task Manager's record of GPU memory, `reclaim` doesn't seem to do much until memory is full - if I have `reclaim` theoretically run 6 times, I only see 2-3 drops in GPU memory usage.
Using `@timed` on that line and `@time` on the whole call:
| reclaim frequency | gpu time | total time |
|---|---|---|
| every | 0.002 s | 35.1 s |
| skip 1 | 0.01 s | 21.5 s |
| none | 0.1 s | 28.5 s |
EDIT: And when I check what `reclaim` returns, it always returns `nothing`, which according to the docstring means it didn't do anything?
Yes, that sounds about right. I neglected to mention `GC.gc` because the linked CUDA.jl docs do, but it's usually required before calling `reclaim`. Looks like this is a classic case of the GC not playing nice with allocation-heavy GPU code, then.
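As a minimal illustration of that pairing (following the CUDA.jl memory-management docs rather than anything specific to this issue):

```julia
GC.gc()          # let Julia collect unreachable CuArrays first
CUDA.reclaim()   # then release the now-unused, cached pool memory back to the driver
```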
Thanks for your help! I'm new to GPU work so I was primarily relying on Flux docs. I'll see if there's something to note there. Meanwhile, I saw there's initial work being done over on CUDA.jl to run the GC heuristically, which would be nice.