Comments (2)
I think this just means that you ran out of memory. Below is your stack trace with more formatting.
Looking at the man page of posix_memalign, it can return the error code ENOMEM, whose value is 12 and corresponds to your error.
As for why you ran out of memory, it's hard to tell without more context. The only thing the stack trace tells us is that it happens on the backward pass of some matrix multiplication.
```
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: TorchError { c_error: "[enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7ffbfd74c441 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libc10.so)
frame #1: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x49 (0x7ffbfd74c259 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libc10.so)
frame #2: c10::alloc_cpu(unsigned long) + 0x65e (0x7ffbfd73546e in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libc10.so)
frame #3: <unknown function> + 0x13dca (0x7ffbfd736dca in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libc10.so)
frame #4: THStorage_resize + 0x76 (0x7ffbf40aa8b6 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #5: at::native::resize_cpu_(at::Tensor&, c10::ArrayRef<long>) + 0x38f (0x7ffbf3d014df in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #6: <unknown function> + 0xb2cb6e (0x7ffbf3e5ab6e in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #7: at::Tensor::resize_(c10::ArrayRef<long>) + 0x4d (0x55777ea3df2b in target/debug/pytorch-image-classification)
frame #8: <unknown function> + 0xb3d18c (0x7ffbf3e6b18c in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #9: at::native::mm(at::Tensor const&, at::Tensor const&) + 0x65 (0x7ffbf3c7a485 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #10: at::TypeDefault::mm(at::Tensor const&, at::Tensor const&) const + 0x5d (0x7ffbf4013a8d in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libcaffe2.so)
frame #11: torch::autograd::VariableType::mm(at::Tensor const&, at::Tensor const&) const + 0x6ea (0x7ffbf27059fa in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #12: <unknown function> + 0x3238ea (0x7ffbf22488ea in target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #13: torch::autograd::generated::MmBackward::apply(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> >&&) + 0x170 (0x7ffbf227b750 in target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #14: <unknown function> + 0x30cd5a (0x7ffbf2231d5a in target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #15: torch::autograd::Engine::evaluate_function(torch::autograd::FunctionTask&) + 0x385 (0x7ffbf222ae25 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #16: torch::autograd::Engine::thread_main(torch::autograd::GraphTask*) + 0xc0 (0x7ffbf222ce20 in target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #17: torch::autograd::Engine::thread_init(int) + 0x136 (0x7ffbf222a1f6 in /target/debug/build/torch-sys-8ab344225acfe8de/out/libtorch/libtorch/lib/libtorch.so.1)
frame #18: <unknown function> + 0xbd9e0 (0x7ffbfda229e0 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #19: <unknown function> + 0x76db (0x7ffbf19016db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #20: clone + 0x3f (0x7ffbf141288f in /lib/x86_64-linux-gnu/libc.so.6)
" }', src/libcore/result.rs:997:5
```
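The `12 vs 0` in the enforce message is the return value of `posix_memalign`, which reports failure by returning an error number directly rather than setting `errno`. As a quick sanity check that 12 is indeed ENOMEM, here is a minimal Rust sketch (assuming Linux errno numbering; `ErrorKind::OutOfMemory` needs Rust 1.54+):

```rust
use std::io::{Error, ErrorKind};

fn main() {
    // Build an io::Error from the raw OS error code 12 (ENOMEM on Linux).
    let err = Error::from_raw_os_error(12);
    // On glibc this prints: Cannot allocate memory (os error 12)
    println!("{err}");
    // The standard library classifies ENOMEM as an out-of-memory condition.
    assert_eq!(err.kind(), ErrorKind::OutOfMemory);
}
```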
from tch-rs.
So this error means that the process ran out of memory. Thanks for the clarification.