The core types and functions of the SciNim ecosystem
scinim / flambeau
Nim bindings to libtorch
The hasCuda procedure, which should return whether the linked Torch library was compiled with CUDA support, does not live in the global Torch namespace.
That procedure does not appear in the Torch documentation at all, but checking the header files locally shows that the ATen library declares it in Context.h.
Right now we use lent UncheckedArray instead of ptr UncheckedArray in some places. At the moment lent is handled behind the scenes with pointers, but according to this conversation with @Clyybber that is not guaranteed to stay the case in the future. Should we change all functions returning lent UncheckedArray to ptr UncheckedArray to be sure it won't break later?
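To make the trade-off concrete, here is a minimal, flambeau-independent sketch (the helper name rawView is made up for illustration) of returning a ptr UncheckedArray view into a seq's buffer; this form does not depend on how the compiler lowers lent:

```nim
# Illustrative only: `rawView` is a hypothetical helper, not flambeau API.
# Returning ptr UncheckedArray gives an unsafe-but-stable raw view that
# does not rely on `lent` being lowered to a pointer by the compiler.
proc rawView(s: var seq[float]): ptr UncheckedArray[float] =
  # Valid as long as `s` stays alive and is not resized.
  cast[ptr UncheckedArray[float]](s[0].addr)

var data = @[1.0, 2.0, 3.0]
let view = data.rawView()
view[1] = 42.0
echo data[1]  # the write through the raw view is visible in the seq
```

The cost is that callers must manage lifetimes themselves, which is exactly the safety lent was meant to provide.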
Unlike regular PyTorch/LibTorch, torchvision, torchtext and torchaudio do not have nice prebuilt binaries that we can download.
So we need to either:
Note: we can skip the hell that is CMake and directly use Nim as a C++ build system
For example, for the Arcade Learning Environment (an Atari games emulator for Reinforcement Learning):
Relevant thread: pytorch/vision#2692 (Using torchvision C++ API without installing python)
Yeah, first commit, first issue.
2 practical ways forward for generating the wrapper:
- toast from nimterop can help with that, with the recurse option
- the nimtorch style: https://github.com/sinkingsugar/nimtorch/blob/master/torch/torch_cpp.nim
But it seems like nimterop's recurse does see the include but does not go deep? cc @genotrance
After running torch_installer:
toast -f:ast2 -n -m=cpp -k -I={/path/to/minitorch/libtorch/include/} -r minitorch/libtorch/include/ATen/ATen.h
We want a nimble file that builds and runs the torch installer so that the required libraries are downloaded.
We can use a before-install task for that, similar to nimgen/nimterop libraries:
See https://github.com/genotrance/nim7z/blob/cc6c2b0/nim7z.nimble#L20-L27
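A sketch of what that hook could look like in flambeau.nimble, modeled on the nim7z example linked above (the installer file name torch_installer.nim and its output location are assumptions; adjust to the actual paths):

```nim
# flambeau.nimble (sketch): fetch libtorch before the package installs,
# in the same spirit as nim7z's before-install hook.
before install:
  # torch_installer.nim is assumed to download libtorch into vendor/
  exec "nim c -r torch_installer.nim"
```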
Right now there are a few tests in https://github.com/SciNim/flambeau/blob/master/tests/arraymancerTestSuite/tensor/test_accessors.nim that fail. Most notably:
a[1, 1] += 10
isn't possible because a[1, 1] is immutable.
The latest example uses data_ptr: it returns a ptr UncheckedArray[C10_Complex[float]].
For ease of use, it is better to convert it to ptr UncheckedArray[Complex].
The item procs and various FFT procs already have special implementations for Complex; a code review should be done on the API to make sure Complex is handled properly in the idiomatic Nim API.
Currently we use a mix of dynamic linking with passL: -lc10 -ltorch_cpu and static linking with {.link.}.
We want to cleanly use dynamic linking to start with, as the link step is very slow and c10/torch_cpu provide no static libs, at least on Linux.
So we want either dynlib or the passL strategy, but without any {.link.}.
Furthermore, we need to check where the library is needed to minimize path issues: is it nimble bin, or does it have a lib folder?
Currently we need to inline C++ code because Nim is not able to generate C++ code with default values or to wrap types without default constructors (nim-lang/Nim#4687).
This leads to the following code:
flambeau/proof_of_concepts/poc09_end_to_end.nim
Lines 12 to 26 in fe82c68
with emittypes as a workaround for nim-lang/Nim#16664:
flambeau/flambeau/cpp/emitters.nim
Lines 3 to 13 in fe82c68
Instead it would be more convenient to have a macro defModule
(name subject to bikeshedding) with the following syntax:
defModule:
type Net = object of Module
fc1: Linear = nil
fc2: Linear = nil
fc3: Linear = nil
and generates the proper C++ code (nil mapped to {nullptr}) and rewrites the type section with the proper {.pure, importcpp.} pragmas.
This is similar to nim-lang/RFCs#126, nim-lang/RFCs#252
Implementation details:
{.emit:["type something {", NimTypeInterpolated, " field{nullptr};"].}
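A hand-written sketch of what the expansion could look like for the Net example above (the exact emitted C++ and the pragma set are assumptions; nothing here is produced by an existing macro):

```nim
# Sketch of the intended defModule expansion for the Net example.
# The struct is emitted with {nullptr} field initializers, and the
# Nim type section is rewritten to import it.
{.emit: """/*TYPESECTION*/
struct Net: public torch::nn::Module {
  torch::nn::Linear fc1{nullptr};
  torch::nn::Linear fc2{nullptr};
  torch::nn::Linear fc3{nullptr};
};
""".}

type Net {.pure, importcpp.} = object of Module
  fc1: Linear
  fc2: Linear
  fc3: Linear
```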
Tensor comparison causes a segfault:
let shape = [2'i64, 3]
let t = zeros(shape.asTorchView(), kFloat32)
check t == [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]].toTensor
This is caused by toTensor implicitly creating a float64 Tensor.
This can be solved by:
let shape = [2'i64, 3]
let t = zeros(shape.asTorchView(), kFloat32)
check t == [[0.0'f32, 0.0, 0.0], [0.0'f32, 0.0, 0.0]].toTensor
This can be reproduced with the equal function.
The == and equal operators should check the Tensor type and potentially raise an exception (?) to avoid crashing the program.
This can be solved with a generic Tensor type ;)
let t = zeros([1, 2, 3].asTorchView(), kInt64)
does not compile.
The current workaround is:
let shape = [1'i64, 2, 3]
let t = zeros(shape.asTorchView(), kInt64)
We should either wrap the functions taking an IntArrayRef to accept openArray in a higher-level wrapping, or modify asTorchView.
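One possible shape for such a higher-level wrapper (a sketch only; it assumes asTorchView also accepts a seq, and reuses the flambeau names zeros, ScalarKind and Tensor from above):

```nim
# Sketch: accept a plain openArray[int] and convert to int64 before
# handing the shape to the raw binding, so callers can write [1, 2, 3].
proc zeros*(shape: openArray[int], dtype: ScalarKind): Tensor =
  var shape64 = newSeq[int64](shape.len)
  for i, dim in shape:
    shape64[i] = int64(dim)
  # assumes asTorchView works on a seq as well as an array
  zeros(shape64.asTorchView(), dtype)

let t = zeros([1, 2, 3], kInt64)  # no 'i64 suffix needed anymore
```

The copy into a seq also sidesteps the lifetime questions around handing a stack array to an IntArrayRef view.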
On the latest master commit, running poc03_arrayrefs.nim using
nim r --cc:vcc --backend:cpp --gc:arc --experimental:views poc03_arrayrefs.nim
fails in the C++ compiler with the following error message:
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(263): error C2440: '=': cannot convert from 'const T *' to 'NI64 *'
with
[
T=int64_t
]
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(263): note: Conversion loses qualifiers
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(282): error C2440: '=': cannot convert from 'const T *' to 'NI64 *'
with
[
T=int64_t
]
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(282): note: Conversion loses qualifiers
Lines 262 and 263 are:
NI64* T4_;
...
...
...
T4_ = (NI64*)0; // 262
T4_ = t__9bjO9cCSag10Z9adt9cTzQW9a4A.sizes().data(); //263
And they are generated from line 7 in the Nim file:
echo t.sizes().asNimView() # line 7
The definitions of sizes and data are:
func sizes*(a: Tensor): IntArrayRef {.importcpp:"#.sizes()".}
func data*[T](ar: ArrayRef[T]): ptr UncheckedArray[T] {.importcpp: "#.data()".}
To me, it looks like perhaps the compiler can't convert between NI64* and int64_t*.
Edit: Or it probably has something to do with const. Is it because T4_ is initialized that it complains? I tried adding noInit everywhere I could, but it still initialized it to 0 on the line above :/ Otherwise we could work around this by using const_cast from C++ somehow.
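For the const_cast idea, one option would be to bake the cast into the importcpp pattern of data (a sketch only: the '0 return-type interpolation and the safety of casting away const both need checking):

```nim
# Sketch: emit an explicit const_cast so the generated assignment
# `T4_ = ...data();` no longer loses qualifiers. Only safe if the
# resulting pointer is never written through.
func data*[T](ar: ArrayRef[T]): ptr UncheckedArray[T]
  {.importcpp: "const_cast<'0>(#.data())".}
```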
When trying to obtain a Scalar value from a Tensor of dimension 0, self.item(Complex64) fails during linking with undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'
Full ld error:
Hint: [Link]
/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /home/rcaillaud/.cache/nim/test_tensor_d/@mtest_tensor.nim.cpp.o: in function `main__5ERS50C2yKrnV9cScP0LGow()':
@mtest_tensor.nim.cpp:(.text+0xa967): undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'
/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: @mtest_tensor.nim.cpp:(.text+0xaa0c): undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'
collect2: error: ld returned 1 exit status
Error: execution of an external program failed: 'g++ -o /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/tests/build/test_tensor /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_assertions.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_formatfloat.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_io.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_system.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_exitprocs.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_parseutils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_math.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_unicode.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_strutils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_streams.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_times.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_hashes.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_sets.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_sequtils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_os.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_strformat.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_terminal.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_unittest.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_complex.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/@m..@s..@s..@scppstl@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@m..@sflambeau@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@m..@sflambeau@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@mtest_tensor.nim.cpp.o -lm -lrt -L/home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/vendor/libtorch/lib -lc10 -ltorch_cpu -Wl,-rpath,/home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/vendor/libtorch/lib -ldl'
When we, for example, slice a Tensor or perform a reduction along an axis, we have two options: either keep the dimensions of the original Tensor even if its size along that axis is 1, or squeeze away that extra dimension. For example, let's say we have this Tensor:
1 2
3 4
Tensor[2, 2]
If we slice it like this: t[_, 0], should we get back a tensor like
1
3
Tensor[2, 1] # Arraymancer
or
1 3
Tensor[2] # Pytorch
I vouch for going against libtorch's defaults and using the Arraymancer way. The reason for this is that it is trivial to squeeze away the unnecessary dimensions with t.squeeze, while it is non-trivial to restore the extra dimension(s).
I've got "flambeau-0.0.1/flambeau/raw_bindings/neural_nets.nim(111, 46) Error: undeclared identifier: 'IntArrayRef'" in the CPU version on Ubuntu 20.04 (WSL 2).
Adding ./c10 to the imports fixes it :)
When trying to run the nn test I got this error on Windows:
C:\Users\hugog\code\nim\flambeau\vendor\libtorch\include\c10/util/C++17.h(28): fatal error C1189: #error: Macro clash with min and max -- define NOMINMAX when compiling your program on Windows
It was solved by passing this to the compiler:
-t:-DNOMINMAX
Not sure where and how to define it so that it is applied to all files in Flambeau. :/
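One way to apply it to every file is the project-level configuration; a sketch using config.nims (a nim.cfg line --passC:-DNOMINMAX would work the same way):

```nim
# config.nims (sketch): define NOMINMAX for all C++ compilations
# so windows.h stops defining the clashing min/max macros.
when defined(windows):
  switch("passC", "-DNOMINMAX")
```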
Hi, I see that there hasn't been a commit for over a year. Is this abandoned?
I'm excited to see this project being developed. I saw that this project was recently rebranded from minitorch to Flambeau. I was wondering what your thoughts were on potentially trying to instead adopt the name NimTorch for this project from the previous effort. Given how popular PyTorch is, if this is going to be the official Nim wrapper for the Torch library, I feel like the name NimTorch would be the most fitting and intuitive, especially for those coming from Python. I'm assuming the possibility of using the NimTorch name would depend on if Giovanni was okay with it, and there'd be some issues to figure out with versioning, but I think this idea would be worth exploring. What do you think?
Title.
Hi. This is a good start. I have a request: Could you name precisely which C++ library you're wrapping? Maybe in a readme?
It should throw an exception instead:
block:
let shape2 = [8'i64, 8'i64]
let shape3 = [7'i64, 8'i64]
var f64input2 = rand(shape2.asTorchView(), kFloat64)
var f64input3 = rand(shape3.asTorchView(), kFloat64)
echo f64input2 + f64input3
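A sketch of what a checked wrapper could look like (hypothetical throughout: rawAdd stands in for the unchecked binding, a strict equality check is too strong once broadcasting is considered, and comparing sizes via asNimView is assumed to work):

```nim
# Sketch: validate shapes on the Nim side and raise instead of letting
# libtorch crash the process. `rawAdd` is a placeholder for the
# unchecked raw-binding `+`.
proc checkedAdd(a, b: Tensor): Tensor =
  if a.sizes().asNimView() != b.sizes().asNimView():
    raise newException(ValueError,
      "shape mismatch: " & $a.sizes().asNimView() &
      " vs " & $b.sizes().asNimView())
  rawAdd(a, b)
```

A real check would have to mirror libtorch's broadcasting rules rather than require identical shapes.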