
flambeau's Introduction

scinim

The core types and functions of the SciNim ecosystem

flambeau's People

Contributors

clonkk, hugogranstrom, machineko, mratsim, vindaar


flambeau's Issues

`hasCuda` is not in global Torch scope

The hasCuda procedure, which should report whether the linked Torch library was compiled with CUDA support, does not live in the global Torch namespace.

The procedure does not appear in the Torch documentation at all, but checking the header files locally shows that the ATen library declares it in Context.h.
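Until it is exposed in the global scope, a minimal workaround can bind the ATen symbol directly. This is a hedged sketch: the Nim-side name and the exact header path inside the libtorch distribution are assumptions.

```nim
# Sketch: bind ATen's at::hasCUDA() from Context.h directly.
# Header path relative to libtorch/include is an assumption.
proc hasCuda*(): bool {.importcpp: "at::hasCUDA()", header: "ATen/Context.h".}
```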

To `lent` or not to `lent`

Right now we use `lent UncheckedArray` instead of `ptr UncheckedArray` in some places. At the moment `lent` is handled behind the scenes with pointers, but per this conversation with @Clyybber that is not guaranteed to stay the case in the future. Should we change all functions returning `lent UncheckedArray` to `ptr UncheckedArray` to be sure nothing breaks in the future?
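For reference, the two shapes under discussion, sketched on a toy type (not flambeau's actual Tensor; this is an illustration of the signatures only):

```nim
type Buffer = object
  data: ptr UncheckedArray[float32]

# `lent` variant: a borrow whose pointer-based lowering is a compiler
# implementation detail today, not a guarantee.
func viewLent(b: Buffer): lent UncheckedArray[float32] =
  b.data[]

# `ptr` variant: explicitly a raw pointer, immune to changes in how
# `lent` is lowered.
func viewPtr(b: Buffer): ptr UncheckedArray[float32] =
  b.data
```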

vendoring and wrapping torchvision, torchtext and torchaudio

Unlike regular PyTorch/LibTorch, torchvision, torchtext and torchaudio do not ship nice prebuilt binaries that we can download.

So we need to either:

  • get the library from conda or pip and discard the Python part
  • build the library ourselves on install

Note: we can skip the hell that is CMake and use Nim directly as a C++ build system.
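As a sketch of that idea, foreign C++ sources can be driven straight from a Nim module with build pragmas. The paths and file names below are illustrative assumptions, not the real torchvision layout:

```nim
# Hypothetical example: compile vendored torchvision sources with Nim's
# own build pragmas instead of CMake. Paths are assumptions.
{.passC: "-Ivendor/vision/torchvision/csrc".}
{.compile: "vendor/vision/torchvision/csrc/ops/cpu/nms_kernel.cpp".}
```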

For example, for the Arcade Learning Environment (an Atari games emulator for Reinforcement Learning):

Relevant thread: pytorch/vision#2692 (Using torchvision C++ API without installing python)

Wrapping

Yeah, first commit, first issue.

2 practical ways forward for generating the wrapper:

But it seems that nimterop's -recurse sees the include yet does not descend into it? cc @genotrance


After running torch_installer

toast -f:ast2 -n -m=cpp -k -I={/path/to/minitorch/libtorch/include/} -r minitorch/libtorch/include/ATen/ATen.h 

Convert C10_Complex / CppComplex to Nim Complex[T] on high level API

The latest example uses `data_ptr`: it returns a `ptr UncheckedArray[C10_Complex[float]]`.

For ease of use, it would be better to convert this to a `ptr UncheckedArray[Complex]`.

The `item` procs and the various FFT procs already have special implementations for Complex; a code review of the API should be done to make sure Complex is handled properly in the idiomatic Nim API.
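A possible conversion helper, assuming `c10::complex<T>` and Nim's std `Complex[T]` share the same `{re, im}` memory layout (a stand-in `C10_Complex` is declared here just for illustration):

```nim
import std/complex

type C10_Complex[T] = object   # stand-in for the wrapped c10::complex<T>
  re, im: T

# Reinterpret the buffer: both types are two consecutive Ts in memory,
# so a pointer cast suffices; no copy is made.
func asNimComplex[T](p: ptr UncheckedArray[C10_Complex[T]]):
    ptr UncheckedArray[Complex[T]] =
  cast[ptr UncheckedArray[Complex[T]]](p)
```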

Fix linking

Currently we use a mix of dynamic linking via `passL: -lc10 -ltorch_cpu` and static linking with `{.link.}`.

For starters we want to use dynamic linking cleanly, since the link step is very slow and no static libraries are provided for c10/torch, at least on Linux.

So we want either `dynlib` or the `passL` strategy, but without any `{.link.}`.
Furthermore, we need to check where the library should live to minimize path issues: does it go in the nimble bin, or does it get a lib folder?
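One way to centralize the dynamic-only strategy is a `config.nims` next to the project. This is an untested sketch and the vendor path is illustrative:

```nim
# config.nims sketch: dynamic linking only, no {.link.} anywhere.
switch("passL", "-L./vendor/libtorch/lib -lc10 -ltorch_cpu")
switch("passL", "-Wl,-rpath,./vendor/libtorch/lib")
```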

defModule macro to define neural nets without inline C++

Currently we need to inline C++ code because Nim can neither generate C++ code with default values nor wrap types without default constructors (nim-lang/Nim#4687).

This leads to the following code

emitTypes:
  """
  struct Net: public torch::nn::Module {
    torch::nn::Linear fc1{nullptr};
    torch::nn::Linear fc2{nullptr};
    torch::nn::Linear fc3{nullptr};
  };
  """

type Net {.pure, importcpp.} = object of Module
  fc1: Linear
  fc2: Linear
  fc3: Linear

where `emitTypes` is a workaround for nim-lang/Nim#16664:

template emitTypes*(typesection: static string): untyped =
  ## Emit a C/C++ typesection
  {.emit: "/*TYPESECTION*/\n" & typesection.}

template emitGlobals*(globals: static string): untyped =
  ## Emit a C/C++ global variable declaration
  {.emit: "/*VARSECTION*/\n" & globals.}

template emitIncludes*(includes: static string): untyped =
  ## Emit a C/C++ include
  {.emit: "/*INCLUDESECTION*/\n" & includes.}

Instead it would be more convenient to have a macro defModule (name subject to bikeshedding) with the following syntax:

defModule:
  type Net = object of Module
    fc1: Linear = nil
    fc2: Linear = nil
    fc3: Linear = nil

which generates the proper C++ code (`nil` mapped to `{nullptr}`) and rewrites the type section with the proper `{.pure, importcpp.}` pragma.

This is similar to nim-lang/RFCs#126, nim-lang/RFCs#252

Implementation details:

  • To interpolate with the proper symbol, use the syntax {.emit:["type something {", NimTypeInterpolated, " field{nullptr};"].}
  • With interpolation we should properly support both C++ types (without extracting from their importcpp pragma) and Nim types (without fiddling with gensym).

Comparing Tensors of different types segfaults

Tensor comparison causes a segfault:

      let shape = [2'i64, 3]
      let t = zeros(shape.asTorchView(), kFloat32)
      check t == [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]].toTensor

This is caused by `toTensor` implicitly creating a float64 Tensor.

This can be solved by:

      let shape = [2'i64, 3]
      let t = zeros(shape.asTorchView(), kFloat32)
      check t == [[0.0'f32, 0.0, 0.0], [0.0'f32, 0.0, 0.0]].toTensor

This can also be reproduced with the `equal` function.

The `==` and `equal` operators should check the Tensor type and potentially raise an exception (?) to avoid crashing the program.
This could also be solved with a generic Tensor type ;)
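A hedged sketch of such a guard, assuming hypothetical `scalarType` and `rawEqual` bindings on top of the existing Tensor wrapper:

```nim
# Check dtypes before delegating to libtorch, raising a catchable Nim
# exception instead of letting the C++ side crash the program.
proc `==`*(a, b: Tensor): bool =
  if a.scalarType != b.scalarType:   # scalarType is an assumed binding
    raise newException(ValueError,
      "cannot compare Tensors of different scalar types")
  rawEqual(a, b)                     # assumed binding to the unchecked op
```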

asTorchView doesn't work on non-declared seq

let t = zeros([1, 2, 3].asTorchView(), kInt64)

does not compile.

The current workaround is:

let shape = [1'i64, 2, 3]
let t = zeros(shape.asTorchView(), kInt64)

We should either wrap the functions taking an `IntArrayRef` so they accept `openArray` in a higher-level wrapping, or modify `asTorchView`.
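A possible higher-level overload, following the `zeros`/`asTorchView` names from the snippets above (the rest is an assumption):

```nim
# Accept any openArray of int64 by first materializing an owned seq, so
# the data outlives the borrowed IntArrayRef view passed to libtorch.
proc zeros*(shape: openArray[int64], dtype: ScalarKind): Tensor =
  let owned = @shape               # owned copy keeps the data alive
  zeros(owned.asTorchView(), dtype)

# Now `zeros([1'i64, 2, 3], kInt64)` works without a named variable.
```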

Problems with ArrayRef in the C++ compiler

On the latest master commit, running poc03_arrayrefs.nim with
nim r --cc:vcc --backend:cpp --gc:arc --experimental:views poc03_arrayrefs.nim
fails in the C++ compiler with the following error message:

C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(263): error C2440: '=': cannot convert from 'const T *' to 'NI64 *'
        with
        [
            T=int64_t
        ]
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(263): note: Conversion loses qualifiers
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(282): error C2440: '=': cannot convert from 'const T *' to 'NI64 *'
        with
        [
            T=int64_t
        ]
C:\Users\hugog\nimcache\poc03_arrayrefs_d\@mpoc03_arrayrefs.nim.cpp(282): note: Conversion loses qualifiers

Lines 262 and 263 are:

NI64* T4_;
...
...
...
T4_ = (NI64*)0; // 262
T4_ = t__9bjO9cCSag10Z9adt9cTzQW9a4A.sizes().data(); //263

They are generated from line 7 of the Nim file:

echo t.sizes().asNimView() # line 7

The definitions of sizes and data are:

func sizes*(a: Tensor): IntArrayRef {.importcpp:"#.sizes()".}

func data*[T](ar: ArrayRef[T]): ptr UncheckedArray[T] {.importcpp: "#.data()".}

To me, it looks like the compiler can't convert between NI64* and int64_t*.
Edit: or it probably has something to do with const. Is it because T4_ is initialized that it complains? I tried adding noInit everywhere I could, but it is still initialized to 0 on the line above :/ Otherwise we could work around this by using const_cast from C++ somehow.

item(Complex64) does not compile

Trying to obtain a Scalar value from a Tensor of dimension 0 using self.item(Complex64) fails at link time with undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'

Full ld error :

Hint:  [Link]
/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /home/rcaillaud/.cache/nim/test_tensor_d/@mtest_tensor.nim.cpp.o: in function `main__5ERS50C2yKrnV9cScP0LGow()':
@mtest_tensor.nim.cpp:(.text+0xa967): undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'
/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: @mtest_tensor.nim.cpp:(.text+0xaa0c): undefined reference to `tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA at::Tensor::item<tyObject_Complex__zWadV1X9aMO7qS9bsQFB0JFA>() const'
collect2: error: ld returned 1 exit status
Error: execution of an external program failed: 'g++   -o /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/tests/build/test_tensor  /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_assertions.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_formatfloat.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_io.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_system.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_exitprocs.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_parseutils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_math.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_unicode.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_strutils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_streams.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_times.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_hashes.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_sets.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_sequtils.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_os.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_strformat.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_terminal.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_unittest.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/stdlib_complex.nim.cpp.o /home/rcaillaud/.cache/nim/test_tensor_d/@m..@s..@s..@scppstl@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@m..@sflambeau@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@m..@sflambeau@[email protected] /home/rcaillaud/.cache/nim/test_tensor_d/@mtest_tensor.nim.cpp.o  -lm -lrt -L/home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/vendor/libtorch/lib -lc10 -ltorch_cpu -Wl,-rpath,/home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/vendor/libtorch/lib   -ldl'

[RFC] Should we keep dimensions (like Arraymancer) or squeeze dimensions (like PyTorch) by default?

When we slice a Tensor or perform a reduction along an axis, we have two options: either keep the dimensions of the original Tensor even if its size along that axis is 1, or squeeze away that extra dimension. For example, say we have this Tensor:

1 2
3 4
Tensor[2, 2]

If we slice it like t[_, 0], should we get back a tensor like

1
3
Tensor[2, 1] # Arraymancer

or

1 3
Tensor[2] # Pytorch

I'm in favor of going against libtorch's defaults and using the Arraymancer way. The reason is that it is trivial to squeeze away the unnecessary dimension with t.squeeze, while it is non-trivial to restore that extra dimension(s).
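Under that default, matching PyTorch behavior stays a one-liner. The slicing syntax and `squeeze` name below are illustrative, following the proposal:

```nim
# Keep-dims by default: squeezing on demand recovers PyTorch's shape,
# while the reverse would require knowing which axis to unsqueeze.
let kept = t[_, 0]             # shape [2, 1]: dimension preserved
let torchLike = kept.squeeze() # shape [2]: PyTorch-style, when wanted
```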

Error: undeclared identifier: 'IntArrayRef'

I got "flambeau-0.0.1/flambeau/raw_bindings/neural_nets.nim(111, 46) Error: undeclared identifier: 'IntArrayRef'" in the CPU version on Ubuntu 20.04 (WSL 2).

Adding ./c10 to the imports fixes it :)

define NOMINMAX when compiling your program on Windows

When trying to run the nn test I got this error on Windows:

C:\Users\hugog\code\nim\flambeau\vendor\libtorch\include\c10/util/C++17.h(28): fatal error C1189: #error:  Macro clash with min and max -- define NOMINMAX when compiling your program on Windows

It was solved by passing this to the compiler:
-t:-DNOMINMAX
I'm not sure where and how to define it so that it is applied to all files in Flambeau. :/
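One plausible central place (an untested assumption) is a `when` guard in the project's `config.nims`, so every module compiled on Windows picks up the define:

```nim
# config.nims sketch: apply the define globally, Windows only.
when defined(windows):
  switch("passC", "-DNOMINMAX")
```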

Project status

Hi, I see that there hasn't been a commit for over a year. Is this abandoned?

Brand Name

I'm excited to see this project being developed. I saw that this project was recently rebranded from minitorch to Flambeau. I was wondering what your thoughts were on potentially trying to instead adopt the name NimTorch for this project from the previous effort. Given how popular PyTorch is, if this is going to be the official Nim wrapper for the Torch library, I feel like the name NimTorch would be the most fitting and intuitive, especially for those coming from Python. I'm assuming the possibility of using the NimTorch name would depend on if Giovanni was okay with it, and there'd be some issues to figure out with versioning, but I think this idea would be worth exploring. What do you think?

Which C++ library?

Hi. This is a good start. I have a request: Could you name precisely which C++ library you're wrapping? Maybe in a readme?

Operations on Tensors with different shapes cause a segfault

They should throw an exception instead:

block:
  let shape2 = [8'i64, 8'i64]
  let shape3 = [7'i64, 8'i64]
  var f64input2 = rand(shape2.asTorchView(), kfloat64)
  var f64input3 = rand(shape3.asTorchView(), kfloat64)
  echo f64input2 + f64input3
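A sketch of the intended guard, assuming a `sizes` accessor like the one shown earlier and a hypothetical `rawAdd` binding (real broadcasting rules are richer than strict shape equality):

```nim
proc `+`*(a, b: Tensor): Tensor =
  # Simplistic check: require identical shapes; a full version would
  # implement PyTorch-style broadcasting rules instead.
  if a.sizes() != b.sizes():
    raise newException(ValueError, "shape mismatch between operands")
  rawAdd(a, b)   # assumed binding to the unchecked addition
```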
