
tvm's Introduction

Pytorch TVM Extension


Build

Install the latest nightly build of PyTorch.

Then, build this repo:

# Make sure the right llvm-config is in your PATH
python setup.py install

Test

python setup.py test 

Usage

This package transparently hooks into PyTorch's JIT, so the same tooling applies (see @torch.jit.script, torch.jit.trace and graph_for). See below for an example.

import torch
import torch_tvm

torch_tvm.enable()

# The following function will be compiled with TVM
@torch.jit.script
def my_func(a, b, c):
    return a * b + c

To disable the JIT hooks, use torch_tvm.disable().

Code Layout

  • register.cpp: Sets up pybind bindings and invokes the registration of a TVM backend.
  • compiler.{h,cpp}: Main logic to compile a PyTorch JIT graph with TVM.
  • operators.{h,cpp}: Location of mapping from JIT IR to TVM operators.


FAQ

How do I configure TVM compilation?

All options are available as keyword arguments in the enable function exposed by torch_tvm. The optimization level, device type, device and host compilation targets are all exposed directly from TVM.

torch_tvm.enable(
   opt_level=3,
   device_type="cpu",
   device="llvm",
   host="llvm")

How do I register a new TVM operator?

First, ensure the operator is registered with Relay.

Then, register a map from PyTorch symbols to a Relay CallNode with RegisterTVMOperator. This can be done in any compilation unit provided it is linked into the final torch_tvm library. See torch_tvm/operators.cpp for examples.

RegisterTVMOperator reg_relu({
    {Symbol::fromQualString("aten::relu"),
     [](Node* node, tvm::Array<tvm::relay::Expr> inputs) {
       auto op = tvm::relay::Op::Get("nn.relu");
       return tvm::relay::CallNode::make(op, inputs, tvm::Attrs(), {});
     }},
});

How do I extract the Relay expression associated with a PyTorch Graph?

If the PyTorch function can be fully converted to Relay, it is possible to extract the expression itself using torch_tvm.to_relay(func, inputs). Example inputs must be passed in to calculate type information.

def add(a, b, c):
    return a + b + c

# via tracing
relay_graph = torch_tvm.to_relay(add, inputs)

@torch.jit.script
def mul(a, b, c):
    return a * b * c

# via script
relay_graph = torch_tvm.to_relay(mul, inputs)

Note that not all functions can be converted to Relay in their entirety; attempting expression extraction on such a function raises an exception. To work around this, refactor the function so that the convertible part is isolated.
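As a TVM-independent sketch of such a refactor (function names here are hypothetical): separate the arithmetic that Relay can represent from the Python-side behavior it cannot, and extract only the former.

```python
# Original: mixes Relay-convertible arithmetic with Python-side
# behavior (printing stands in for anything Relay cannot express).
def original(a, b, c):
    result = a * b + c
    print("computed")
    return result

# Refactored: `core` contains only the convertible arithmetic and could
# be passed to torch_tvm.to_relay on its own; `wrapper` keeps the
# Python-side behavior and stays in eager mode.
def core(a, b, c):
    return a * b + c

def wrapper(a, b, c):
    result = core(a, b, c)
    print("computed")
    return result
```

The two versions compute the same value; only core needs to be fully convertible.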

v0.1 Roadmap

Below, in order, is a prioritized list of tasks for this repository.

  • End to end build and runtime
  • Operator translation
    • Add
    • Multiply
    • Convolution
    • BatchNorm
    • Relu
    • AveragePool
    • MaxPool
    • Linear
    • Reshape
    • AdaptiveAveragePool
  • Tooling
    • Model coverage checks
    • Benchmarks for master
  • User exposed configurations
    • Backend selection (CPU/Cuda/OpenCL)
    • Optimization level
  • Custom TVM operator registration
    • Enable Python/C++ mechanism to use custom TVM operators and schedules
    • Enable Relay op registration
  • Bail-out mechanism
    • When TVM cannot compile a subgraph, invoke PyTorch JIT fallback
  • Extract Relay expression
  • Enable exposure of ops registered in eager mode under torch.ops.tvm.*

v0.2 Plan

  • View support
  • Zero copy set_input
  • Subsystem integration
    • Threadpool integration
    • Allocator integration
      • tvm/include/tvm/runtime/device_api.h
    • Distributed communication
  • IR integration
    • Control flow
    • Aliasing
  • Operators
    • transpose
    • chunk
    • repeat
    • cat
    • unsqueeze
    • slice
    • softmax
    • bmm
    • layernorm

tvm's People

Contributors

bddppq, bertmaher, bwasti, ilia-cher, jroesch, kimishpatel, krovatkin, lly-zero-one, wanchaol


tvm's Issues

tests error

I am using Ubuntu 16.04, PyTorch 1.4.0, and the latest master of pytorch-tvm, and it builds correctly. However, when running the tests, importing the library fails:

running pytest
running egg_info
writing dependency_links to torch_tvm.egg-info/dependency_links.txt
writing torch_tvm.egg-info/PKG-INFO
writing top-level names to torch_tvm.egg-info/top_level.txt
reading manifest file 'torch_tvm.egg-info/SOURCES.txt'
writing manifest file 'torch_tvm.egg-info/SOURCES.txt'
running build_ext
running cmake_build
[  8%] Built target tvm_runtime
[ 81%] Built target tvm
[ 82%] Built target tvm_topi
[ 94%] Built target nnvm_compiler
[100%] Built target _torch_tvm
copying build/lib.linux-x86_64-3.5/torch_tvm/_torch_tvm.cpython-35m-x86_64-linux-gnu.so -> torch_tvm
=============================================================================================== test session starts ================================================================================================
platform linux -- Python 3.5.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0
rootdir: /home/username/git/torch-tvm, inifile: setup.cfg
collected 0 items / 3 errors

====================================================================================================== ERRORS ======================================================================================================
________________________________________________________________________________________ ERROR collecting test/test_core.py ________________________________________________________________________________________
ImportError while importing test module '/home/username/git/torch-tvm/test/test_core.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_core.py:2: in <module>
    from test.util import TVMTest
test/util.py:12: in <module>
    import torch_tvm
torch_tvm/__init__.py:9: in <module>
    from ._torch_tvm import *
E   ImportError: /home/username/git/torch-tvm/torch_tvm/_torch_tvm.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTIN3tvm4NodeE
_______________________________________________________________________________________ ERROR collecting test/test_models.py _______________________________________________________________________________________
ImportError while importing test module '/home/username/git/torch-tvm/test/test_models.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_models.py:5: in <module>
    from skimage import io, transform
/usr/local/lib/python3.5/dist-packages/skimage/__init__.py:158: in <module>
    from .util.dtype import *
/usr/local/lib/python3.5/dist-packages/skimage/util/__init__.py:7: in <module>
    from .arraycrop import crop
/usr/local/lib/python3.5/dist-packages/skimage/util/arraycrop.py:8: in <module>
    from numpy.lib.arraypad import _validate_lengths
E   ImportError: cannot import name '_validate_lengths'
_____________________________________________________________________________________ ERROR collecting test/test_operators.py ______________________________________________________________________________________
ImportError while importing test module '/home/username/git/torch-tvm/test/test_operators.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_operators.py:2: in <module>
    from test.util import TVMTest
test/util.py:12: in <module>
    import torch_tvm
torch_tvm/__init__.py:9: in <module>
    from ._torch_tvm import *
E   ImportError: /home/username/git/torch-tvm/torch_tvm/_torch_tvm.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTIN3tvm4NodeE
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================= 3 error in 0.39 seconds ============================================================================================

This is a different undefined symbol than other issues (#149, #93).

Error while installing torch_tvm: Pybind11

Hello,

I'm trying to install torch_tvm but I got stuck with this error :

error: ‘PURE’ is not a member of ‘torch::jit::AliasAnalysisKind {aka c10::AliasAnalysisKind}’
options.setAliasAnalysis(AliasAnalysisKind::PURE);

Thanks

prim::ListConstruct fusion error

Hi,

I'm trying to compile maskrcnn-benchmark with pytorch/tvm (directly from the master branch), and even though I know there are currently several operators that are yet to be implemented, I wanted to advance as much as possible before they are made available, so I'm reporting this in case the issue is unrelated.

I'm starting from the following PR from @t-vi that implements several missing pieces needed for JIT in maskrcnn-benchmark: facebookresearch/maskrcnn-benchmark. But after adding torch_tvm.enable() to trace_model.py, it throws an error during trace checking:

RuntimeError: input->node()->kind() == prim::Constant INTERNAL ASSERT FAILED at ../torch/csrc/jit/passes/graph_fuser.cpp:359, please report a bug to PyTorch. (mergeNodeIntoGroup at ../torch/csrc/jit/passes/graph_fuser.cpp:359)

I've found that the offending Node kind() is prim::ListConstruct, but I don't know how to proceed any further. My bet is some issue with BoxList (the return type of the model), but I'm not sure how to confirm this point. Any suggestion is sincerely welcome.

Thanks in advance.

Supported model list?

Could you guys share the supported model list? I saw ResNet was supported; I am wondering whether Inception/MobileNet/VGG/DenseNet/SSD/Mask-RCNN are supported.

Bug fix in setting inputs.

Currently, inputs are set up via a vector into which Value* pointers are pushed, and set_input is called on the graph to set the input at a particular index. This assumes that index i of the graph inputs corresponds to index i of the vector. But since the graph is dumped to JSON, then loaded and compiled, this coupling may not hold.
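A standalone sketch of the idea behind such a fix (names are hypothetical, not the actual torch_tvm code): if inputs are bound by a stable key rather than by position, a JSON dump/reload that reorders graph inputs no longer breaks the pairing.

```python
# Values recorded at compile time, keyed by each input's name rather
# than by its position in a vector.
values = {"a": 1.0, "b": 2.0, "c": 3.0}

def set_inputs(graph_input_names, values):
    """Bind each graph input to its value by name, independent of order."""
    return [values[name] for name in graph_input_names]

# Both the original order and a hypothetical post-round-trip
# reordering bind correctly.
original_order = set_inputs(["a", "b", "c"], values)
reloaded_order = set_inputs(["c", "a", "b"], values)
```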

Converting torch.RNN to TVM Relay

Hi all,

I'm trying to pass a torch model I trained to TVM. It's an RNN that takes a PackedSequence as input. torch_tvm.to_relay only seems to accept tensors as input, so I'm getting the following error.

RuntimeError: Tracer cannot infer type of (PackedSequence(...

Any idea how I can work around this? Let me know if I'm using the TVM bindings the wrong way.

Thanks!
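One possible workaround, offered here only as an untested sketch: trace the RNN on a padded dense tensor (via pad_sequence) instead of a PackedSequence, since the tracer can infer types for plain tensors. Note that padded positions contribute to the output, so results at those steps differ from the PackedSequence path.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

# Sketch: pad variable-length sequences into one dense tensor so that
# torch.jit.trace only ever sees tensors.
rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)

seqs = [torch.randn(3, 4), torch.randn(2, 4)]  # lengths 3 and 2
padded = pad_sequence(seqs, batch_first=True)  # shape (2, 3, 4)

traced = torch.jit.trace(rnn, padded)
out, hidden = traced(padded)  # out: (2, 3, 8); hidden: (1, 2, 8)
```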

Can't Build the Project on MacOs

I am trying to build the project, but it fails with errors in the llvm-project directory. Here are the steps I am following to build the project on my Mac laptop:

# Follow steps to install pytorch on: https://github.com/pytorch/pytorch#from-source
# in torch/tvm folder running command to build
python setup.py develop

And this is my environment setup of the tools:

(pytorch) [11:23:56 AM] /Users/serhaty/local/tvm
 $ cmake --version
cmake version 3.14.0

CMake suite maintained and supported by Kitware (kitware.com/cmake).
(pytorch) [11:24:07 AM] /Users/serhaty/local/tvm
 $ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.8)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
(pytorch) [11:24:12 AM] /Users/serhaty/local/tvm
 $ llvm-config --version
10.0.0svn
(pytorch) [11:24:20 AM] /Users/serhaty/local/tvm
 $ clang --version
clang version 10.0.0 (https://github.com/llvm/llvm-project.git 77297f0761d2009e25d5d709cdcb041229f3493c)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Users/serhaty/local/llvm-project/build/bin
(pytorch) [11:24:24 AM] /Users/serhaty/local/tvm
 $ python --version
Python 3.7.4

Here is the output from the above command:

$ python setup.py develop
running develop
running build_ext
running cmake_build
[  8%] Built target tvm_runtime
[  8%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_amdgpu.cc.o
[  8%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_arm.cc.o
[  9%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_llvm.cc.o
[  9%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_cpu.cc.o
[ 10%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_x86_64.cc.o
[ 10%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/codegen_nvptx.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/llvm_module.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/intrin_rule_rocm.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/llvm_common.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/codegen/llvm/intrin_rule_llvm.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/c_dsl_api.cc.o
[ 11%] Building CXX object tvm/CMakeFiles/tvm.dir/src/contrib/hybrid/codegen_hybrid.cc.o
[ 12%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/c_runtime_api.cc.o
[ 12%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/builtin_fp16.cc.o
[ 13%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/cpu_device_api.cc.o
[ 13%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/dso_module.cc.o
[ 13%] Building CXX object tvm/CMakeFiles/tvm.dir/src/runtime/file_util.cc.o
In file included from /Users/serhaty/local/tvm/tvm/src/codegen/llvm/llvm_common.cc:30:
In file included from /Users/serhaty/local/tvm/tvm/src/codegen/llvm/llvm_common.h:29:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ExecutionEngine/MCJIT.h:17:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h:18:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/ArrayRef.h:12:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/Hashing.h:48:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/Support/Host.h:16:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/StringMap.h:16:
In file included from /Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/StringRef.h:12:
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:555:49: error: no template named 'index_sequence' in namespace 'std'
  template <size_t... Ns> value_type deref(std::index_sequence<Ns...>) const {
                                           ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:560:36: error: no template named 'index_sequence' in namespace 'std'
  decltype(iterators) tup_inc(std::index_sequence<Ns...>) const {
                              ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:565:36: error: no template named 'index_sequence' in namespace 'std'
  decltype(iterators) tup_dec(std::index_sequence<Ns...>) const {
                              ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:572:46: error: no member named 'index_sequence_for' in namespace 'std'
  value_type operator*() { return deref(std::index_sequence_for<Iters...>{}); }
                                        ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:572:65: error: 'Iters' does not refer to a value
  value_type operator*() { return deref(std::index_sequence_for<Iters...>{}); }
                                                                ^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:547:41: note: declared here
template <typename ZipType, typename... Iters>
                                        ^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:575:23: error: no member named 'index_sequence_for' in namespace 'std'
    return deref(std::index_sequence_for<Iters...>{});
                 ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:575:42: error: 'Iters' does not refer to a value
    return deref(std::index_sequence_for<Iters...>{});
                                         ^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:547:41: note: declared here
template <typename ZipType, typename... Iters>
                                        ^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:579:30: error: no member named 'index_sequence_for' in namespace 'std'
    iterators = tup_inc(std::index_sequence_for<Iters...>{});
                        ~~~~~^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:579:49: error: 'Iters' does not refer to a value
    iterators = tup_inc(std::index_sequence_for<Iters...>{});
                                                ^
/Users/serhaty/local/llvm-project/llvm/include/llvm/ADT/STLExtras.h:547:41: note: declared here
template <typename ZipType, typename... Iters>
...

RuntimeError: _Map_base::at

python3 test/benchmarks.py

output:

Warming JIT up with 10 runs
Running JIT 100 times
Done benchmarking JIT
Tracing model with TVM
Warming TVM up with 10 iters
Traceback (most recent call last):

  File "test/benchmarks.py", line 82, in <module>
    run_benchmark(csv_file)

  File "test/benchmarks.py", line 75, in run_benchmark
    benchmark(model, csv_file)

  File "test/benchmarks.py", line 44, in benchmark
    _ = trace_tvm(*inputs)

  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 539, in __call__
    result = self.forward(*input, **kwargs)

RuntimeError: _Map_base::at
The above operation failed in interpreter.
Traceback (most recent call last):

packages:

torch: 1.4.0a0+6f62c31
llvm-config --version: 6.0.0

Make build C++ ABI aware

Detect the C++ ABI of LLVM and PyTorch libs, ensure they are compatible and then build torch_tvm accordingly.
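A minimal sketch of the detection half (PyTorch side only; torch._C._GLIBCXX_USE_CXX11_ABI is a real but private attribute, so treat its availability as an assumption about current builds): query the ABI PyTorch was compiled with and derive the matching compiler flag to pass to the LLVM/torch_tvm build.

```python
import torch

# Query the C++ ABI PyTorch was built with (private attribute, may
# change between releases) and construct the matching GCC flag.
abi = int(torch._C._GLIBCXX_USE_CXX11_ABI)
cxx_flag = "-D_GLIBCXX_USE_CXX11_ABI={}".format(abi)
print(cxx_flag)
```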

Saving the compiled graph

Hi,

I have two questions:

  • How to save the compiled graph? Does saving the compiled graph save the tvm::CompilationGroup symbols along with the compiled subgraphs? I hope it will not re-compile while loading the saved graph/ScriptModule.
  • I am facing an issue in the basic test with the HEAD. Any ideas what is going wrong?

Appreciate your help

import torch
import torch_tvm

shape = 8
x = torch.rand(shape)
y = torch.rand(shape)
z = torch.rand(shape)

def add(a, b, c):
    return a + b + c

inputs = [x, y, z]

torch_tvm.enable()

trace_tvm = torch.jit.trace(add, inputs)

relay_graph = torch_tvm.to_relay(trace_tvm, inputs)

print(relay_graph)

Traceback (most recent call last):

File "basic_tvm.py", line 18, in
relay_graph = torch_tvm.to_relay(trace_tvm, inputs)

File "/root/inferentia/tvm/torch_tvm/init.py", line 18, in to_relay
handle = _push_relay_expr(pt_func.graph_for(*inputs), inputs)

RuntimeError: This program cannot be exported as a single Relay expression. (operator() at /root/inferentia/tvm/torch_tvm/register.cpp:53)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0x6c (0x7f4bed67ba4c in /root/inferentia/tvm/env/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x8f8bf (0x7f4bdd4fd8bf in /root/inferentia/tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x86cbb (0x7f4bdd4f4cbb in /root/inferentia/tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so)
frame #3: python() [0x50abc5]

frame #5: python() [0x509ce8]
frame #6: python() [0x50aa1d]
frame #8: python() [0x5081d5]
frame #10: python() [0x635082]
frame #15: __libc_start_main + 0xe7 (0x7f4bf243fb97 in /lib/x86_64-linux-gnu/libc.so.6)


torch.jit.trace() error: Tracer cannot infer type of BoxList

Hi,

I'm trying to run maskrcnn_benchmark on TVM using pytorch/tvm (specifically e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x_caffe2). During torch.jit.trace() invocation, it throws the following error: "Tracer cannot infer type of BoxList". I'm aware that BoxList is an specific class of maskrcnn_benchmark, and I assume that I'd need to implement custom ops for symbolic_mult_label_nms() and symbolic_roi_align() (where I think BoxLists are used, although I'm not 100% sure). Could you please provide me further directions on this, or tell me whether there is any other approach to make the Tracer process BoxList objects?

Thanks in advance.

llvm version

hi, which version of llvm has to be installed in ubuntu 18.04?

Problems about installing torch_tvm ?

Hi,
When I clone this repo to my local folder and run 'python setup.py install', I get the following errors:

running install
running build
running build_py
copying torch_tvm/version.py -> build/lib.macosx-10.9-x86_64-3.7/torch_tvm
running egg_info
writing torch_tvm.egg-info/PKG-INFO
writing dependency_links to torch_tvm.egg-info/dependency_links.txt
writing top-level names to torch_tvm.egg-info/top_level.txt
reading manifest file 'torch_tvm.egg-info/SOURCES.txt'
writing manifest file 'torch_tvm.egg-info/SOURCES.txt'
running build_ext
running cmake_build
make: *** No targets specified and no makefile found. Stop.
Traceback (most recent call last):
File "setup.py", line 273, in
url='https://github.com/pytorch/tvm',
File "/Users/baiyang/miniconda3/lib/python3.7/site-packages/setuptools/init.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 203, in run
setuptools.command.install.install.run(self)
File "/Users/baiyang/miniconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 65, in run
orig.install.run(self)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 187, in run
self.run_command('cmake_build')
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/baiyang/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 176, in run
self._run_build()
File "setup.py", line 165, in _run_build
subprocess.check_call(build_args)
File "/Users/baiyang/miniconda3/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '.', '--', '-j', '12']' returned non-zero exit status 2.

Can you help me solve this problem? Thanks a lot.

error when importing torch_tvm

I have installed PyTorch 1.4.0a0 from source, and built pytorch-tvm with the CMake flag SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=0"), but it still fails when I import torch_tvm.

This is the error information:

import torch
print(torch.__version__)
1.4.0a0+1f158ad
import torch_tvm
Traceback (most recent call last):

File "", line 1, in

File "/home/lixiuhong/anaconda3/envs/pytorch_tvm/lib/python3.7/site-packages/torch_tvm/init.py", line 9, in
from ._torch_tvm import *

ImportError: /home/lixiuhong/anaconda3/envs/pytorch_tvm/lib/python3.7/site-packages/torch_tvm/_torch_tvm.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN3tvm5relay2Op3GetERKSs

Trouble installing torch_tvm. Error "recipe for target 'all' failed"

Pre-installation, I have installed llvm and cmake, both of which seem to be working fine. Not sure if this is relevant, but I also have a separate install of tvm in the same environment, which also works fine.

Whilst attempting to install torch_tvm, the error we keep getting is attached as a screenshot (not reproduced here).

I have already checked that the path to llvm-config (running llvm-9) is correct, and I have tried both a nightly build of PyTorch and a build from source; neither works.
We have also confirmed cmake meets the requirements (3.14.0).

Any help would be greatly appreciated.

Error installing torch_tvm: JIT

Pre-installation, I have installed llvm and cmake, both of which seem to be working fine.
Whilst attempting to install torch_tvm, the error we keep getting is attached as a screenshot. Please note that I am trying to install torch_tvm against PyTorch 1.0.0 built from source.

Needs LLVM 8+ but doesn't check

Hi,

I get a build failure with the default LLVM (7). Apparently toSPFlags is LLVM 8+.

codegen_llvm.cc: In member function ‘void tvm::codegen::CodeGenLLVM::AddDebugInformation(llvm::Function*)’:

codegen_llvm.cc:214:27: error: ‘toSPFlags’ is not a member of ‘llvm::DISubprogram’
       llvm::DISubprogram::toSPFlags(true, true, true)

Quite likely, one would also want to change the detection so it does not attempt LLVM 7 and earlier.

Best regards

Thomas

P.S.: I'd be happy to send a PR.

Cannot Import torch_tvm

I have installed the torch nightly build (torch-1.3.0.dev20190819-cp36-cp36m-linux_x86_64.whl) and successfully built torch_tvm.

However, when I try to import torch_tvm I get the following error:

>>> import torch_tvm
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hummingbird/.local/lib/python3.6/site-packages/torch_tvm/__init__.py", line 9, in <module>
    from ._torch_tvm import *
ImportError: /home/hummingbird/.local/lib/python3.6/site-packages/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit16SubgraphRewriter22RegisterRewritePatternERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES9_

Do I need to add anything to the library path?

No significant change in iters/sec while comparing cpu vs gpu performance

I have installed torch_tvm with cuda/opencl support by enabling the following options:
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L32
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L129
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L132

Trying to compare the cpu vs gpu performance by running the following test: https://github.com/pytorch/tvm/blob/master/test/benchmarks.py

  • CPU version:
$ CUDA_VISIBLE_DEVICES='' PYTHONPATH=../:$PYTHONPATH python3 benchmarks.py 

Execution Log:

root@ccf26f0f9541:/opt/work/tvm/test# CUDA_VISIBLE_DEVICES='' PYTHONPATH=../:$PYTHONPATH python3 benchmarks.py 
Tracing model with JIT
Warming JIT up with 10 runs
Running JIT 10 times
Done benchmarking JIT
Tracing model with TVM
WARNING: reshape with -1 as the first value has known incompatibility with PyTorch semantics.
Cannot find config for target=llvm -mcpu=core-avx2, workload=('dense', (1, 512, 'float32'), (125, 512, 8, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.
[08:58:08] /opt/work/tvm/tvm/src/pass/vectorize_loop.cc:389: Detect vector condition in Vectorized Loop, scalarizing...
[08:58:08] /opt/work/tvm/tvm/src/pass/vectorize_loop.cc:389: Detect vector condition in Vectorized Loop, scalarizing...
[08:58:08] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (8 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (4 - (ax0.ax1.outer.fused.ax2.fused/56))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (4 - (ax0.ax1.outer.fused.ax2.fused/56))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (8 - (ax0.ax1.outer.fused.ax2.fused/112))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (2 - (ax0.ax1.outer.fused.ax2.outer.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.outer.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.outer.fused/4))) + 1) >= 0), when generating the post doubt loop
[08:58:09] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((1 - (7 - ((ax0.ax1.outer.fused.ax2.outer.fused % 4)*2))) + 1) >= 0), when generating the post doubt loop
/usr/local/lib/python3.5/dist-packages/torch/jit/__init__.py:1030: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 255] (-0.710386335849762 vs. -0.7103500366210938) and 5 other locations (0.00%)
  check_tolerance, _force_outplace, True, _module_class)
Warming TVM up with 10 iters
Running TVM 10 times
Done benchmarking TVM, which compiled 100.00% of compute
JIT: 39.134256974191366 iter/s
TVM: 62.80919757107452 iter/s
root@ccf26f0f9541:/opt/work/tvm/test# 
  • GPU version:
    Edit line 39 of benchmarks.py to torch_tvm.enable(opt_level=3, device_type='cuda')
$ CUDA_VISIBLE_DEVICES='0' PYTHONPATH=../:$PYTHONPATH python3 benchmarks.py 

Execution Log:

root@ccf26f0f9541:/opt/work/tvm/test# CUDA_VISIBLE_DEVICES='0' PYTHONPATH=../:$PYTHONPATH python3 benchmarks.py 
Tracing model with JIT
Warming JIT up with 10 runs
Running JIT 10 times
Done benchmarking JIT
Tracing model with TVM
WARNING: reshape with -1 as the first value has known incompatibility with PyTorch semantics.
Cannot find config for target=llvm -mcpu=core-avx2, workload=('dense', (1, 512, 'float32'), (125, 512, 8, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.
[08:58:43] /opt/work/tvm/tvm/src/pass/vectorize_loop.cc:389: Detect vector condition in Vectorized Loop, scalarizing...
[08:58:43] /opt/work/tvm/tvm/src/pass/vectorize_loop.cc:389: Detect vector condition in Vectorized Loop, scalarizing...
[08:58:43] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:43] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (64 - (ax0.ax1.outer.fused.ax2.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.fused/14))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (8 - (ax0.ax1.outer.fused.ax2.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (4 - (ax0.ax1.outer.fused.ax2.fused/56))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (4 - (ax0.ax1.outer.fused.ax2.fused/56))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (8 - (ax0.ax1.outer.fused.ax2.fused/112))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (2 - (ax0.ax1.outer.fused.ax2.outer.fused/28))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (16 - (ax0.ax1.outer.fused.ax2.outer.fused/7))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((0 - (32 - (ax0.ax1.outer.fused.ax2.outer.fused/4))) + 1) >= 0), when generating the post doubt loop
[08:58:44] /opt/work/tvm/tvm/src/pass/loop_partition.cc:550: Cannot prove: (((1 - (7 - ((ax0.ax1.outer.fused.ax2.outer.fused % 4)*2))) + 1) >= 0), when generating the post doubt loop
/usr/local/lib/python3.5/dist-packages/torch/jit/__init__.py:1030: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 255] (-0.710386335849762 vs. -0.7103500366210938) and 5 other locations (0.00%)
  check_tolerance, _force_outplace, True, _module_class)
Warming TVM up with 10 iters
Running TVM 10 times
Done benchmarking TVM, which compiled 100.00% of compute
JIT: 39.478923510188096 iter/s
TVM: 64.52328684937197 iter/s
root@ccf26f0f9541:/opt/work/tvm/test# 

As seen above, there is no significant difference in iter/s between the two versions:
CPU version: 62.80919757107452 iter/s
GPU version: 64.52328684937197 iter/s

If I check the GPU memory usage with the nvidia-smi command, the GPU is, as expected, idle.
Is there any other configuration necessary to enable the GPU backend?

(Apart from setting set(USE_CUDA ON), set(USE_CUDNN ON), and set(USE_CUBLAS ON) in https://github.com/dmlc/tvm/blob/master/cmake/config.cmake,
and setting torch_tvm.enable(opt_level=3, device_type='cuda') in https://github.com/pytorch/tvm/blob/master/test/benchmarks.py.)

Error when installing TVM on Raspberry (PyTorch 1.1)

This is the error I get, including some preceding output. The build makes it to about 95%. Does anyone have an idea what is wrong?

/usr/include/c++/8/bits/range_access.h:87:5: note: 'std::begin'
begin(_Tp (&__arr)[_Nm])
^~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:201:43: error: 'end' was not declared in this scope
for (const auto& elem : val.toIntList()) {
^
/home/pi/code/tvm/torch_tvm/compiler.cpp:201:43: note: suggested alternatives:
In file included from /usr/include/c++/8/string:51,
from /usr/include/c++/8/bits/locale_classes.h:40,
from /usr/include/c++/8/bits/ios_base.h:41,
from /usr/include/c++/8/ios:42,
from /usr/include/c++/8/istream:38,
from /usr/include/c++/8/sstream:38,
from /usr/local/lib/python2.7/dist-packages/torch/include/c10/macros/Macros.h:111,
from /usr/local/lib/python2.7/dist-packages/torch/include/c10/core/DeviceType.h:8,
from /usr/local/lib/python2.7/dist-packages/torch/include/c10/core/Device.h:3,
from /usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/Tensor.h:3,
from /usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/TensorMethods.h:3,
from /usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/jit_type.h:3,
from /usr/local/lib/python2.7/dist-packages/torch/include/torch/csrc/jit/argument_spec.h:3,
from /home/pi/code/tvm/torch_tvm/compiler.h:3,
from /home/pi/code/tvm/torch_tvm/compiler.cpp:1:
/usr/include/c++/8/bits/range_access.h:97:5: note: 'std::end'
end(_Tp (&__arr)[_Nm])
^~~
In file included from /usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/jit_type.h:6,
from /usr/local/lib/python2.7/dist-packages/torch/include/torch/csrc/jit/argument_spec.h:3,
from /home/pi/code/tvm/torch_tvm/compiler.h:3,
from /home/pi/code/tvm/torch_tvm/compiler.cpp:1:
/usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/aten_interned_strings.h:776:9: note: 'c10::attr::end'
(attr, end)
^~~
/usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/interned_strings.h:322:35: note: in definition of macro 'DEFINE_SYMBOL'
namespace ns { constexpr Symbol s(static_cast<unique_t>(keys::ns####s)); }
^
/usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/interned_strings.h:159:3: note: in expansion of macro 'FORALL_ATTR_BASE_SYMBOLS'
FORALL_ATTR_BASE_SYMBOLS(
)
^~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python2.7/dist-packages/torch/include/ATen/core/interned_strings.h:323:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
^~~~~~~~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:209:3: error: 'TORCH_CHECK' was not declared in this scope
TORCH_CHECK(
^~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:209:3: note: suggested alternative: 'AT_CHECK'
TORCH_CHECK(
^~~~~~~~~~~
AT_CHECK
/home/pi/code/tvm/torch_tvm/compiler.cpp: In static member function 'static tvm::relay::Function TVMCompiler::convertToRelay(std::shared_ptr<torch::jit::Graph>, TVMContext, std::unordered_map<torch::jit::Value*, TVMGraphInputInfo>)':
/home/pi/code/tvm/torch_tvm/compiler.cpp:221:5: error: 'TORCH_INTERNAL_ASSERT' was not declared in this scope
TORCH_INTERNAL_ASSERT(input->isCompleteTensor());
^~~~~~~~~~~~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:319:5: error: 'TORCH_INTERNAL_ASSERT' was not declared in this scope
TORCH_INTERNAL_ASSERT(value_map.find(sg_output) != value_map.end());
^~~~~~~~~~~~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:326:3: error: 'TORCH_CHECK' was not declared in this scope
TORCH_CHECK(
^~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:326:3: note: suggested alternative: 'AT_CHECK'
TORCH_CHECK(
^~~~~~~~~~~
AT_CHECK
/home/pi/code/tvm/torch_tvm/compiler.cpp: In constructor 'TVMCompiler::TVMCompiler(const torch::jit::Node*, int, bool, bool, bool, std::__cxx11::string, std::__cxx11::string, std::__cxx11::string)':
/home/pi/code/tvm/torch_tvm/compiler.cpp:362:3: error: 'TORCH_INTERNAL_ASSERT' was not declared in this scope
TORCH_INTERNAL_ASSERT(pfb);
^~~~~~~~~~~~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp: In member function 'void TVMCompiler::run(torch::jit::Stack&)':
/home/pi/code/tvm/torch_tvm/compiler.cpp:440:5: error: 'TORCH_INTERNAL_ASSERT' was not declared in this scope
TORCH_INTERNAL_ASSERT(pfr);
^~~~~~~~~~~~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:443:9: error: 'TORCH_CHECK' was not declared in this scope
TORCH_CHECK(pfr, "TVM must be compiled with debug runtime. "
^~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:443:9: note: suggested alternative: 'AT_CHECK'
TORCH_CHECK(pfr, "TVM must be compiled with debug runtime. "
^~~~~~~~~~~
AT_CHECK
/home/pi/code/tvm/torch_tvm/compiler.cpp:466:5: error: 'TORCH_CHECK' was not declared in this scope
TORCH_CHECK(
^~~~~~~~~~~
/home/pi/code/tvm/torch_tvm/compiler.cpp:466:5: note: suggested alternative: 'AT_CHECK'
TORCH_CHECK(
^~~~~~~~~~~
AT_CHECK
make[2]: *** [CMakeFiles/_torch_tvm.dir/build.make:284: CMakeFiles/_torch_tvm.dir/torch_tvm/memory_utils.cpp.o] Error 1
/home/pi/code/tvm/torch_tvm/compiler.cpp: In static member function 'static tvm::relay::Expr TVMCompiler::convertToRelay(const c10::IValue&, TVMContext)':
/home/pi/code/tvm/torch_tvm/compiler.cpp:211:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
make[2]: *** [CMakeFiles/_torch_tvm.dir/build.make:63: CMakeFiles/_torch_tvm.dir/torch_tvm/compiler.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:74: CMakeFiles/_torch_tvm.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
Traceback (most recent call last):
File "setup.py", line 273, in <module>
url='https://github.com/pytorch/tvm',
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 203, in run
setuptools.command.install.install.run(self)
File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 65, in run
orig.install.run(self)
File "/usr/lib/python2.7/distutils/command/install.py", line 601, in run
self.run_command('build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
self.run_command(cmd_name)
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 187, in run
self.run_command('cmake_build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 176, in run
self._run_build()
File "setup.py", line 165, in _run_build
subprocess.check_call(build_args)
File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '[u'/usr/local/bin/cmake', u'--build', '.', u'--', u'-j', '4']' returned non-zero exit status 2

"malloc(): memory corruption" when running benchmark.py

Hi, I hit an error when running python test/benchmark.py.

Environment:

  • CentOS 7
  • GCC 6.3
  • LLVM 8.0
  • PyTorch 1.4.0 (built from source)

Error message:

Tracing model with JIT
Warming JIT up with 10 runs
Running JIT 100 times
Done benchmarking JIT
Tracing model with TVM
*** Error in `python': malloc(): memory corruption (fast): 0x00007fb16f942920 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x7ab54)[0x7fb167ccfb54]
/lib64/libc.so.6(+0x7ddf7)[0x7fb167cd2df7]
/lib64/libc.so.6(__libc_malloc+0x4c)[0x7fb167cd510c]
/data1/root/anaconda3/envs/tvm/bin/../lib/libstdc++.so.6(_Znwm+0x15)[0x7fb0d6a934e5]
/data1/root/anaconda3/envs/tvm/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(_ZN3tvm5relay12ConstantNode4makeENS_7runtime7NDArrayE+0x18)[0x7fb0836646e8]
/data1/root/workspace/tvm_trial/pytorch-tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so(+0x34d5a)[0x7fb1278cad5a]
/data1/root/workspace/tvm_trial/pytorch-tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so(+0x38200)[0x7fb1278ce200]
/data1/root/workspace/tvm_trial/pytorch-tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so(+0x3cdd4)[0x7fb1278d2dd4]
/data1/root/workspace/tvm_trial/pytorch-tvm/torch_tvm/_torch_tvm.cpython-36m-x86_64-linux-gnu.so(+0x9904c)[0x7fb12792f04c]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x392a7b6)[0x7fb0bf4717b6]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN5torch3jit16InterpreterState3runERSt6vectorIN3c106IValueESaIS4_EE+0x1c)[0x7fb0bf466a4c]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x38fab71)[0x7fb0bf441b71]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN5torch3jit8Function3runERSt6vectorIN3c106IValueESaIS4_EE+0x63)[0x7fb0bf730df3]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x7871ba)[0x7fb1186701ba]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x787b1d)[0x7fb118670b1d]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x746b7d)[0x7fb11862fb7d]
/root/anaconda3/envs/tvm/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x2832f2)[0x7fb11816c2f2]
python(_PyCFunction_FastCallDict+0x154)[0x7fb16856a334]
python(_PyObject_FastCallDict+0x2bf)[0x7fb16856a74f]
python(_PyObject_Call_Prepend+0x63)[0x7fb16856f173]
python(PyObject_Call+0x3e)[0x7fb16856a13e]
python(+0x16a101)[0x7fb1685c3101]
python(PyObject_Call+0x3e)[0x7fb16856a13e]
python(_PyEval_EvalFrameDefault+0x1ab0)[0x7fb168615d00]
python(+0x191fae)[0x7fb1685eafae]
python(+0x192be6)[0x7fb1685ebbe6]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(PyEval_EvalCodeEx+0x966)[0x7fb1685ecd06]
python(+0x1945e6)[0x7fb1685ed5e6]
python(PyObject_Call+0x3e)[0x7fb16856a13e]
python(_PyEval_EvalFrameDefault+0x1ab0)[0x7fb168615d00]
python(+0x191fae)[0x7fb1685eafae]
python(+0x192be6)[0x7fb1685ebbe6]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(+0x191fae)[0x7fb1685eafae]
python(+0x192be6)[0x7fb1685ebbe6]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(+0x191b76)[0x7fb1685eab76]
python(+0x192b83)[0x7fb1685ebb83]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(+0x191b76)[0x7fb1685eab76]
python(+0x192b83)[0x7fb1685ebb83]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(+0x19296b)[0x7fb1685eb96b]
python(+0x198a65)[0x7fb1685f1a65]
python(_PyEval_EvalFrameDefault+0x30a)[0x7fb16861455a]
python(PyEval_EvalCodeEx+0x329)[0x7fb1685ec6c9]
python(PyEval_EvalCode+0x1c)[0x7fb1685ed45c]
python(+0x214d54)[0x7fb16866dd54]
python(PyRun_FileExFlags+0xa1)[0x7fb16866e151]
python(PyRun_SimpleFileExFlags+0x1c3)[0x7fb16866e353]
python(Py_Main+0x613)[0x7fb168671e43]
python(main+0xee)[0x7fb16853c28e]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7fb167c76c05]
python(+0x1c1fff)[0x7fb16861afff]

Has the project been deprecated?

This project has not been updated in almost three months. Has it been deprecated? Will the planned tasks in the roadmap still be implemented?

Memory misalignment during Tensor conversion

PyTorch applies offsets to data pointers to implement views. This can break the 64-byte alignment requirement on TVM's input tensors, especially when combined with set_input_zero_copy. A temporary solution would be to check the alignment of the data pointer and make an aligned copy if it is not aligned.
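A minimal sketch of that temporary workaround in plain Python, where raw pointers are modeled as integers and the helper names are hypothetical; the real check would operate on at::Tensor data pointers in C++:

```python
# Sketch of the proposed workaround: before handing a data pointer to TVM
# (e.g. via set_input_zero_copy), check its 64-byte alignment and fall back
# to an aligned copy when a PyTorch view offset has broken it.
# is_tvm_aligned / choose_input_strategy are hypothetical helper names.

TVM_ALIGNMENT = 64  # TVM's assumed input-tensor alignment, in bytes


def is_tvm_aligned(data_ptr: int, alignment: int = TVM_ALIGNMENT) -> bool:
    """Return True if the raw data pointer satisfies TVM's alignment."""
    return data_ptr % alignment == 0


def choose_input_strategy(data_ptr: int) -> str:
    """Zero-copy only when aligned; otherwise make an aligned copy."""
    return "zero_copy" if is_tvm_aligned(data_ptr) else "aligned_copy"


# A freshly allocated contiguous tensor is typically 64-byte aligned...
assert choose_input_strategy(0x7F0000000000) == "zero_copy"
# ...but a float32 view with an element offset (e.g. t[1:]) is not.
assert choose_input_strategy(0x7F0000000004) == "aligned_copy"
```

The copy costs a memcpy but keeps TVM's vectorized kernels safe on misaligned views.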

Question on FuseSupportedOps pass

I was playing around with this optimization pass in the main PyTorch project and was curious why control-flow operators are not handled here. What was the issue? I had posted a question on the PyTorch discussion forum about the problem I hit in my experimentation. Would anyone here be able to help with it?

TL;DR: In my experimentation, I found that subgraph_utils in the PyTorch repo doesn't handle merging of nodes when the inputs of a node belong to the outputs of another block's nodes. Is this understanding correct? Any pointers would be highly appreciated :)

Trouble Installing TVM Extension

When I try running python setup.py install to install the TVM extension, I get a subprocess.CalledProcessError:

make: *** No targets specified and no makefile found. Stop.
Traceback (most recent call last):
File "setup.py", line 273, in <module>
url='https://github.com/pytorch/tvm',
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 203, in run
setuptools.command.install.install.run(self)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 65, in run
orig.install.run(self)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 187, in run
self.run_command('cmake_build')
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/ubuntu/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 176, in run
self._run_build()
File "setup.py", line 165, in _run_build
subprocess.check_call(build_args)
File "/home/ubuntu/anaconda3/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/cmake', '--build', '.', '--', '-j', '4']' returned non-zero exit status 2.

I'm running an Ubuntu 18.04 instance with the latest PyTorch nightly build, and I have llvm-8 installed.

Error Building torch_tvm [NGC Container]

I am trying to build torch_tvm inside the PyTorch NGC container [19.08-py3]. However, I am encountering the same error as in #77.

CMakeFiles/_torch_tvm.dir/build.make:218: recipe for target 'CMakeFiles/_torch_tvm.dir/torch_tvm/fusion_pass.cpp.o' failed
make[2]: *** [CMakeFiles/_torch_tvm.dir/torch_tvm/fusion_pass.cpp.o] Error 1
In file included from /tvm/torch_tvm/compiler.h:13:0,
                 from /tvm/torch_tvm/register.cpp:8:
/tvm/torch_tvm/memory_utils.h: In member function ‘void torch_tvm::utils::DLManagedTensorDeleter::operator()(DLManagedTensor*)’:
/tvm/torch_tvm/memory_utils.h:22:24: warning: deleting ‘void*’ is undefined [-Wdelete-incomplete]
       delete dl_tensor.data;
                        ^~~~
CMakeFiles/Makefile2:73: recipe for target 'CMakeFiles/_torch_tvm.dir/all' failed
make[1]: *** [CMakeFiles/_torch_tvm.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 273, in <module>
    url='https://github.com/pytorch/tvm',
  File "/opt/conda/lib/python3.6/site-packages/setuptools/__init__.py", line 145, in setup
    return distutils.core.setup(**attrs)
  File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 203, in run
    setuptools.command.install.install.run(self)
  File "/opt/conda/lib/python3.6/site-packages/setuptools/command/install.py", line 65, in run
    orig.install.run(self)
  File "/opt/conda/lib/python3.6/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.6/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 187, in run
    self.run_command('cmake_build')
  File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 176, in run
    self._run_build()
  File "setup.py", line 165, in _run_build
    subprocess.check_call(build_args)
  File "/opt/conda/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/cmake', '--build', '.', '--', '-j', '12']' returned non-zero exit status 2.

I tried the different methods described here, here, and here, but I haven't had any success.

How can this issue be fixed ?

convert demo needed

Could you please give us an example of how to convert a PyTorch model to a TVM model, for example ResNet-50, so we can follow the guide?
Thank you.

Funny fusion effects - constant as output of compilation group

Hi,

When fusing a somewhat longer elementary pointwise computation, I get

with tvm::CompilationGroup_0 = graph(%0 : Float(*),
      %1 : Float(*)):
  %4 : int = prim::Constant[value=1]()
  %2 : int = prim::Constant[value=1]()
  %3 : Float(*) = aten::add(%0, %1, %2)
  return (%3, %4)

and the second output is then used outside the group, which is not good, as it'll hit errors when fusing the next thing.

Best regards

Thomas

Support the TVM fusing group with one node

More often than not, we need to "test" a single TVM operator, which so far cannot be fused with other nodes. We expect the fusion pass to still create a group for it and allow it to be lowered into TVM.

import torch_tvm error

python setup.py test

output:

========================================================================================================= test session starts =========================================================================================================
platform linux -- Python 3.7.4, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: /tvm, inifile: setup.cfg, testpaths: test
collected 0 items / 3 errors

=============================================================================================================== ERRORS ================================================================================================================
_________________________________________________________________________________________________ ERROR collecting test/test_core.py __________________________________________________________________________________________________
ImportError while importing test module '/tvm/test/test_core.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_core.py:2: in <module>
from test.util import TVMTest
test/util.py:12: in <module>
import torch_tvm
torch_tvm/__init__.py:9: in <module>
from ._torch_tvm import *
E ImportError: /tvm/torch_tvm/_torch_tvm.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit16SubgraphRewriter22RegisterRewritePatternERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES9_
________________________________________________________________________________________________ ERROR collecting test/test_models.py _________________________________________________________________________________________________
ImportError while importing test module '/tvm/test/test_models.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_models.py:6: in <module>
import torch_tvm
torch_tvm/__init__.py:9: in <module>
from ._torch_tvm import *
E ImportError: /tvm/torch_tvm/_torch_tvm.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit16SubgraphRewriter22RegisterRewritePatternERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES9_
_______________________________________________________________________________________________ ERROR collecting test/test_operators.py _______________________________________________________________________________________________
ImportError while importing test module '/tvm/test/test_operators.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/test_operators.py:2: in <module>
from test.util import TVMTest
test/util.py:12: in <module>
import torch_tvm
torch_tvm/__init__.py:9: in <module>
from ._torch_tvm import *
E ImportError: /tvm/torch_tvm/_torch_tvm.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit16SubgraphRewriter22RegisterRewritePatternERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES9_
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
========================================================================================================== 3 errors in 0.76s ==========================================================================================================
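One way to start debugging an undefined-symbol error like this is to demangle the symbol; a sketch, assuming binutils' c++filt is installed (note the leading underscore is part of the mangled name):

```shell
# Demangle the unresolved symbol to see which C++ API is missing.
echo '_ZN5torch3jit16SubgraphRewriter22RegisterRewritePatternERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES9_' \
  | c++filt
```

The __cxx11 in the mangled name shows the extension was built against the new libstdc++ string ABI; a plausible cause of this error is a libtorch built with a different ABI or PyTorch version, so rebuilding against the installed PyTorch nightly may resolve it.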

The concat seems to be not lowerable

For the following subgraph:

graph(%0 : Float(2048, 1024),
      %1 : Float(2048),
      %2 : Tensor[]):
  %3 : int = prim::Constant[value=0]()
  %input108.1 : Tensor = aten::cat(%2, %3) # code/new_my_qqq.py:1486:14
  %5 : Tensor = aten::linear(%input108.1, %0, %1)
  return (%5)

%2 is a list of tensors; however, when we convert the node, we cannot see all the tensors in the list, while TVM needs to know them. Based on a discussion with @wanchaol, we may need to add a fusion step that explicitly extracts all tensors in the list. cc @bwasti
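A plain-Python sketch of such an extraction step, with a hypothetical data model and helper name (the real pass would operate on JIT graph inputs, not strings):

```python
# Sketch of the proposed "extract tensors from the list" fusion step.
# A Tensor[] input (like the aten::cat operand %2) hides its element
# tensors from the converter, but TVM needs each one with its own
# shape and type, so the pass flattens list inputs into individual inputs.

def flatten_list_inputs(inputs):
    """Replace every list-valued input by its individual elements."""
    flat = []
    for inp in inputs:
        if isinstance(inp, list):   # a Tensor[] input such as %2
            flat.extend(inp)        # expose each element as its own input
        else:
            flat.append(inp)
    return flat


# %0 and %1 are plain tensors; %2 is a list of two tensors.
assert flatten_list_inputs(["%0", "%1", ["%2a", "%2b"]]) == ["%0", "%1", "%2a", "%2b"]
```

After flattening, the converter sees every tensor directly and can rebuild the list (e.g. as a Relay tuple) on the TVM side.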

"setup.py install --cmake" errors for type_traits, hashtable_policy.h and unordered_map.h

Hi,

Thanks in advance for your advice on the "python setup.py install --cmake" errors below.

[ 94%] Built target nnvm_compiler
/usr/include/c++/5/type_traits:148:38: **error**: ‘value’ is not a member of ‘std::__and_<std::__is_fast_hash<std::hash<c10::ScalarType> >, std::__detail::__is_noexcept_hash<c10::ScalarType, std::hash<c10::ScalarType> > >’
     : public integral_constant<bool, !_Pp::value>

torch-tvm/tvm/torch_tvm/compiler.cpp:93:73:   required from here
/usr/include/c++/5/bits/hashtable_policy.h:85:34: **error**: no match for call to ‘(const std::hash<c10::ScalarType>) (const c10::ScalarType&)’
  noexcept(declval<const _Hash&>()(declval<const _Key&>()))>

/usr/include/c++/5/bits/unordered_map.h:649:7: **error**: ‘value’ is not a member of ‘std::__not_<std::__and_<std::__is_fast_hash<std::hash<c10::ScalarType> >, std::__detail::__is_noexcept_hash<c10::ScalarType, std::hash<c10::ScalarType> > > >’
       equal_range(const key_type& __x) const

system and setting information
Ubuntu 18.04
gcc 5.5.0
torch.__version__ '1.4.0.dev20191024+cpu'
Python 3.5.7

CMAKE setting
-- The C compiler identification is GNU 5.5.0
-- The CXX compiler identification is GNU 5.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
Using pytorch dir /opt/venv/usr-python/python3.5/torch-tvm/lib/python3.5/site-packages/torch
Using tvm dir /N/share/project/pose/MobilePose-pytorch/torch-tvm/tvm/tvm
-- Use llvm-config=/usr/bin/llvm-config
-- /usr/lib/llvm-6.0/include
-- Found LLVM_INCLUDE_DIRS=/usr/lib/llvm-6.0/include
-- Found LLVM_DEFINITIONS= -DNDEBUG -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
-- Found TVM_LLVM_VERSION=60
-- Found PythonInterp: /opt/venv/usr-python/python3.5/torch-tvm/bin/python (found version "3.5.7") 
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so
-- pybind11 v2.3.dev0
-- Build with RPC support...
-- Build with Graph runtime support...
-- Build VTA runtime with target: sim

-- Use llvm-config=/usr/bin/llvm-config
-- /usr/lib/llvm-6.0/include
-- Found LLVM_INCLUDE_DIRS=/usr/lib/llvm-6.0/include
-- Found LLVM_DEFINITIONS= -DNDEBUG -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
-- Found TVM_LLVM_VERSION=60
-- Build with LLVM 
-- Set TVM_LLVM_VERSION=60
-- Build with contrib.hybriddump
-- Performing Test SUPPORT_CXX11
-- Performing Test SUPPORT_CXX11 - Success
-- Build with c++11
-- Build with thread support...
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE  
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /N/share/project/pose/MobilePose-pytorch/torch-tvm/tvm/build
