onnx-docker's Introduction

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.
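
For a concrete feel for the computation graph model, here is a minimal sketch that builds and checks a one-node model with the official onnx Python helpers (the tensor names, shapes, and file name are arbitrary choices for illustration):

import onnx
from onnx import TensorProto, helper

# A single-node graph: Y = Relu(X), with a 1x4 float32 input and output.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "tiny_graph",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # validate the model against the ONNX spec
onnx.save(model, "tiny.onnx")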

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the Working Groups, and the SIGs can be found here.

Community Meetups are held at least once a year. Content from previous community meetups is also available.

Discuss

We encourage you to open Issues or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX released packages are published on PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are also published on PyPI to enable experimentation and early testing.
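
With the optional [reference] dependencies installed, the pure-Python reference evaluator can execute a model directly. A minimal sketch, reusing the tiny.onnx file saved in the introduction example above (the input data is arbitrary):

import numpy as np
from onnx.reference import ReferenceEvaluator

sess = ReferenceEvaluator("tiny.onnx")
x = np.random.randn(1, 4).astype(np.float32)
print(sess.run(None, {"X": x}))  # applies Relu to x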

vcpkg packages

onnx is on the maintenance list of vcpkg, so you can easily use vcpkg to build and install it.

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For PowerShell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing versions of ONNX: pip uninstall onnx.

A C++17 or later compiler is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version when building ONNX.

If you don't have protobuf installed, ONNX will internally download and build it as part of the ONNX build.

Alternatively, you can manually install the protobuf C/C++ libraries and tools with a specific version before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of protobuf library you have: shared libraries are files ending in *.dll/*.so/*.dylib, while static libraries end in *.a/*.lib, and which one you have depends on how you obtained protobuf and how it was built. The default is OFF, so you don't need to run the commands above if you prefer to use a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build is successful, update your PATH to include the protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build is successful, update your PATH to include the protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
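
A slightly fuller sanity check, in case you want to confirm which version was picked up (a minimal sketch using only public onnx module attributes):

import onnx

print("onnx version:", onnx.__version__)
print("IR version:", onnx.IR_VERSION)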

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, ONNX links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, ONNX is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how ONNX links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF, USE_MSVC_STATIC_RUNTIME=0.

    • When set to ON, onnx will dynamically link to the protobuf shared libraries, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF, and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF, onnx will link statically to protobuf, Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries), and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON, ONNX uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON, warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.
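
As an example of combining the options above, the variables are set in the environment before the editable install. Expressed in Python for consistency with the other examples here (equivalent to exporting them in the shell and then running pip install -e . -v; the chosen values are only an illustration):

import os
import subprocess
import sys

# Illustrative option choices; adjust them to your protobuf setup.
os.environ["CMAKE_ARGS"] = "-DONNX_USE_LITE_PROTO=ON -DONNX_USE_PROTOBUF_SHARED_LIBS=OFF"
os.environ["DEBUG"] = "0"  # 1 for a debug build

# Run from the onnx source checkout.
subprocess.run([sys.executable, "-m", "pip", "install", "-e", ".", "-v"], check=True)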

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error (a quick check is sketched after this list).

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.
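
For the first error above, a quick way to confirm which copy of onnx Python is picking up (a minimal sketch; run it from a directory other than the source checkout):

import onnx

# Should point into site-packages, not into the onnx source checkout.
print(onnx.__file__)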

Testing

ONNX uses pytest as its test driver. To run the tests, you will first need to install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest
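
If you prefer driving the tests from Python rather than the command line, pytest can also be invoked programmatically (a minimal sketch; run it from the repository root):

import sys
import pytest

# Equivalent to running "pytest -q" from the repo root.
sys.exit(pytest.main(["-q"]))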

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct

onnx-docker's People

Contributors

askhade, bowenbao, dependabot[bot], faxu, gantman, happybrown, houseroad, jcwchen, kmkwon94, linkerzhang, navanchauhan, prasanthpul, pridkett, randyshuai, rorlich, serkansokmen, snnn, szha, vinitra, vinitra-zz


onnx-docker's Issues

Could Kashgari model which inherits from Keras be converted to onnx model?

import onnxmltools
from keras.models import load_model
import kashgari
input_keras_model = 'E:\KG\kashgari/train\gonghang_2020_12_10'

# Change this path to the output name and path for the ONNX model
output_onnx_model = 'kashgari_model.onnx'

# Load your Keras model
loaded_model = kashgari.utils.load_model(input_keras_model)

onnx_model = onnxmltools.convert_keras(loaded_model.tf_model)

# Save as protobuf
onnxmltools.utils.save_model(onnx_model, output_onnx_model)

The error is as follows:
ValueError: Input 0 of node TFNodes50/Embedding-Token/embedding_lookup was passed float from TFNodes50/Embedding-Token/embeddings:0 incompatible with expected resource.

Process finished with exit code 1

Error while building docker onnx/onnx-ecosystem

Trying to build the Docker image according to the docs with the command docker build . -t onnx/onnx-ecosystem, I got the following error:
Downloading skl2onnx-1.4.4.2.tar.gz (522 kB)

ERROR: Command errored out with exit status 1:
  command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-jq5wgppw/skl2onnx_663c886148fb4540809230a21e152fac/setup.py'"'"'; __file__='"'"'/tmp/pip-install-jq5wgppw/skl2onnx_663c886148fb4540809230a21e152fac/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-yqb5_6fq
      cwd: /tmp/pip-install-jq5wgppw/skl2onnx_663c886148fb4540809230a21e152fac/
 Complete output (5 lines):
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
   File "/tmp/pip-install-jq5wgppw/skl2onnx_663c886148fb4540809230a21e152fac/setup.py", line 35, in <module>
     import pypandoc
 ModuleNotFoundError: No module named 'pypandoc'
 ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Navigation

I am new to docker and onnx. I am converting from Caffe to another format. I read this line:

"Navigate to the converter_scripts folder in the container and edit the appropriate notebook to convert your model to ONNX, or test the accuracy of the conversion using ONNX Runtime."

I do not know how to navigate to the converter scripts. Is there a link, or could a small contribution be made explaining this point? More explanation in general would help here.

ImportError: cannot import name 'libcaffeconverter' from 'coremltools'

Hi I'm using the https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/converter_scripts/caffe_coreml_onnx.ipynb tutorial.
But when I run it I get the ImportError. It's because the coremltools/converters/_caffe_converter.convert() method tries to import libcaffeconverter from coremltools, but the coremltools library directory doesn't contain libcaffeconverter.

Someone else had raised the same issue but it wasn't really solved and was closed.

Torchvision can't import PILLOW_VERSION

ImportError                               Traceback (most recent call last)
in
      2 from torch.autograd import Variable
      3 import torch.onnx
----> 4 import torchvision

/usr/local/lib/python3.6/dist-packages/torchvision/__init__.py in <module>
      1 from torchvision import models
----> 2 from torchvision import datasets
      3 from torchvision import transforms
      4 from torchvision import utils
      5

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/__init__.py in <module>
      7 from .svhn import SVHN
      8 from .phototour import PhotoTour
----> 9 from .fakedata import FakeData
     10 from .semeion import SEMEION
     11 from .omniglot import Omniglot

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/fakedata.py in <module>
      1 import torch
      2 import torch.utils.data as data
----> 3 from .. import transforms
      4
      5

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/__init__.py in <module>
----> 1 from .transforms import *

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in <module>
     14 import warnings
     15
---> 16 from . import functional as F
     17
     18 __all__ = ["Compose", "ToTensor", "ToPILImage", "Normalize", "Resize", "Scale", "CenterCrop", "Pad",

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in <module>
      3 import math
      4 import random
----> 5 from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
      6 try:
      7     import accimage

ImportError: cannot import name 'PILLOW_VERSION'

Wrong python3 package specification in onnx-ecosystem Dockerfile

Problem description:

  • The python3 package name argument to apt contains a typo: python 3.6 - notice the space:

    python 3.6 python3-pip \

  • As a result, python 2.7 is installed. Python 3 (3.5) is being pulled in by python3-dev.

  • Worse, 3.6 is being interpreted as a wildcard matching 1301 packages, most likely unintended and causing image bloat.

Proposed fix:

  • Updating the base image to ubuntu:18.04, the latest LTS, would solve the issue, since python 3.6 is the default python version (currently 3.6.9-1~18.04) - assuming Python 3.6 is required. Ubuntu 16.04 provides only Python 3.5.
  • For being explicit about 3.6, python3-dev could be replaced with python3.6-dev, too.

libcaffeconverter import error

Hi! I followed the tutorial and installed coremltools/onnxmltools on Windows. However, when I run the example, it gives me this error:

from ... import libcaffeconverter
ImportError: cannot import name 'libcaffeconverter'

Report typo.

Hello,

Report typo

[screenshot of the reported typo]

I think it should be changed to accessibility.

thank you.

Correct model name in inference_demos

Please correct the model name in the Jupyter notebook "yoloV3_object_detection_onnxruntime_inference" inside the Jupyter demos:
The model file is named yolov3.onnx, not model.onnx.

Bug in onnx-docker/onnx-ecosystem/converter_scripts/lightgbm_onnx.ipynb

When I execute the notebook, it raises a ValueError:

Initial types are required. See usage of convert(...) in onnxmltools.convert.lightgbm.convert for details

My versions:
onnxmltools==1.6.0
The reason may be an API compatibility issue with the older version; I notice there are many changes in the latest onnxmltools release.
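
For reference, the error means convert_lightgbm was called without initial_types. A minimal, self-contained sketch of passing them (the toy classifier, input name, and file name are placeholders chosen for illustration):

import numpy as np
from lightgbm import LGBMClassifier
from onnxmltools import convert_lightgbm
from onnxmltools.convert.common.data_types import FloatTensorType

# Tiny throwaway classifier just to have something to convert.
X = np.random.rand(100, 4).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)
model = LGBMClassifier(n_estimators=5).fit(X, y)

# Declaring initial_types is what the notebook was missing.
initial_types = [("float_input", FloatTensorType([None, 4]))]
onnx_model = convert_lightgbm(model, initial_types=initial_types)

with open("lightgbm.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())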

Can't open caffe model when converting a Caffe model to ONNX

I am using the prebuilt Docker image to convert a Caffe model to ONNX, and I faced this problem when trying with my model or the AlexNet model for testing.

RuntimeError Traceback (most recent call last)
in
1 # Convert Caffe model to CoreML
----> 2 coreml_model = coremltools.converters.caffe.convert((caffe_model, proto_file))
3
4 # Save CoreML model
5 coreml_model.save(output_coreml_model)

/usr/local/lib/python3.5/dist-packages/coremltools/converters/caffe/_caffe_converter.py in convert(model, image_input_names, is_bgr, red_bias, blue_bias, green_bias, gray_bias, image_scale, class_labels, predicted_feature_name, model_precision)
189 blue_bias,
190 green_bias, gray_bias, image_scale, class_labels,
--> 191 predicted_feature_name)
192 model = MLModel(model_path)
193

/usr/local/lib/python3.5/dist-packages/coremltools/converters/caffe/_caffe_converter.py in _export(filename, model, image_input_names, is_bgr, red_bias, blue_bias, green_bias, gray_bias, image_scale, class_labels, predicted_feature_name)
253 prototxt_path,
254 class_labels,
--> 255 predicted_feature_name)

RuntimeError: Unable to open caffe model provided in the source model path: alex1.caffemodel

I am:
1- Running Ubuntu 16.04 (VirtualBox) on Windows 10.
2- I tried to rename the caffe model and prototxt to 'model, model1' and 'model.caffemodel, model.prototxt'. Also, I already put two copies of each network in Scripts and /scripts/converter-scripts and it still doesn't work.

This is how I implemented the code:

import coremltools
import onnxmltools

# Update your input name and path for your caffe model
proto_file = 'alex.prototxt'
input_caffe_path = 'alex1.caffemodel'

# Update the output name and path for intermediate coreml model, or leave as is
output_coreml_model = 'model.mlmodel'

# Change this path to the output name and path for the onnx model
output_onnx_model = 'model.onnx'

# Convert Caffe model to CoreML
coreml_model = coremltools.converters.caffe.convert((input_caffe_path, proto_file))

# Save CoreML model
coreml_model.save(output_coreml_model)

# Load a Core ML model
coreml_model = coremltools.utils.load_spec(output_coreml_model)

# Convert the Core ML model into ONNX
onnx_model = onnxmltools.convert_coreml(coreml_model)

# Save as protobuf
onnxmltools.utils.save_model(onnx_model, output_onnx_model)

I hope you have seen such a thing before and can help me.

pytorch to onnx notebook fails on imports

ImportError                               Traceback (most recent call last)
<ipython-input-1-cc8851f7a7c0> in <module>
      2 from torch.autograd import Variable
      3 import torch.onnx
----> 4 import torchvision

/usr/local/lib/python3.6/dist-packages/torchvision/__init__.py in <module>
      1 from torchvision import models
----> 2 from torchvision import datasets
      3 from torchvision import transforms
      4 from torchvision import utils
      5 

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/__init__.py in <module>
      7 from .svhn import SVHN
      8 from .phototour import PhotoTour
----> 9 from .fakedata import FakeData
     10 from .semeion import SEMEION
     11 from .omniglot import Omniglot

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/fakedata.py in <module>
      1 import torch
      2 import torch.utils.data as data
----> 3 from .. import transforms
      4 
      5 

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/__init__.py in <module>
----> 1 from .transforms import *

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in <module>
     14 import warnings
     15 
---> 16 from . import functional as F
     17 
     18 __all__ = ["Compose", "ToTensor", "ToPILImage", "Normalize", "Resize", "Scale", "CenterCrop", "Pad",

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in <module>
      3 import math
      4 import random
----> 5 from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
      6 try:
      7     import accimage

ImportError: cannot import name 'PILLOW_VERSION'

Cannot start container

I ran the following command:
docker run --rm -it --name onnx --gpus all onnx/onnx-ecosystem /bin/bash
which complained:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/97a44c71fcaa06bb83388796dbcd0c03943a442693da8a6acf0dfd1f3889e9e7/merged/usr/bin/nvidia-smi: file exists\\\\n\\\"\"": unknown.

NVIDIA/nvidia-docker#825 says the reason is that the NVIDIA driver (e.g., nvidia-smi) is present in the image.
I have installed the NVIDIA driver on the host.
Is it intended that the NVIDIA driver not be installed on the host when using the onnx-runtime container?

lightgbm prediction varies very slightly where other models don't?

I have a lightgbm classifier.

The model's predict_proba returns

{"0": "0.0003362529475889886", "1": "0.999663747052411"}

With the ONNX model:

#saving model
initial_type = [('float_input', FloatTensorType([1, input_shape]))]
onnx_model = convert_lightgbm(lgbmmodel, initial_types=initial_type) 
with open('lgbmONNX.onnx', "wb") as f: f.write(onnx_model.SerializeToString())
content = onnx_model.SerializeToString()
sess = onnxruntime.InferenceSession(content)
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
output_proba_name = sess.get_outputs()[1].name

Prediction on that:

#prediction
ONNXContent = lgbmONNX.SerializeToString()
ONNXSess = onnxruntime.InferenceSession(ONNXContent)
OnnxPrediction = ONNXSess.run([label_name, output_proba_name], {input_name: PayLoad}) 
OnnxPrediction
[array([1], dtype=int64), [{0: 0.0003362894058227539, 1: 0.9996637105941772}]]

I am not concerned about the decimal variation; I just want to confirm the steps are right. Thanks.
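
For what it's worth, the size of that variation is consistent with the ONNX pipeline operating in single precision (FloatTensorType is float32) while LightGBM's native predict_proba works in float64; this is an assumption about the cause, not a confirmed diagnosis. A quick arithmetic check on the two results reported above:

import numpy as np

lgbm_proba = np.array([0.0003362529475889886, 0.999663747052411])
onnx_proba = np.array([0.0003362894058227539, 0.9996637105941772])

# Largest absolute difference is ~3.6e-8, well within float32 rounding
# (float32 carries roughly 7 significant decimal digits, eps ~1.2e-7).
print(np.max(np.abs(lgbm_proba - onnx_proba)))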
