
onnx model conversion · onnx/models (OPEN, 5 comments)

Allwinraj commented on May 26, 2024
onnx model conversion


Comments (5)

chuqingq commented on May 26, 2024

Same question about the mobilenetv2 model.

https://github.com/onnx/models/blob/main/validated/vision/classification/mobilenet/train_mobilenet.ipynb


Allwinraj commented on May 26, 2024

@chuqingq

I was able to convert the resnet50 model using the code below.

```python
import numpy as np
import mxnet

# Symbol and params files downloaded for the pretrained model
mx_symbol = 'params/1.0000-imagenet-resnet18_v1-symbol.json'
mx_params = 'params/1.0000-imagenet-resnet18_v1-0001.params'
onnx_file = 'modelresnet.onnx'

input_shapes = [(1, 3, 224, 224)]
input_types = np.float32

mxnet.onnx.export_model(mx_symbol, mx_params, input_shapes, input_types,
                        onnx_file, dynamic=False)
```
It worked for me.


chuqingq commented on May 26, 2024

@Allwinraj Thank you!

I used some MobileNet code to export an ONNX model, but it does not work for inference.

Code to export the MobileNet model to ONNX:

```python
import mxnet as mx
import numpy as np

dir = './mobilenet-v2-model/params/'
# Downloaded input symbol and params files
sym = dir + '1.0000-imagenet-mobilenetv2_1.0-symbol.json'
params = dir + '1.0000-imagenet-mobilenetv2_1.0-0000.params'

# Standard ImageNet input - 3 channels, 224*224
input_shape = (1, 3, 224, 224)

# Path of the output file
onnx_file = './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'

converted_model_path = mx.onnx.export_model(sym, params, [input_shape],
                                            np.float32, onnx_file)

import onnx
from onnx import checker

# Check validity of the converted ONNX model
model_proto = onnx.load_model(converted_model_path)
checker.check_graph(model_proto.graph)
```

It works.
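(Note that `checker.check_graph` only validates the graph structure, not that the model actually executes. A quick dummy-input pass through ONNX Runtime is a useful extra check; a minimal sketch, assuming the `onnxruntime` package is installed and `onnx_file` is the path from the export snippet above:)

```python
import numpy as np
import onnxruntime as ort

# Run one random input through the exported model to confirm it executes
sess = ort.InferenceSession(onnx_file, providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = sess.run(None, {input_name: dummy})[0]
print(out.shape)  # expect (1, 1000) for an ImageNet classifier
```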

Code for inference, from the [ONNX model ImageNet inference notebook](https://github.com/onnx/models/blob/main/validated/vision/classification/imagenet_inference.ipynb):

```python
# %% [markdown]
# # Inference Demo for ImageNet Models

# %% [markdown]
# ## Overview
# This notebook can be used for inference on ONNX models trained on the **ImageNet** dataset. The demo shows how to use the trained models to do inference in MXNet. Please install the prerequisite packages if they are not already installed.
#
# ## Model Support in This Demo
#
# * SqueezeNet
# * VGG
# * ResNet
# * MobileNet
#
# ## Prerequisites
#
# * Protobuf compiler - `sudo apt-get install protobuf-compiler libprotoc-dev` (required for ONNX; this will work on any Linux system. For detailed installation guidelines head over to the [ONNX documentation](https://github.com/onnx/onnx#installation))
# * ONNX - `pip install onnx`
# * MXNet - `pip install mxnet-cu90mkl --pre -U` (tested on this GPU version; other versions can be used. `--pre` indicates a pre-release build of MXNet, required here for ONNX version compatibility. `-U` uninstalls any existing MXNet version, allowing for a clean install)
# * numpy - `pip install numpy`
# * matplotlib - `pip install matplotlib`
#
# In order to do inference with a Python script:
# * Generate the script: in the Jupyter Notebook browser, go to File -> Download as -> Python (.py)
# * Run the script: `python imagenet_inference.py`

# %% [markdown]
# ### Import dependencies
# Verify that all dependencies are installed using the cell below. Continue if no errors are encountered; warnings can be ignored.

# %%
import mxnet as mx
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
from mxnet.gluon.data.vision import transforms
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
import os

# %% [markdown]
# ### Test Images
# A test image will be downloaded to test out inference. Feel free to provide your own image instead.

# %%
# mx.test_utils.download('https://s3.amazonaws.com/model-server/inputs/kitten.jpg')

# %% [markdown]
# ### Download label file for ImageNet
# Download and load the synset.txt file containing the class labels for ImageNet

# %%
# mx.test_utils.download('https://s3.amazonaws.com/onnx-model-zoo/synset.txt')
with open('./mobilenet-v2-model/synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

# %% [markdown]
# ### Import ONNX model
# Import a model from ONNX to MXNet symbols and params

# %%
# Enter path to the ONNX model file
model_path = './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'
#model_path = './mobilenet-v2-model/mobilenetv2-12.onnx'
#model_path = './mobilenet-v2-model/mobilenetv2-12-rknn.onnx'
sym, arg_params, aux_params = import_model(model_path)

# %% [markdown]
# ### Read image
# `get_image(path, show=False)`: read and show the image, taking the `path` as input

# %%
Batch = namedtuple('Batch', ['data'])
def get_image(path, show=False):
    img = mx.image.imread(path)
    if img is None:
        return None
    if show:
        plt.imshow(img.asnumpy())
        plt.axis('off')
    return img

# %% [markdown]
# ### Preprocess image
# `preprocess(img)`: preprocess the inference image -> resize to 256x256, take a center crop of 224x224, normalize the image, add a dimension to batchify the image

# %%
def preprocess(img):
    transform_fn = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    img = transform_fn(img)
    img = img.expand_dims(axis=0)
    return img

# %% [markdown]
# ### Predict
# `predict(path)`: takes the `path` of the input image, displays it, and prints the top 5 predictions

# %%
def predict(path):
    img = get_image(path, show=True)
    img = preprocess(img)
    mod.forward(Batch([img]))
    # Take softmax to generate probabilities
    scores = mx.ndarray.softmax(mod.get_outputs()[0]).asnumpy()
    # Print the top-5 inferred classes
    scores = np.squeeze(scores)
    a = np.argsort(scores)[::-1]
    for i in a[0:5]:
        print('class=%s ; probability=%f' % (labels[i], scores[i]))

# %% [markdown]
# ### Load the network for inference
# Use `mx.mod.Module` to define the network architecture and bind the parameter values using `mod.set_params`. `mod.bind` tells the network the shape of the input and labels to expect.

# %%
# Determine and set context
if len(mx.test_utils.list_gpus()) == 0:
    ctx = mx.cpu()
else:
    ctx = mx.gpu(0)
# Load module
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True, allow_extra=True)

# %% [markdown]
# ### Generate predictions
# The top 5 classes (in order), along with the probabilities generated for the image, are displayed in the output of the cell below

# %%
# Enter path to the inference image below
img_path = './mobilenet-v2-model/kitten.jpg'
predict(img_path)
```

It does not work. The result is:

```
$ python imagenet_inference.py
Traceback (most recent call last):
  File "imagenet_inference.py", line 65, in <module>
    sym, arg_params, aux_params = import_model(model_path)
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_model.py", line 60, in import_model
    sym, arg_params, aux_params = graph.from_onnx(model_proto.graph, opset_version=model_opset_version)
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py", line 116, in from_onnx
    inputs = [self._nodes[i] for i in node.input]
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py", line 116, in <listcomp>
    inputs = [self._nodes[i] for i in node.input]
KeyError: 'mobilenetv20_features_relu60_relu6_min'
```

I don't know why. Perhaps the legacy `onnx2mx` importer cannot resolve one of the nodes emitted by the newer `mx.onnx` exporter (the missing key looks like part of a ReLU6 decomposition), but I'm not sure.


Allwinraj commented on May 26, 2024

@chuqingq

Did you try

```python
import onnxruntime
sess = onnxruntime.InferenceSession('model.onnx')
```

instead of

```python
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
```

ONNX Runtime worked for me.
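(For completeness, an end-to-end ONNX Runtime version of the notebook's `predict()` might look like the sketch below. The paths are the ones used earlier in the thread; the Pillow-based preprocessing only approximates the notebook's resize/crop/normalize, and the output may need a softmax depending on the exported graph.)

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

model_path = './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'
img_path = './mobilenet-v2-model/kitten.jpg'

# Approximate the notebook preprocessing: resize to 256, center-crop 224,
# scale to [0, 1], then normalize with the ImageNet mean/std
img = Image.open(img_path).convert('RGB').resize((256, 256))
img = img.crop((16, 16, 240, 240))  # 224x224 center crop
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
x = x.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # NCHW, batch of 1

sess = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
scores = sess.run(None, {sess.get_inputs()[0].name: x})[0]
probs = np.squeeze(scores)  # may be logits; apply softmax if needed
for i in np.argsort(probs)[::-1][:5]:
    print(i, probs[i])  # map i through synset.txt for class names
```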


chuqingq commented on May 26, 2024

> @chuqingq
>
> Did you try
>
> ```python
> import onnxruntime
> sess = onnxruntime.InferenceSession('model.onnx')
> ```
>
> instead of
>
> ```python
> from mxnet.contrib.onnx.onnx2mx.import_model import import_model
> ```
>
> ONNX Runtime worked for me.

@Allwinraj

Sorry, I didn't use onnxruntime.InferenceSession to import the model.

I used rknn_toolkit2 to convert the ONNX model to an RKNN model and run inference.
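(For anyone following the same path, the rknn_toolkit2 conversion looks roughly like the sketch below. This is a sketch under assumptions: the target platform, the normalization values, and the output path are illustrative, not taken from this thread.)

```python
from rknn.api import RKNN

rknn = RKNN()
# Normalization is folded into the RKNN model; these ImageNet mean/std
# values and the rk3588 target are assumptions for illustration
rknn.config(mean_values=[[123.675, 116.28, 103.53]],
            std_values=[[58.395, 57.12, 57.375]],
            target_platform='rk3588')
rknn.load_onnx(model='./mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx')
rknn.build(do_quantization=False)
rknn.export_rknn('./mobilenet-v2-model/mobilenetv2.rknn')

# Run on the simulator (or a connected board) to verify the conversion
rknn.init_runtime()
# outputs = rknn.inference(inputs=[img])  # img: 224x224x3 uint8 ndarray
rknn.release()
```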

