Comments (3)
Also, I've tried different versions of onnxruntime, from 1.15.0 to 1.19.0-dev-nightly, but they all failed. The error I get is always the same:
```
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Trilu(14) node with name 'Trilu'
```
Now we support the following types for the first input:
- tensor(bool)
- tensor(double)
- tensor(float)
- tensor(int64)
As mentioned above, Trilu on CPUExecutionProvider only supports those four types, while CUDAExecutionProvider supports all main types. You can try the CUDA EP until CPU EP support is added, or you can change the input dtype to int64 if that also meets your need.
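For the int64 route, a minimal sketch of the idea (numpy only; the session line is illustrative and assumes the model's input is also re-declared as INT64):

```python
import numpy as np

# Cast the int32 feed to int64 so the CPU EP can find a Trilu kernel
# (tensor(int64) is in the supported list above).
x_i32 = np.random.randint(10, size=(1, 1, 8, 8)).astype(np.int32)
x_i64 = x_i32.astype(np.int64)

assert x_i64.dtype == np.int64
assert np.array_equal(x_i32, x_i64)  # values are preserved, only the dtype widens

# Alternatively, prefer the CUDA EP with CPU fallback (requires a CUDA build):
# session = onnxruntime.InferenceSession(
#     "single_operator_model.onnx",
#     providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
# )
```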
Also, there are some issues with your script; you may need to modify it accordingly:
```python
import onnx
from onnx import helper, TensorProto
import numpy as np
import onnxruntime

input_1 = helper.make_tensor_value_info('input_1', TensorProto.INT32, [1, 1, 1024, 1024])
# fix: 'k' is a 0-D tensor (a scalar), so its shape is [] rather than [1]
input_2 = helper.make_tensor_value_info('input_2', TensorProto.INT64, [])
output = helper.make_tensor_value_info('output', TensorProto.INT32, [1, 1, 1024, 1024])

def build_model():
    node_def = helper.make_node(
        'Trilu',
        ['input_1', 'input_2'],
        ['output'],
        name="Trilu"
    )
    graph_def = helper.make_graph(
        nodes=[node_def],
        name='SingleOperatorGraph',
        inputs=[input_1, input_2],
        outputs=[output],
        initializer=[]
    )
    model_def = helper.make_model(
        graph_def,
        producer_name='MyModel',
        opset_imports=[helper.make_opsetid("", 14)],
    )
    return model_def

def run_model(model_path):
    # fix: the fed data must match the declared INT32 input type
    input_data_1 = np.random.randint(10, size=(1, 1, 1024, 1024)).astype(np.int32)
    input_data_2 = np.array(-1).astype(np.int64)
    np.save("trilu_i1", input_data_1)
    np.save("trilu_i2", input_data_2)
    session = onnxruntime.InferenceSession(model_path, providers=['CPUExecutionProvider'])
    input_names = [input.name for input in session.get_inputs()]
    inputs = {input_names[0]: input_data_1, input_names[1]: input_data_2}
    output = session.run(None, inputs)
    print(output)

model_def = build_model()
onnx.checker.check_model(model_def)
save_path = "single_operator_model.onnx"
onnx.save_model(model_def, save_path)
print(f"{onnxruntime.__version__=}")
run_model(save_path)
```
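If it helps to sanity-check the result, the Trilu output can be compared against numpy: with no `upper` attribute, Trilu defaults to `upper=1` (keep the upper triangle), so with `k = -1` the output should match `np.triu(input, -1)`:

```python
import numpy as np

# Reference for what Trilu(upper=1, k=-1) computes on a small example:
# everything below the (-1)-th diagonal is zeroed, the rest is kept.
x = np.arange(16, dtype=np.int32).reshape(4, 4)
expected = np.triu(x, -1)

assert expected[1, 0] == x[1, 0]   # on the -1 diagonal: kept
assert expected[2, 0] == 0         # below the -1 diagonal: zeroed
```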