Comments (5)
It's not an issue.
from onnxruntime.
Please use our C API and the dynamic library (onnxruntime.dll). You won't see any C++ code (including ONNX and protobuf) through the C API, so there won't be any conflict or inconsistency.
from onnxruntime.
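The C-API-only boundary described above can be sketched as follows. This is a minimal illustration, assuming the onnxruntime C API header and onnxruntime.dll are available; session creation and inference are elided. Only plain C types cross the DLL boundary, so the caller's C++ runtime, ONNX, and protobuf versions never collide with the ones compiled into onnxruntime.

```c
#include <stdio.h>
#include "onnxruntime_c_api.h"  /* pure C interface; no C++/ONNX/protobuf types leak through */

int main(void) {
    /* Obtain the versioned function table -- the only entry point the C API exposes. */
    const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

    OrtEnv* env = NULL;
    OrtStatus* status = ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "demo", &env);
    if (status != NULL) {
        /* Errors are returned as opaque OrtStatus pointers, not C++ exceptions. */
        fprintf(stderr, "CreateEnv failed: %s\n", ort->GetErrorMessage(status));
        ort->ReleaseStatus(status);
        return 1;
    }

    /* ... create session options, load a model, run inference ... */

    ort->ReleaseEnv(env);
    return 0;
}
```

Because every call goes through the `OrtApi` function-pointer table and all objects are opaque handles, the application can be built with a completely different toolchain and dependency set than onnxruntime.dll itself.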
This issue is reported by someone who's implementing custom ops.
from onnxruntime.
Put the custom op inside onnxruntime.dll itself. You may create a private fork of onnxruntime and add the op there.
from onnxruntime.
Custom ops must statically link to onnxruntime, and they must use the C++ API, not the C API. They must use the ONNX header files generated from onnxruntime, not the upstream ones. All third-party dependencies (e.g. protobuf) must be the exact versions used in onnxruntime. And since we maintain a private fork of ONNX, you can't use the upstream one.
from onnxruntime.
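The build constraints above might look like this in a fork's CMake files. This is a hypothetical sketch: `ONNXRUNTIME_ROOT`, the target names, and the generated-header path are illustrative assumptions, not actual onnxruntime build targets.

```cmake
# Hypothetical sketch: compiling a custom op inside a private onnxruntime fork
# so it shares the runtime's exact ONNX headers and third-party versions.
add_library(my_custom_op OBJECT my_custom_op.cc)

target_include_directories(my_custom_op PRIVATE
    ${ONNXRUNTIME_ROOT}/include      # the fork's C++ headers, not upstream ONNX
    ${CMAKE_BINARY_DIR}/onnx         # ONNX headers generated by this build
)

# Link the same protobuf the runtime itself uses -- version skew between
# the op and the runtime is exactly what causes the reported conflicts.
target_link_libraries(my_custom_op PRIVATE protobuf::libprotobuf)
```

Building the op as part of the same build tree (rather than against installed upstream packages) is what guarantees the "exact same version" requirement for every dependency.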