Comments (28)
I don't understand why this is closed. Shouldn't the nuget package copy the native file automatically?
from onnxruntime.
I struggled with this until I fired up "Depends" and discovered that it requires the VC runtime.
from onnxruntime.
I'm having the same issue on 1.4.2.
Any suggestions?
System.TypeInitializationException: The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.
 ---> System.DllNotFoundException: Unable to load DLL 'onnxruntime' or one of its dependencies: The specified module could not be found. (0x8007007E)
   at Microsoft.ML.OnnxRuntime.NativeMethods.OrtGetApiBase()
   at Microsoft.ML.OnnxRuntime.NativeMethods..cctor()
   --- End of inner exception stack trace ---
   at Microsoft.ML.OnnxRuntime.SessionOptions..ctor()
   at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath)
   at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary)
OS Platform: Windows 7 64bit
ONNX Runtime installed from Nuget Microsoft.ML.OnnxRuntime in VS19
ONNX Runtime version: 1.4.2
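[Editor's note] When a .NET stack trace ends in DllNotFoundException, it can be hard to tell whether the native DLL itself is missing or one of its dependencies (such as the VC runtime) is. A quick way to narrow this down is to try loading the library directly, outside of .NET. A minimal sketch in Python; the path below is an assumption, so point it at the DLL in your own build output:

```python
import ctypes
import os

def can_load(library_path):
    """Try to load a native library; return True on success, False if the
    OS loader cannot resolve it or one of its dependencies."""
    try:
        ctypes.CDLL(library_path)
        return True
    except OSError:
        return False

# Hypothetical path -- adjust to your project's build output.
dll = os.path.join("bin", "x64", "Debug", "netcoreapp3.1",
                   "runtimes", "win-x64", "native", "onnxruntime.dll")
# False here means the DLL itself, or a dependency such as the VC runtime,
# cannot be resolved by the OS loader.
print(can_load(dll))
```

If the DLL exists on disk but still fails to load, the missing piece is a dependency, which matches the "Depends" diagnosis mentioned above.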
from onnxruntime.
Having the same issue using AnyCPU - it doesn't download/install the 32-bit DLLs, so the project I'm trying to use it in fails with
System.BadImageFormatException: 'Could not load file or assembly 'Aspose.OCR, Version=21.1.2.0, Culture=neutral, PublicKeyToken=716fcc553a201e56' or one of its dependencies. An attempt was made to load a program with an incorrect format.'
(because it loads the 64-bit DLLs)
I downloaded the x86 release zip and was able to place onnxruntime.dll in the right directory (\packages\Microsoft.ML.OnnxRuntime.MKLML.1.4.0\runtimes\win-x86\native\) to (presumably, since I can't run it) fix it, but I'm not able to find pre-compiled x86 versions of mklml.dll and libiomp5md.dll to see if this would fix it for them as well.
I did some digging to see if I could find a solution to your issue but was unable to locate any x86 binaries. I think your best bet might be to download the Microsoft.ML source code and try to build the x86 libs you need.
Keep in mind that not all functionality is supported when running x86, specifically TensorFlow and LightGBM; this is stated in the Microsoft.ML GitHub repo, and there is another reference to x86 not being supported here. Who knows what issues might arise when building the lib with your setup, so I would suggest migrating your project to x64 if possible.
from onnxruntime.
I was facing this issue (or possibly a similar issue), and I ended up installing https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/1.5.2?_src=template which resolved the issue for me.
from onnxruntime.
I've had similar problems with onnxruntime.dll when deploying my application on other computers.
In the end it came down to installing "Visual C++ Redistributable for Visual Studio" (VC_redist.x64.exe & VC_redist.x86.exe). After installing these libraries the problem was resolved and onnxruntime.dll loaded without problems.
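[Editor's note] Before reinstalling the redistributables, you can check whether the system loader can find the VC runtime at all. A hedged sketch: vcruntime140 and msvcp140 are the usual VC++ 2015-2022 runtime DLL names (an assumption about what onnxruntime.dll links against); on a machine without them, this simply reports them as missing:

```python
import ctypes.util

def find_runtime(names):
    """Map each library base name to whatever the system loader resolves it
    to, or None if it cannot be found at all."""
    return {name: ctypes.util.find_library(name) for name in names}

# VC++ runtime DLLs that native onnxruntime builds typically depend on
# (names are an assumption; verify with a tool like Dependency Walker).
report = find_runtime(["vcruntime140", "msvcp140"])
for name, found in report.items():
    print(f"{name}: {'OK -> ' + found if found else 'NOT FOUND'}")
```

"NOT FOUND" for either name suggests installing the VC redistributables, as described above.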
from onnxruntime.
@shahasad @jignparm can one of you take a look? thanks!
from onnxruntime.
The current build is supported for Windows 10, x64 CPU only. Support for other platforms is coming in subsequent releases. Meanwhile, please feel free to build from source.
from onnxruntime.
I built from source but could not find the dll. I’m also running Windows 10 x64.
from onnxruntime.
The problem is due to Visual Studio not copying the dependent DLLs (onnxruntime.dll and mkldnn.dll) to the \x64\Debug directory. After copying both files, the project runs successfully.
from onnxruntime.
That is correct! Also, the ONNX model I exported with the current version of the PyTorch ONNX exporter uses a version of Gemm that is not compatible.
from onnxruntime.
Same issue here.
from onnxruntime.
This should have been fixed in later versions. Please try v0.5.
from onnxruntime.
I have the same thing on 0.5
from onnxruntime.
I have the same thing on 1.1.0
from onnxruntime.
You have to set your CPU type to either x86 or x64.
@shahasad @jignparm Perhaps this could be made clearer; maybe the build should fail if the CPU type is not set up correctly in Visual Studio.
from onnxruntime.
You have to set your cpu type to either x86 or x64.
I have the same thing on 1.1.0
That should not be the case -- it should work for x86, x64 as well as AnyCPU, depending on which architecture applies to the project (i.e. native C++, .Net Core or .Net Framework).
The package ships with the appropriate targets files, so you should not need to copy any DLLs (the original issue was created prior to the package fixes.)
Can you post a way to reproduce the error?
from onnxruntime.
To be honest, I haven't tried to reproduce this since the 0.4 era; I only pinged you because I saw someone saying they had it with 1.1.0. I still leave mine configured with x86 regardless of whether my code is managed or not. If I have time I'll experiment.
from onnxruntime.
I believe one issue was fixed by changing the platform, but I have not tried to replicate the problem.
One thing I know for sure: I have a project - project A - that uses onnxruntime as a NuGet package. I have another project - project B - that references project A. Unless onnxruntime is explicitly added as a NuGet package to project B, onnxruntime.dll doesn't get copied into the bin folder. I think this is a property of NuGet more generally, and not onnxruntime per se.
from onnxruntime.
It happens to me also on 1.1.1, and I don't have a way to reproduce it for now.
from onnxruntime.
I figured out my issue!
I am using Paket as a package manager instead of the VS NuGet manager, so my project doesn't have a packages.config, which is why onnxruntime.dll is never copied.
props.xml has the following condition:
<ItemGroup Condition="Exists('packages.config') OR Exists('$(MSBuildProjectName).packages.config') OR Exists('packages.$(MSBuildProjectName).config')">
I will be creating a new issue for this
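[Editor's note] The condition above only enables the native-DLL copy step when one of three candidate files exists next to the project, which is exactly why a Paket-managed project never triggers it. The same logic, sketched in Python to make the failure mode explicit (file names are taken from the condition above; the project name is illustrative):

```python
from pathlib import Path

def copy_targets_apply(project_dir, project_name):
    """Mirror the MSBuild Exists() checks from the package's props file:
    the native-DLL copy step runs only if one of these files is present."""
    candidates = [
        "packages.config",
        f"{project_name}.packages.config",
        f"packages.{project_name}.config",
    ]
    return any((Path(project_dir) / name).exists() for name in candidates)
```

A Paket project has paket.dependencies/paket.references files instead, so all three Exists() checks fail and the ItemGroup is skipped entirely.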
from onnxruntime.
I've had this error with Microsoft.ML.OnnxRuntime.Gpu 1.3.0.
Turns out I'd accidentally removed "NVIDIA GPU Computing Toolkit" from the PATH.
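[Editor's note] Missing PATH entries like this are easy to spot by scanning PATH for the file the loader needs. A small sketch; the cudart DLL name below is only an example of what the CUDA toolkit's PATH entry provides, so substitute whatever your execution provider requires:

```python
import os

def dirs_on_path_with(filename):
    """Return the PATH entries that actually contain the given file.
    An empty list means the loader will not find it via PATH."""
    hits = []
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        if entry and os.path.isfile(os.path.join(entry, filename)):
            hits.append(entry)
    return hits

# Illustrative name only -- check which CUDA runtime DLL your version needs.
print(dirs_on_path_with("cudart64_101.dll"))  # [] => toolkit missing from PATH
```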
from onnxruntime.
@devnet123 I get the same error on both the 1.5.1 and 1.5.2 versions, but I have fixed it.
Here is how I fixed it:
Go here: download onnxruntime.dll from GitHub.
I downloaded the onnxruntime-win-x64-1.5.1.zip package since it matches my OS and architecture.
Once you have that downloaded, unzip it and go to *onnxruntime-win-x64-1.5.1\lib*.
You should see onnxruntime.dll inside there; just leave it for now.
Keep that directory open and then go back to your project.
Go to the directory (assuming you're using .NET Core and x64): *bin\x64\Debug\netcoreapp3.1\runtimes\win-x64\native*
What matters is that you end up in the native folder.
Copy onnxruntime.dll into that native folder, which might also have LdaNative.dll and tensorflow.dll in it (mine does for my setup).
Now when you run your program, it should be able to find the DLL (module) the application was complaining about.
I hope that fixes your problem!
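[Editor's note] The manual steps above amount to a single file copy; here is a sketch that automates it. Both paths are assumptions taken from the comment, so adjust the extraction folder and build-output folder to your machine:

```python
import shutil
from pathlib import Path

def copy_native_dll(extracted_lib_dir, native_dir, name="onnxruntime.dll"):
    """Copy the native DLL from the extracted release zip's lib folder into
    the build output's native folder, creating the folder if needed."""
    src = Path(extracted_lib_dir) / name
    dst_dir = Path(native_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    return shutil.copy2(src, dst_dir / name)

# Paths from the comment above (illustrative):
# copy_native_dll(r"onnxruntime-win-x64-1.5.1\lib",
#                 r"bin\x64\Debug\netcoreapp3.1\runtimes\win-x64\native")
```

Note that this is a workaround; the package's targets files are supposed to perform this copy automatically, so a manual copy may need repeating after a clean build.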
from onnxruntime.
I struggled with this until I fired up "Depends" and discovered that it requires the VC runtime.
Thank You for this, You saved me hours.
from onnxruntime.
I struggled with this until I fired up "Depends" and discovered that it requires the VC runtime.
I solved it this way too, thanks!
from onnxruntime.
I was having the same problem with VS2022 and deploying to Windows x86.
If I install on the development computer, it all runs fine, but on a clean Win10 computer (recently installed) I get the error. The error was solved by installing the "Visual C++ Redistributable for Visual Studio" as @nelemans1971 suggested. Why is it not included in the NuGet dependencies?
from onnxruntime.
Same issue here, and solved by installing the VC redists.
It seems Microsoft just assumes everyone has the VC redists installed by default, and that's really a pain in the ass. Not only onnxruntime, but a huge amount of software.
If the VC redists are so important, why not include them in Windows? Confusing.
from onnxruntime.