Comments (29)
The build passes, but sadly the import fails with
libc++abi: terminating due to uncaught exception of type std::runtime_error: Failed to load device
The issue is that MTLCreateSystemDefaultDevice doesn't return a default device. The following gives an indication of how to fix it, but adding -framework CoreGraphics to LDFLAGS had no impact:
In macOS, in order for the system to provide a default Metal device object, you must link to the Core Graphics framework. You usually need to do this explicitly if you’re writing apps that don’t use graphics by default, such as command line tools.
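For reference, the linking change the quoted docs describe would look roughly like this in a Makefile-style build that honors LDFLAGS (a sketch only; as noted above, this particular change did not fix the failure here, and the exact variable depends on the build system):

```shell
# Explicitly link Core Graphics so MTLCreateSystemDefaultDevice can return
# a device from a non-GUI (command line) binary, per Apple's docs.
export LDFLAGS="${LDFLAGS:-} -framework CoreGraphics"
```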
from mlx.
0.0.9 builds and passes. We should be able to release mlx on conda-forge soon 🎉
from mlx.
@ngam it turns out you were 100% right, the packaging was broken (🤦 I didn't consider that I had gguf already installed on the machines I tested on). We fixed this in the latest release. Hopefully that makes it easier to get the conda package working!
from mlx.
Looking at your build log, it might be the case that you are on a macOS version that is not supported.
We only support macOS >= 13.4, and have updated the docs with clearer instructions on that.
Also, we have updated the CMake configuration to give a clearer indication of the macOS version being used for the build and to throw a clear error when it is incompatible.
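The minimum-version gate described above can be sketched in plain shell (illustrative only; mlx's actual check lives in CMake). sort -V orders dotted version strings numerically, so if the minimum sorts first, the detected version is new enough:

```shell
min="13.4"          # minimum supported macOS
detected="14.0"     # e.g. from: sw_vers -productVersion
# If $min is the lowest of the two, then $detected >= $min.
lowest=$(printf '%s\n%s\n' "$min" "$detected" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
  echo "macOS $detected is supported"
else
  echo "macOS $detected is too old (need >= $min)" >&2
fi
```

Plain string comparison would get this wrong (e.g. "13.10" < "13.4" lexically), which is why a version-aware sort is used.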
from mlx.
CMake correctly reports -- Detected macOS version 14.0
But this actually helped: searching the log for why you might think I have an older macOS, I found that it still set the macOS deployment target to 11.0. Setting it to 14.0 fixes the build.
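For anyone hitting the same mismatch, one way to pin the target before invoking the build is the standard environment variable (a sketch; where the feedstock actually sets this may differ):

```shell
# Pin the macOS deployment target so the build doesn't fall back to 11.0.
export MACOSX_DEPLOYMENT_TARGET="14.0"
# CMake-based builds alternatively accept it as a cache variable:
#   cmake -DCMAKE_OSX_DEPLOYMENT_TARGET=14.0 ...
```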
from mlx.
Should be resolved in conda-forge/mlx-feedstock#4
from mlx.
It happens in both cross- and non-cross-compilation mode. You can see the environment in the CI logs of conda-forge/mlx-feedstock#4 . That is a cross-compilation log, but as the error is the same whether cross-compiled or not, it should give you the relevant information. Happy to provide more details, but I'm not sure what you would need.
from mlx.
The only key difference (I can think of) is that we are using LLVM Clang 16.0.6 (not the default Apple Clang 15.0.0 found on macOS nowadays, at least on mine).
We could potentially test with the Apple Clang provided in the images, but that may require some additional work on our side...
from mlx.
@awni looks like it should be closer to working (I will test more thoroughly soon, but it passes locally for me). However, this PR introduces a problem: #350 (we need to package the new lib correctly)
from mlx.
Thanks @xhochy and @awni. I think we will benefit from someone on the mlx side getting involved as well (anyone interested we can add them as maintainers on the conda-forge side). As always @awni, don't hesitate to ping one of us if needed
We are compiling with our own LLVM Clang compilers and we are building against the macOS 13.3 SDK. If at any point we need to increase the SDK version, e.g., for new features in Metal, we can do that.
from mlx.
It currently fails with
[2/74] Building arg_reduce.air
FAILED: mlx/backend/metal/kernels/arg_reduce.air /Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/build/temp.macosx-11.0-arm64-cpython-39/mlx.core/mlx/backend/metal/kernels/arg_reduce.air
cd /Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/build/temp.macosx-11.0-arm64-cpython-39/mlx.core/mlx/backend/metal/kernels && xcrun -sdk macosx metal -Wall -Wextra -fno-fast-math -c /Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/arg_reduce.metal -I/Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work -o arg_reduce.air
In file included from /Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/arg_reduce.metal:6:
/Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/utils.h:15:27: error: variable in constant address space must be initialized
static const constant U max;
^
/Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/utils.h:16:27: error: variable in constant address space must be initialized
static const constant U min;
^
/Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/utils.h:17:27: error: variable in constant address space must be initialized
static const constant U finite_max;
^
/Users/uwe/mambaforge/conda-bld/mlx_1701855483044/work/mlx/backend/metal/kernels/utils.h:18:27: error: variable in constant address space must be initialized
static const constant U finite_min;
Full build log at https://gist.github.com/xhochy/3290e73eba3fce442f4ba453fe8def60
As I have no experience with metal, I'm unsure about the right fix for this.
from mlx.
I have a PR ready that builds it for Apple Silicon on conda-forge: conda-forge/mlx-feedstock#3 Sadly this fails in cross-compilation mode. As we don't have Apple Silicon runners on conda-forge, this makes it hard to build. Locally (without cross-compilation), it passes.
from mlx.
Cool! Would love to support install with Conda. Could you say more about how the distribution works? Right now we build w/ cross compilation for PyPI. Could we reuse that same machinery for Conda forge?
from mlx.
It works similarly to the PyPI machinery, but the main difference is that conda-forge brings its own compiler infrastructure. With Metal, this shouldn't be too much of a difference as it simply calls out into the SDK.
The current problem, though, is that compilation stops with:
2023-12-20T09:24:46.5479330Z [4/74] Building binary.air
2023-12-20T09:24:46.5600500Z FAILED: mlx/backend/metal/kernels/binary.air /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/build/temp.macosx-11.0-arm64-cpython-311/mlx.core/mlx/backend/metal/kernels/binary.air
2023-12-20T09:24:46.5700970Z cd /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/build/temp.macosx-11.0-arm64-cpython-311/mlx.core/mlx/backend/metal/kernels && xcrun -sdk macosx metal -Wall -Wextra -fno-fast-math -c /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/binary.metal -I/Users/runner/miniforge3/conda-bld/mlx_1703063827515/work -o binary.air
2023-12-20T09:24:46.5801130Z In file included from /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/binary.metal:6:
2023-12-20T09:24:46.5901600Z /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/utils.h:15:27: error: variable in constant address space must be initialized
2023-12-20T09:24:46.6004170Z static const constant U max;
2023-12-20T09:24:46.6106660Z ^
2023-12-20T09:24:46.6207310Z /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/utils.h:16:27: error: variable in constant address space must be initialized
2023-12-20T09:24:46.6307040Z static const constant U min;
2023-12-20T09:24:46.6414930Z ^
2023-12-20T09:24:46.6517150Z /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/utils.h:17:27: error: variable in constant address space must be initialized
2023-12-20T09:24:46.6598180Z static const constant U finite_max;
2023-12-20T09:24:46.6699680Z ^
2023-12-20T09:24:46.6777010Z /Users/runner/miniforge3/conda-bld/mlx_1703063827515/work/mlx/backend/metal/kernels/utils.h:18:27: error: variable in constant address space must be initialized
2023-12-20T09:24:46.6879650Z static const constant U finite_min;
2023-12-20T09:24:46.6983390Z ^
2023-12-20T09:24:46.7084820Z 4 errors generated.
If I compile that natively (on Apple Silicon), I get the same error if I use "11.0" as the Xcode SDK and deployment target. If I then increase this to 13.3, it passes. In the cross-compilation setup, the error sadly persists even with the more modern SDK and deployment target.
from mlx.
One issue could also be the 13.3 SDK, as only 13.4 is supported. Sadly, I don't know of a reliable way to retrieve a newer SDK in CI.
from mlx.
Sadly, I don't know of a reliable way to retrieve a newer SDK in CI.
Newer SDKs are usually readily available in the newer VM images we could use.
from mlx.
The build passes but sadly the import fails with
libc++abi: terminating due to uncaught exception of type std::runtime_error: Failed to load device
The issue is that MTLCreateSystemDefaultDevice doesn't return a default device. The following gives an indication of how to fix it, but adding -framework CoreGraphics to LDFLAGS had no impact:
In macOS, in order for the system to provide a default Metal device object, you must link to the Core Graphics framework. You usually need to do this explicitly if you’re writing apps that don’t use graphics by default, such as command line tools.
@awni, we are again facing this error on arm64. Build passes, but runtime error...
from mlx.
I've never seen that issue before (not being able to load the default device). Are you compiling natively on an Apple silicon machine? Can you say more about the setup/environment?
from mlx.
I tried building with Clang 17.0.6 and did not encounter this issue:
-- The CXX compiler identification is Clang 17.0.6
I see you are continuing to work on this!! How is it going, any updates?
from mlx.
From conda-forge/mlx-feedstock#4 (comment):
Current status: as is,
- ✅ builds and runs successfully without cf compilers (i.e., with Apple's arm64 compilers) using both native and cross compiling (note that the compilers are arm64, so it is not really cross compiling). Also, python -m unittest discover python/tests passes.
- ❌❌ does not run when built with conda-forge's Clang compilers (not native and not cross compiling); that is, it builds fine, but doesn't actually work.
- ❌ does not run when built with Apple's x86_64 Clang; that is, it builds fine, but doesn't actually work.
For the latter two, the error is the same:
libc++abi: terminating due to uncaught exception of type std::runtime_error: Failed to load device
Has anyone tested cross-compiling here?
I can test with the Clang 17 we have available in conda-forge (update: tested, fails ❌)
from mlx.
Hi, if you try again on main, I wonder if it might find the device now? We changed the way we find the default device in #370.
from mlx.
@ngam could you say more about packaging the library correctly? It seems to work fine with PyPI? I tested it here: https://test.pypi.org/project/awni-test-mlx/0.0.7/
from mlx.
Oh... I only tested for conda-forge (not PyPI) and assumed it would be the same for PyPI. This may be limited to conda packaging; if so, @xhochy or I will submit a patch if needed. For now, I believe we have this working on conda-forge (pending a new release from you and pending @xhochy's review). See conda-forge/mlx-feedstock#4 (comment).
from mlx.
Oh ok, let me know if we need to change anything on our end!
from mlx.
…and pushed to conda-forge.
Verified that it works. I successfully fine-tuned Mistral-7B-v0.1
on my M1 GPU.
from mlx.
This is so awesome!
One more dumb question: as we do new releases, how does the conda-forge distribution get updated?
from mlx.
conda-forge has a bot called @regro-cf-autotick-bot that will issue new PRs to https://github.com/conda-forge/mlx-feedstock where I will review and merge (and I have hopes that @ngam will continue to support me there). Once the PR is merged, the package will be available roughly 1h later on conda-forge.
from mlx.
Thanks! And thanks for setting this up. I will plan to add the new install path to our docs
from mlx.
Thank you all for working on this! Very happy to see it landing on conda-forge.
from mlx.