
Comments (11)

tcapelle commented on July 17, 2024

I am curious if someone managed to run this on a laptop outside of the Ultras.


tcapelle commented on July 17, 2024

Yes, I am using the provided Mistral example. It's not a typo; it takes around 80 seconds to generate one token.


awni commented on July 17, 2024

Yes, that's an oversight: the Mistral example does fp16, but the Llama example does fp32 by default, since that's what the weights are saved in.

You can see an example of casting the weights in the Mistral file. We should add the same for Llama (and probably just save them as fp16 in the first place as it doesn't seem to make a difference).
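For reference, a minimal sketch of that cast, assuming the weights have been converted to an .npz checkpoint the way the examples do (the filename here is illustrative):

```python
import mlx.core as mx

# Load the converted checkpoint: a dict mapping parameter names to mx.array.
weights = mx.load("weights.npz")  # illustrative path, not the repo's exact layout

# Cast every tensor to fp16 before updating the model; this halves the
# memory footprint and the bytes moved per generated token.
weights = {k: v.astype(mx.float16) for k, v in weights.items()}
```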


briancpark commented on July 17, 2024

@rovo79 Correct. As per #18, the ANE API is closed source and not publicly accessible. I believe the only way to touch the ANE today is via CoreML.
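For context, reaching the ANE from Python typically goes through coremltools; a minimal sketch, where `traced_model` stands in for a hypothetical torch.jit-traced module:

```python
import coremltools as ct

# Convert the traced model to Core ML; ComputeUnit.ALL lets the runtime
# schedule work across CPU, GPU, and the Neural Engine as it sees fit.
mlmodel = ct.convert(traced_model, compute_units=ct.ComputeUnit.ALL)
mlmodel.save("model.mlpackage")  # illustrative output path
```

Even then, whether a given layer actually lands on the ANE is decided by Core ML, not by the caller.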


tcapelle commented on July 17, 2024

So, has anyone managed to run 7B inference using MLX on 16GB of RAM? Or do you need an Ultra to make any use of MLX?


dc-dc-dc commented on July 17, 2024

FP16/BF16 are both supported dtypes here.

The ops are lazy and will only execute the compute as needed, but if the default_device indicates the GPU, it should be using the Metal kernels.
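A quick sketch of checking both points (API names as in mlx.core):

```python
import mlx.core as mx

print(mx.default_device())  # Device(gpu, 0) on Apple silicon by default

x = mx.random.normal((1024, 1024)).astype(mx.float16)
y = x @ x   # lazy: no kernel has run yet
mx.eval(y)  # forces the Metal kernels to actually execute
```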


lin72h commented on July 17, 2024

> I am running the Llama/Mistral inference examples on my M1 Pro with 16GB of memory and getting around 80sec/token.

Are you using the 7B Llama and 7B Mistral models? Is that a typo? Do you mean 80ms/token or 80sec/token?


khiet1234 commented on July 17, 2024

GPU usage seems low, right?


arpan-dhatt commented on July 17, 2024

@tcapelle Can you please check your memory pressure when running the model? At 16GB of memory, you may be running out of wired memory since the example uses FP16 (weights total nearly 14.6GB) and inference takes a bit more than that.
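Back-of-the-envelope, assuming ~7.24B parameters for Mistral-7B:

```python
params = 7.24e9      # approximate Mistral-7B parameter count
bytes_per_param = 2  # FP16
print(params * bytes_per_param / 1e9)  # ~14.5 GB for the weights alone
```

Add the KV cache and activations on top of that, and a 16GB machine gets pushed into swapping, which would explain seconds-per-token speeds.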


briancpark commented on July 17, 2024

@tcapelle I tried the LLaMA example on my M1 Pro 32GB. It's indeed slow, and I think that's mostly due to the weights being FP32. I haven't checked the Mistral example yet, but this performance is expected if that one is also FP32. Transformer inference is typically memory-bound, and using FP32 is a bottleneck.

Did you make additional modifications to run the example in FP16, or did I miss something?
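A rough roofline sketch of why the dtype matters, assuming decode streams all weights once per token and taking the M1 Pro's ~200 GB/s unified-memory bandwidth as a ballpark figure:

```python
bandwidth = 200e9          # bytes/s, approximate M1 Pro memory bandwidth
w_fp32 = 7e9 * 4           # ~28 GB of weights at FP32
w_fp16 = 7e9 * 2           # ~14 GB of weights at FP16

print(w_fp32 / bandwidth)  # ~0.14 s/token lower bound at FP32
print(w_fp16 / bandwidth)  # ~0.07 s/token lower bound at FP16
```

Even the FP32 floor is far below 80 s/token, so speeds that slow point to memory pressure and swapping rather than raw bandwidth.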


rovo79 commented on July 17, 2024

Running inference with MLX won't touch the ANE in any way, right?
