
Comments (1)

DifferentialityDevelopment commented on June 22, 2024

You just separate them with spaces like so:
./dllama inference ... --workers 10.0.0.2:9998 10.0.0.3:9998 10.0.0.4:9998

You can also run several from the same IP, like so:
./dllama inference ... --workers 10.0.0.1:9996 10.0.0.1:9997 10.0.0.1:9998
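Each address you pass to --workers needs a worker process already listening on it. On each worker machine you start one with something roughly like this (the exact flags may vary between versions, so check the distributed-llama README for yours):

./dllama worker --port 9998 --nthreads 4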

As for 1: performance on workers that have unified memory would be faster due to their higher memory bandwidth.
The root node consumes a bit more memory than the workers, so I'd use the 36 GB MacBook as the root node. Otherwise the memory required to load the model is split roughly evenly across the nodes, but note that the node count has to be a power of 2, so 2, 4, 8 nodes etc. (which is why the examples above use 3 workers plus the root).
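To make that concrete, here is a minimal sketch for a two-MacBook setup (root plus one worker, i.e. 2 nodes). The model and tokenizer file names are placeholders and the flags are from memory of the README, so double-check them for your version.

On the second MacBook (worker):
./dllama worker --port 9998 --nthreads 8

On the 36 GB MacBook (root):
./dllama inference --model dllama_model_q40.m --tokenizer dllama_tokenizer.t --buffer-float-type q80 --prompt "Hello" --steps 64 --nthreads 8 --workers 10.0.0.2:9998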

Also, it's worth experimenting with the number of threads you specify. In my case I have 6 cores and 12 threads, but I get the best performance with 8 threads.
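If you want to find that sweet spot empirically, a quick sweep over thread counts is the easiest way. This is just an illustrative loop with placeholder model/tokenizer paths; compare the tokens/s each run reports:

for t in 4 6 8 10 12; do
  echo "--- nthreads=$t ---"
  ./dllama inference --model dllama_model_q40.m --tokenizer dllama_tokenizer.t --buffer-float-type q80 --prompt "Hello" --steps 32 --nthreads $t --workers 10.0.0.2:9998
done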

Larger models require more data to be transferred during each inference pass; something like a Q80 Llama 70B might already hit the limits of gigabit Ethernet, and at that point the switching capacity of your Ethernet switch also becomes a factor.
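A rough back-of-envelope check (all numbers here are illustrative assumptions, not measurements): gigabit Ethernet moves about 125 MB/s, so if each generated token requires some number of megabytes to be exchanged between nodes, the network alone caps your throughput at roughly 125 divided by that number, in tokens per second:

LINK_MBS=125          # ~1 Gbit/s expressed in MB/s
SYNC_MB_PER_TOKEN=25  # hypothetical per-token sync volume, measure your own setup
echo "network-bound ceiling: ~$((LINK_MBS / SYNC_MB_PER_TOKEN)) tokens/s"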

