Comments (13)
Could you add -f https://developer.intel.com/ipex-whl-stable-cpu to the pip install command?
from llm-on-ray.
I followed the instructions and executed the command below. I see it is already added.
pip install .[cpu] -f https://developer.intel.com/ipex-whl-stable-cpu -f https://download.pytorch.org/whl/torch_stable.html
We have not verified or supported this package on Windows. Could you try it on Linux?
@nkanike07 could you please check your pip version with 'pip -V'? Please let me know what you get.
You should get output similar to the below.
pip 20.2.4 from /usr/lib/python3.9/site-packages/pip (python 3.9)
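For reference, the same check can be run from Python, which is handy inside scripts or containers where you want to be sure which interpreter's pip is being inspected (a minimal sketch; it simply shells out to `python -m pip -V`):

```python
# Report the pip version bound to the current interpreter,
# equivalent to running `pip -V` on the command line.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip", "-V"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "pip 20.2.4 from /usr/lib/python3.9/site-packages/pip (python 3.9)"
```

Using `sys.executable -m pip` avoids picking up a different pip that happens to be first on PATH.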
I downgraded my pip to 20.2.4 and tried to install the dependencies, but faced the same issue again. FYR, screenshot below
I tried to upgrade my pip to your version, but still cannot reproduce your issue. Is it possible for you to install conda and create an empty conda environment to install llm-on-ray in? There might be an existing package in your environment that conflicts with the llm-on-ray packages.
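The clean-environment suggestion can be sketched as follows, assuming conda is installed (the environment name and Python version here are illustrative, not prescribed by the project):

```shell
# Create an empty environment so no pre-installed package can conflict.
conda create -n llm-on-ray python=3.9 -y
conda activate llm-on-ray
# Install from the repo root, pulling IPEX CPU wheels from Intel's index.
pip install .[cpu] -f https://developer.intel.com/ipex-whl-stable-cpu -f https://download.pytorch.org/whl/torch_stable.html
```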
Sure, will give it a try and update here.
@nkanike07 I just reproduced the issue on Windows. It is caused by IPEX (Intel Extension for PyTorch), which does not support Windows. So please switch to a Linux system. Thanks for reporting the issue.
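A minimal sketch of how code can guard against this, assuming the only constraint is that IPEX wheels are published for Linux (the fallback message is illustrative):

```python
# Gate the IPEX import on the platform, since intel_extension_for_pytorch
# does not ship Windows wheels.
import platform

def ipex_supported() -> bool:
    """Return True only on platforms where IPEX wheels are published."""
    return platform.system() == "Linux"

if ipex_supported():
    # import intel_extension_for_pytorch as ipex  # only attempted on Linux
    pass
else:
    print("IPEX is unavailable on this platform; falling back to plain PyTorch.")
```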
I tried downloading an Ubuntu Docker image and could successfully install all dependencies in it.
I'm currently facing one issue while running the inference.
When I ran the inference command below
$python inference/serve.py --config_file inference/models/gpt2.yaml --simple
I got the missing-module error below
ModuleNotFoundError: No module named 'inference.api_openai_backend'
Could someone help me here?
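One way to narrow down an error like this is to check importability before launching the server, which distinguishes a working-directory/path problem from a missing install (a diagnostic sketch; `module_available` is a hypothetical helper, not part of llm-on-ray):

```python
# Check whether a dotted module name is importable from the current
# environment and working directory, without actually importing it.
import importlib.util

def module_available(name: str) -> bool:
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is missing.
        return False

print(module_available("json"))                          # stdlib, always importable
print(module_available("inference.api_openai_backend"))  # depends on cwd / install
```

If the second check flips between True and False depending on the directory you run it from, the failure is a path-discovery issue rather than a broken installation.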
@nkanike07 We are working on it. Will update you soon.
@nkanike07 Unfortunately, we cannot reproduce your issue. Let's have a talk in Teams.
@nkanike07 I just fixed the issue and merged it to the main branch. Please check out the latest code and try again.
The root cause is that pip and setuptools behave differently on bare metal and in a container. I adjusted the way setuptools finds the project files. As verified, it now works both on bare metal and in a container.
Thanks for your report.
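The kind of fix described can be sketched as making package discovery explicit instead of relying on setuptools' default auto-discovery, which is what varies between environments (a hypothetical illustration; the actual change is in the merged commit, and the package names here mirror the repo layout only for demonstration):

```python
# Demonstrate explicit package discovery with setuptools: an include
# pattern collects exactly the intended packages regardless of what
# else sits in the project root.
import os
import tempfile
from setuptools import find_packages

# Build a throwaway project tree: <root>/llm_on_ray/inference/
root = tempfile.mkdtemp()
for pkg in ("llm_on_ray", os.path.join("llm_on_ray", "inference")):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

# Explicit include pattern instead of default discovery.
found = find_packages(where=root, include=["llm_on_ray*"])
print(sorted(found))  # ['llm_on_ray', 'llm_on_ray.inference']
```

Pinning `where` and `include` this way gives the same result whether setup runs on bare metal or inside a container.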