✨Welcome to the Data Intelligence Lab @ HKU!✨
🚀 Our Lab is Passionately Dedicated to Exploring the Forefront of Data Science & AI 👨💻
[WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
Home Page: https://arxiv.org/abs/2311.00423
License: Apache License 2.0
Hi, thanks for this great work.
I was trying to set up the virtual env, and when I run "pip install -r requirements.txt", I get version-related errors.
For example -
ERROR: Could not find a version that satisfies the requirement anaconda-client==1.11.0 (from versions: 1.1.1, 1.2.2)
ERROR: No matching distribution found for anaconda-client==1.11.0
I'm using Python 3.10.13 with Ubuntu 22.04.3 LTS.
I was wondering whether a specific Python version is required. It would be great if anyone could suggest how to resolve these errors.
Best Regards
Raj
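The pinned versions in requirements.txt look like a `pip freeze` dump from the authors' conda environment, which drags in conda-only packages (such as anaconda-client) and local build paths that PyPI cannot serve. A minimal cleanup sketch, assuming only the real PyPI dependencies are needed (the conda-only package list is a guess, not exhaustive):

```python
import re

# Packages that typically exist only in a conda environment, not on PyPI
# (an illustrative, non-exhaustive set).
CONDA_ONLY = {"anaconda-client", "anaconda-navigator", "conda", "conda-build"}

def clean_requirements(lines):
    """Drop '@ file:///...' local-path entries and conda-only packages.

    A hypothetical cleanup; the surviving pins may still need loosening
    to match your Python version.
    """
    kept = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "@ file://" in line:
            continue  # local conda build path, not a PyPI distribution
        name = re.split(r"[=<>@ ]", line, maxsplit=1)[0].lower()
        if name in CONDA_ONLY:
            continue
        kept.append(line)
    return kept
```

Running this over the repo's requirements.txt, writing the kept lines back, and re-running `pip install -r requirements.txt` is one way to get past the errors above.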
Thank you for sharing the code and data.
I have a query regarding the Netflix dataset mentioned in your paper. According to the paper, the dataset includes 17,366 items. However, upon examining the train.json, val.json, and test.json files, the highest item number I noted is 17,363, with only 8,413 unique items being represented. This seems to contradict the statistics cited in your paper.
Could you please provide some clarification on this discrepancy?
Additionally, it would be helpful to have a detailed explanation of your data processing methods. Furthermore, the datasets provided do not include user ratings. Are all interactions noted in the train.json, val.json, and test.json files indicative of user preferences for movies (i.e., movies with high user ratings, such as 4+)?
Thank you for your attention to these questions.
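For reference, the counts above can be reproduced directly from the split files. A minimal sketch, assuming each JSON file maps a user id to a list of item ids (which may not match the repo's exact layout):

```python
import json

def item_stats(splits):
    """Return (max item id, number of unique items) over interaction splits.

    `splits` holds already-loaded dicts of user id -> list of item ids;
    that layout is an assumption about train/val/test.json.
    """
    items = set()
    for data in splits:
        for item_list in data.values():
            items.update(item_list)
    return max(items), len(items)

# Usage against the repo's files (paths assumed):
# splits = [json.load(open(p)) for p in ("train.json", "val.json", "test.json")]
# print(item_stats(splits))
```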
Step 1: run
python3 gpt_ui_aug.py
Error message:
Traceback (most recent call last):
File "/Users/~/Documents/dev/ai/LLMRec-main/LLM_augmentation_construct_prompt/gpt_ui_aug.py", line 85, in <module>
candidate_indices = pickle.load(open(file_path + 'candidate_indices','rb'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'candidate_indices'
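The script reads `candidate_indices` relative to a hard-coded `file_path`, so this error usually means `file_path` does not point at the downloaded data directory. A small guard (the directory layout is an assumption) makes the failure self-explanatory:

```python
import os
import pickle

def load_pickle(file_path, name):
    """Load a pickled artifact, raising a descriptive error if it is missing."""
    full = os.path.join(file_path, name)
    if not os.path.exists(full):
        raise FileNotFoundError(
            f"{full} not found; set file_path to the directory containing "
            "the downloaded Netflix data before running gpt_ui_aug.py")
    with open(full, "rb") as f:
        return pickle.load(f)

# In gpt_ui_aug.py one could then write:
# candidate_indices = load_pickle(file_path, "candidate_indices")
```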
Great job! Starred!
However, the number of interactions in the Netflix or MovieLens-10M dataset is larger than in the dataset you use. How do you filter?
Looking forward to your answer! Thanks!
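A common way such datasets are shrunk is k-core filtering: repeatedly dropping users and items with fewer than k interactions until the graph is stable. This is a guess at the preprocessing, not the authors' confirmed procedure:

```python
from collections import Counter

def k_core_filter(pairs, k=10):
    """Keep only (user, item) pairs where both endpoints have >= k interactions.

    Iterates because removing a sparse user can push an item below k, and
    vice versa. The threshold k=10 is illustrative.
    """
    pairs = list(pairs)
    while True:
        u_cnt = Counter(u for u, _ in pairs)
        i_cnt = Counter(i for _, i in pairs)
        kept = [(u, i) for u, i in pairs if u_cnt[u] >= k and i_cnt[i] >= k]
        if len(kept) == len(pairs):
            return kept  # stable: no user or item fell below the threshold
        pairs = kept
```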
Hi, when I perform:
pip install -r requirements.txt
Error comes out as:
Processing /home/ktietz/src/ci/alabaster_1611921544520/work (from -r requirements.txt (line 6))
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/ktietz/src/ci/alabaster_1611921544520/work'
requirements.txt refers to some package paths that don't exist on my local PC.
I hope this message finds you well. I am reaching out regarding the gpt_i_attribute_generate_aug.py script included in the LLMRec repository on GitHub. According to the README file, this script is intended to be executed as part of the project's workflow. However, upon inspecting the script, it appears to consist mainly of function definitions, without a clear entry point or executable code block.
Could you kindly provide additional guidance or an updated version of the script that illustrates how to properly execute it or utilize these functions within the broader context of the project?
Your assistance in this matter would be greatly appreciated, as it would significantly enhance my understanding and usage of your valuable work.
Thank you for your time and contributions to this project. I look forward to your response.
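Until the authors clarify, a driver along these lines could wire the definitions together. The function `augment_item_attributes`, the CSV layout, and `llm_call` are assumptions about what the script's functions are meant to do, with `llm_call` standing in for the real GPT request:

```python
import csv

def augment_item_attributes(rows, llm_call):
    """Fill missing 'genre' fields using an LLM callback (a stand-in here)."""
    out = []
    for row in rows:
        if not row.get("genre"):
            # Ask the (stubbed) LLM for the missing attribute.
            row = {**row, "genre": llm_call(row["title"])}
        out.append(row)
    return out

# A minimal entry point the script could add (paths and helper assumed):
# if __name__ == "__main__":
#     with open("item_attribute.csv", newline="") as f:
#         rows = list(csv.DictReader(f, fieldnames=["id", "title", "genre"]))
#     rows = augment_item_attributes(rows, llm_call=my_gpt_request)
```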
Hi! Your code was using the MovieLens dataset, and it failed when using the Netflix dataset 😭. Could you please share the processed MovieLens data? Thanks in advance! 🙏🙏
Hello, I learned that you use LightGCN as the encoder for e_u and e_i in your work. Is the u_g_embeddings in the forward function of Model.py generated using LightGCN?
I only saw it coming from nn.Embedding. Or maybe I misunderstood; what encoder is used for the embedding vectors of e_u and e_i?
Thanks!!!
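For context, LightGCN does start from exactly such nn.Embedding lookup tables and refines them with parameter-free propagation over the normalized interaction graph. A NumPy sketch of the idea (not the repo's actual Model.py code):

```python
import numpy as np

def lightgcn_propagate(e_u, e_i, adj, n_layers=2):
    """Light graph convolution: propagate ID embeddings and average the layers.

    e_u, e_i are the initial user/item embedding tables; adj is the
    symmetrically normalized (users+items) x (users+items) adjacency.
    """
    e = np.concatenate([e_u, e_i], axis=0)   # stack users and items, as LightGCN does
    layers = [e]
    for _ in range(n_layers):
        e = adj @ e                          # neighborhood aggregation, no learned weights
        layers.append(e)
    out = np.mean(layers, axis=0)            # final embedding = mean over all layers
    return out[: e_u.shape[0]], out[e_u.shape[0]:]
```

So the nn.Embedding seen in the forward pass and a LightGCN encoder are not contradictory: the embedding table supplies layer 0, and propagation produces the graph-aware output.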
Hi there,
when I run python gpt_ui_aug.py, the error is:
Traceback (most recent call last):
File "/cling/xcosdaem495/LLMRec/LLM_augmentation_construct_prompt/gpt_ui_aug.py", line 86, in <module>
candidate_indices = pickle.load(open(file_path + 'candidate_indices','rb'))
FileNotFoundError: [Errno 2] No such file or directory: 'candidate_indices'
Can you please help?
I got a failure on the line below in gpt_ui_aug.py. Could you advise where item_attribute.csv is?
toy_item_attribute = pd.read_csv(file_path + 'item_attribute.csv', names=['id','title', 'genre'])
hi!
when I perform:
pip install -r requirements.txt
it always causes errors like:
ERROR: Could not find a version that satisfies the requirement anaconda-client==1.11.0 (from versions: 1.1.1, 1.2.2)
ERROR: No matching distribution found for anaconda-client==1.11.0
and many other packages are not the correct version as well.
How can I solve it?
Hi,
Thanks for sharing your wonderful work.
I'm interested in your work and want to reproduce it, but there are some '@ file' directives in the requirements.txt file. Could you please fix them?
Thank you.
Hello, I would like to ask: what is the expected layout of the LLMRec\data\netflix\ dataset directory in your code? I downloaded the Netflix dataset you provided, and there are two archives, one named netflix and the other netflix_image_text. Which one should I use? Could you please describe your dataset file directory? Thank you very much!
There are two errors when running the file gpt_user_profiling.py:
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x145b449d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='llms-se.baidu-int.com', port=8200): Max retries exceeded with url: /chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x145b449d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
Do you know how to solve these problems? 🙏🙏 Thank you!
When running the gpt_user_profiling file, the line openai.api_base = "http://llms-se.baidu-int.com:8200" throws an error. Where should I find this URL? Thank you for your answer.
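The hard-coded base URL http://llms-se.baidu-int.com:8200 is an internal proxy that is unreachable outside the authors' network, which is exactly what the "nodename nor servname provided" DNS error indicates. One workaround (a sketch; the environment-variable convention is an assumption, not the repo's code) is to point the client at the public endpoint instead:

```python
import os

# The public OpenAI endpoint; the repo's internal proxy URL will not resolve
# outside the authors' network.
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base():
    """Prefer OPENAI_API_BASE from the environment, else the public endpoint."""
    return os.environ.get("OPENAI_API_BASE", DEFAULT_API_BASE)

# In gpt_user_profiling.py, instead of the hard-coded internal URL:
# openai.api_base = resolve_api_base()
# openai.api_key = os.environ["OPENAI_API_KEY"]
```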