
yihong-chen / neural-collaborative-filtering

470 stars · 152 forks · 14.31 MB

PyTorch version of neural collaborative filtering

Python 29.15% Jupyter Notebook 70.85%
collaborative-filtering deep-learning matrix-factorization python recommender-systems

neural-collaborative-filtering's Introduction

Welcome! 👋

My name is Yihong Chen. I research AI knowledge acquisition, specifically how different AI systems can learn to abstract, represent, and use concepts/symbols efficiently.

I am open to collaborations on topics related to embedding learning, link prediction, and language modeling. If you would like to get in touch, you can reach me by emailing yihong-chen AT outlook DOT com, or simply by booking a Zoom meeting with me.

Looking for Some Inspiration?

💥 Mar 2024, Quanta Magazine covered our research on periodic embedding forgetting. Check out the article here.

💥 Dec 2023, I will present our forgetting paper at NeurIPS 2023. Check out the poster here.

💥 Sep 2023, our latest work, Improving Language Plasticity via Pretraining with Active Forgetting, was accepted at NeurIPS 2023!

💥 Sep 2023, I presented our latest work on forgetting at the IST-Unbabel seminar.

💥 Jul 2023, I presented our latest work on forgetting in language modelling at the ELLIS Unconference 2023. The slides are available here. Feel free to leave your comments.

💥 Jul 2023, discover the power of forgetting in language modelling! Our latest work, Improving Language Plasticity via Pretraining with Active Forgetting, shows how pretraining a language model with active forgetting can help it quickly learn new languages. You'll be amazed by the model plasticity imbued by pretraining with forgetting. Check it out :)

💥 Nov 2022, our paper, REFACTOR GNNS: Revisiting Factorisation-based Models from a Message-Passing Perspective, will appear at NeurIPS 2022! If you're interested in understanding why factorisation-based models can be viewed as special GNNs, and how to make them usable on new graphs, check it out!

💥 Jun 2022, if you're looking for a hands-on repo to start experimenting with link prediction, check out our repo ssl-relation-prediction. Simple code, easy to hack 🚀

neural-collaborative-filtering's People

Contributors

ruihongqiu · yihong-chen


neural-collaborative-filtering's Issues

Test Dataloader for large dataset.

Maybe a test dataloader that iterates over the test set would be more user-friendly for large datasets? Simply treating all test data as one huge batch can cause OOM errors.

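What the issue asks for can be sketched in a few lines — a hypothetical helper for chunked evaluation, not part of the repo's API:

```python
def batched(test_users, test_items, batch_size=1024):
    # Yield the test set in fixed-size chunks instead of one huge batch,
    # so evaluation memory stays bounded regardless of dataset size.
    for start in range(0, len(test_users), batch_size):
        yield (test_users[start:start + batch_size],
               test_items[start:start + batch_size])

chunks = list(batched(list(range(2500)), list(range(2500)), batch_size=1024))
```

Each chunk can then be scored on the GPU and the HR/NDCG statistics accumulated across chunks.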

Missing layer and training workflow

Hi,

Thank you for the work you have done. It is extremely useful and the code is very clean. I wish every paper implementation could be like this :) I have noticed three major discrepancies from what is written in the original paper. Could you explain whether they are on purpose and, if so, what the reasoning behind them is?

  1. I think that in the NeuMF architecture you are missing one linear layer between the embeddings and the layer that combines GMF and MLP; according to the paper's schema there should be one.

  2. I am a bit lost with loading pretrained weights in the MLP. I see that you offer the possibility to load pretrained GMF embeddings into the MLP model. I believe that, according to the paper, they are separate embeddings and are not mixed in the original work. Does this change provide a noticeable improvement?
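For reference, the fusion step under discussion can be sketched in plain Python — a simplified reading of the paper's final NeuMF layer, not the repo's actual code, with made-up names:

```python
import math

def neumf_score(gmf_vec, mlp_vec, weights, bias):
    # Concatenate the GMF element-wise product vector with the last MLP
    # hidden layer, then apply one affine output layer and a sigmoid.
    concat = list(gmf_vec) + list(mlp_vec)
    z = sum(w * x for w, x in zip(weights, concat)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Whether an extra linear layer belongs between the embeddings and this fusion is exactly the question the issue raises.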

Add LICENSE.txt

Could you please add a license here?

Thanks for your work on this project!

The code implementation does not match the original paper

The function _sample_negative in data.py appears to be incorrect. It currently uses all of a user's ratings to generate negative samples, so each user's test items are excluded from the negative pool. However, the test items should still be treated as negative samples in the training set.

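The sampling the issue argues for could look like this — a hypothetical helper, not the repo's _sample_negative — where only the user's *training* interactions are excluded from the candidate pool:

```python
import random

def sample_negatives(all_items, train_items, k, rng=None):
    # Exclude only the training interactions; held-out test items remain
    # in the candidate pool, as the issue says they should.
    rng = rng or random.Random(0)
    candidates = list(all_items - train_items)
    return rng.sample(candidates, k)

negs = sample_negatives(set(range(100)), {1, 2, 3}, 4)
```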
'checkpoints/gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'

    mldl@ub1604:~/ub16_prj/neural-collaborative-filtering/src$ python3 train.py
    Range of userId is [0, 6039]
    Range of itemId is [0, 3705]
    MLP(
      (embedding_user): Embedding(6040, 8)
      (embedding_item): Embedding(3706, 8)
      (fc_layers): ModuleList(
        (0): Linear(in_features=16, out_features=64, bias=True)
        (1): Linear(in_features=64, out_features=32, bias=True)
        (2): Linear(in_features=32, out_features=16, bias=True)
        (3): Linear(in_features=16, out_features=8, bias=True)
      )
      (affine_output): Linear(in_features=8, out_features=1, bias=True)
      (logistic): Sigmoid()
    )
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        engine = MLPEngine(config)
      File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/mlp.py", line 63, in __init__
        self.model.load_pretrain_weights()
      File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/mlp.py", line 47, in load_pretrain_weights
        resume_checkpoint(gmf_model, model_dir=config['pretrain_mf'], device_id=config['device_id'])
      File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/utils.py", line 14, in resume_checkpoint
        map_location=lambda storage, loc: storage.cuda(device=device_id))  # ensure all storage are on gpu
      File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 301, in load
        f = open(f, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'
    mldl@ub1604:~/ub16_prj/neural-collaborative-filtering/src$

size mismatch for fc_layers

I want to compute the cosine similarity between items, so I need their feature vectors. However, NeuMF has a complicated structure; could you tell me which tensor in the code is the feature vector of the items?

NeuMF algorithm

Dear Yihong Chen,

I am very interested in this project. Thank you for sharing it.

I tried running the NeuMF algorithm using the default config that you provide in your code (see below), but I am getting poor results.

Do I need to change the parameters?
Do you maybe have a model that you already trained, and you could share with me?

Thanks,
Nir Ailon

    neumf_config = {'alias': 'pretrain_neumf_factor8neg4',
                    'num_epoch': 200,
                    'batch_size': 1024,
                    'optimizer': 'adam',
                    'adam_lr': 1e-3,
                    'num_users': 6040,
                    'num_items': 3706,
                    'latent_dim_mf': 8,
                    'latent_dim_mlp': 8,
                    'num_negative': 4,
                    'layers': [16, 32, 16, 8],  # layers[0] is the concat of latent user vector & latent item vector
                    'l2_regularization': 0.01,
                    'use_cuda': True,
                    'device_id': 7,
                    'pretrain': True,
                    'pretrain_mf': 'checkpoints/{}'.format('gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'),
                    'pretrain_mlp': 'checkpoints/{}'.format('mlp_factor8neg4_Epoch100_HR0.5606_NDCG0.2463.model'),
                    'model_dir': 'checkpoints/{}_Epoch{}_HR{:.4f}_NDCG{:.4f}.model'
                    }

question about the version of pytorch

Hi,

Thanks for kindly sharing this.
I want to reproduce your results on my machine, but my training results are very poor:
the evaluated HR and NDCG in some epochs are 0.0000. Do you have any idea what could cause this? Also, could you tell me which PyTorch version you used on your side?

Thanks.

How to apply NCF to datasets that only have the number of interactions?

As the title states, how could this be applied to a dataset that only has the number of interactions between a user and an item? MovieLens has ratings, which are explicit feedback, but how could this model be applied to a dataset like the Audioscrobbler dataset, whose implicit feedback is the number of times a user listened to an artist? Here is an example of recommendations implementing ALS on that dataset: http://www.gousios.gr/courses/bigdata/audioscrobbler.html
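One common route, sketched here as a general technique rather than a feature of this repo, is to binarize the interaction counts into implicit-feedback labels (the raw counts can optionally be kept as confidence weights, as in ALS):

```python
def binarize(interaction_counts, threshold=1):
    # A user-item pair with at least `threshold` interactions becomes a
    # positive (label 1); unobserved pairs are later sampled as negatives
    # during training, just as with binarized explicit feedback.
    return {(u, i): 1
            for (u, i), c in interaction_counts.items()
            if c >= threshold}

labels = binarize({(0, 1): 5, (0, 2): 0, (1, 1): 1})
```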

The NDCG metric

The NDCG metric here is defined as 1/log2(rank_i + 1) according to Xiangnan He's paper. Therefore, the formula in metrics.py should be log(2)/log(x+1) instead of log(2)/log(x+2), because the rank here starts from 1. Please update your code, thanks!
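The formula the issue proposes, for a leave-one-out setup with a single relevant item at a 1-indexed rank, can be written out directly (a sketch of the issue's claim, not the repo's metrics.py):

```python
import math

def ndcg_at_rank(rank):
    # 1 / log2(rank + 1), expressed as log(2) / log(rank + 1).
    # With a 1-indexed rank, the top position scores exactly 1.0.
    return math.log(2) / math.log(rank + 1)
```

With rank starting from 1 this yields 1.0 at the top position, whereas log(2)/log(rank + 2) would not.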

AssertionError

    AssertionError                            Traceback (most recent call last)
         82 # Specify the exact model
         83 config = gmf_config
    ---> 84 engine = GMFEngine(config)
         85 # config = mlp_config
         86 # engine = MLPEngine(config)

    1 frames
    /content/utils.py in use_cuda(enabled, device_id)
         19 def use_cuda(enabled, device_id=0):
         20     if enabled:
    ---> 21         assert torch.cuda.is_available(), 'CUDA is not available'
         22         torch.cuda.set_device(device_id)
         23

    AssertionError: CUDA is not available

The GPU utilization is low

When I run the code on an RTX 4090, GPU utilization is always 0%, and I haven't changed the code. Is that expected?

L2 regularization

Hi,
I notice that in your config code for MLP and NeuMF you set l2_regularization, but I can't find L2 regularization in your model or loss implementation. Could you help me understand how the L2 regularization is implemented? Thanks.
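For illustration only, an explicit L2 penalty added to a loss looks like this (a plain-Python sketch; whether the repo instead passes l2_regularization to the optimizer as weight decay is something to confirm in its training code):

```python
def l2_penalty(param_groups, lam):
    # lam times the sum of squared parameter values, added to the data loss.
    return lam * sum(w * w for params in param_groups for w in params)

data_loss = 0.25
# Penalty: 0.01 * (0.5**2 + (-0.5)**2 + 1.0**2) = 0.015
total_loss = data_loss + l2_penalty([[0.5, -0.5], [1.0]], 0.01)
```

With Adam, an equivalent effect is commonly obtained via the optimizer's weight_decay argument rather than an explicit loss term.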
