✨ Welcome to the Data Intelligence Lab @ HKU! ✨
Our lab is passionately dedicated to exploring the forefront of data science & AI.
[ICLR'2023] "LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation"
Home Page: https://arxiv.org/abs/2302.08191
Hi, I am a beginner; I have not read many papers, so my question may be naive, but I still hope you can resolve my confusion. In most papers I have seen, InfoNCE seems to use only positive samples, with negative samples used only in the BPR loss of the recommendation task. But this paper's InfoNCE code uses both positive and negative samples. I tried changing iid to pos (i.e., using only positive samples); the performance dropped slightly but remained good. What is the difference between these two approaches, and which one is better?
Dear authors,
I am interested in the simple yet effective approach you propose. In February, I noticed this paper and downloaded its code from https://anonymous.4open.science/r/LightGCL/. Recently I wanted to make some improvements based on this work, but I noticed that the code of the initial version may be incorrect. For the InfoNCE loss, the code implementation is as follows:
u_mask = (torch.rand(len(uids))>0.5).float().cuda(self.device)
gnn_u = nn.functional.normalize(self.Z_u_list[l][uids],p=2,dim=1)
hyper_u = nn.functional.normalize(self.G_u_list[l][uids],p=2,dim=1)
hyper_u = self.Wsl-1
pos_score = torch.exp((gnn_u*hyper_u).sum(1)/self.temp)
neg_score = torch.exp(gnn_u @ hyper_u.T/self.temp).sum(1)
loss_s_u = ((-1 * torch.log(pos_score/(neg_score+1e-8) + 1e-8))*u_mask).sum()
In my opinion, neg_score should be "torch.exp(gnn_u @ self.G_u_list[l].T/self.temp).sum(1)" instead of the code above.
I am pretty confused about whether the code is correct. If it is not, how can you get such state-of-the-art performance?
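To make the contrast concrete, here is a small NumPy sketch (toy data and names of my own, not the repo's code) of the two denominators discussed in this thread: negatives drawn only from the mini-batch, as in the quoted snippet, versus negatives over all user embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # Row-wise L2 normalization, mirroring nn.functional.normalize(p=2, dim=1)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

n_all, n_batch, d, temp = 100, 8, 16, 0.2
gnn_all = l2_normalize(rng.normal(size=(n_all, d)))    # one view, all users
hyper_all = l2_normalize(rng.normal(size=(n_all, d)))  # the other view
batch = rng.choice(n_all, size=n_batch, replace=False)

gnn_u, hyper_u = gnn_all[batch], hyper_all[batch]
pos = np.exp((gnn_u * hyper_u).sum(1) / temp)          # aligned positive pairs

# (a) denominator over the mini-batch only, as in the quoted snippet
neg_batch = np.exp(gnn_u @ hyper_u.T / temp).sum(1)
# (b) denominator over all users, as the issue suggests
neg_all = np.exp(gnn_u @ hyper_all.T / temp).sum(1)

loss_batch = -np.log(pos / neg_batch).mean()
loss_all = -np.log(pos / neg_all).mean()
# (b) sums strictly more positive terms, so its denominator and loss are larger
```

Both are valid InfoNCE variants; (b) simply uses a larger negative pool, which usually tightens the contrastive bound at extra compute cost.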
I look forward to your reply.
In Section 4.1.1 DATASETS AND EVALUATION PROTOCOLS, it says: "we split the datasets into training, validation and testing sets with a ratio of 7:2:1."
However, in the code, I find that the raw data only contains two files, trnMat.pkl and tstMat.pkl. Also, the implementation directly uses the testing set as the validation set during training. Is there anything I am missing when reading the code?
# sample pos and neg
pos = []
neg = []
iids = set()
for i in range(len(batch_users)):
    u = batch_users[i]
    u_interact = train_csr[u].toarray()[0]
    positive_items = np.random.permutation(np.where(u_interact==1)[0])
    negative_items = np.random.permutation(np.where(u_interact==0)[0])
    item_num = min(max_samp, len(positive_items))
    positive_items = positive_items[:item_num]
    negative_items = negative_items[:item_num]
    pos.append(torch.LongTensor(positive_items).cuda(torch.device(device)))
    neg.append(torch.LongTensor(negative_items).cuda(torch.device(device)))
    iids = iids.union(set(positive_items))
    iids = iids.union(set(negative_items))
In lines 132-137 of main.py, the model's sampling method is batched, and within each training epoch the order of users is fixed rather than random. This can easily lead to overfitting.
I modified the model's code (changing its batched sampling method, following the sampling methods of the RecBole framework and of LightGCN). As a result, on the Yelp dataset recall@20 improved noticeably to 0.0938, while ndcg@20 dropped to 0.0508. However, in a side-by-side comparison, LightGCL then performed even worse than LightGCN, by around 0.02.
I would like to ask whether the authors also used this fixed-order "batched" sampling when running the comparison experiments with the other models; that would cause the metrics reported in the paper for the other baseline models to be far below normal.
P.S. One more point: after changing the model's sampling method, with all other parameters unchanged, the model suffered gradient explosion. I believe this is because the weight W added in the contrastive learning part is not normalized. I therefore tried both removing W and increasing temp to avoid the gradient explosion, but the results were still poor.
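The fix described above can be sketched as follows (illustrative names and sizes, not the repo's actual training loop): reshuffle the user order at the start of every epoch before slicing mini-batches.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, batch_size = 10, 4

for epoch in range(2):
    # Fresh random order each epoch, so mini-batches are never built from
    # the same fixed user ordering twice
    order = rng.permutation(n_users)
    batches = [order[i:i + batch_size] for i in range(0, n_users, batch_size)]
    # ... train on each batch of user ids ...
```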
Can I ask how you preprocessed the Yelp dataset?
Thank you!
Hi. I read your paper with great interest. I have a question about the dataset splits and would be grateful if you could answer it when you have time.
In the paper, each dataset has the following interactions:
Yelp: 1,517,326
Gowalla: 1,172,425
ML-10M: 9,988,816
Amazon-book: 2,240,156
Tmall: 2,357,450
and all datasets are split into training, validation, and testing sets with a ratio of 7:2:1, which means the training, validation, and testing sets should have the following interaction counts:
Yelp: 1,062,128 / 303,465 / 151,733 (70%, 20%, 10%)
Gowalla: 820,697 / 234,485 / 117,243 (70%, 20%, 10%)
ML-10M: 6,992,171 / 1,997,763 / 998,882 (70%, 20%, 10%)
Amazon-book: 1,568,109 / 448,031 / 224,016 (70%, 20%, 10%)
Tmall: 1,650,215 / 471,490 / 235,745 (70%, 20%, 10%)
But I found that the uploaded datasets (trnMat.pkl and tstMat.pkl) have the following interactions:
Yelp: 1069128 / - / 305466 (70%, -, 20%)
Gowalla: 1172425 / - / 130270 (100%, -, 10%)
ML-10M: 6999171 / - / 1999761 (70%, -, 20%)
Amazon-book: 2240156 / - / 640045 (100%, -, 30%)
Tmall: 2357450 / - / 261939 (100%, -, 10%)
which is far different from the paper. Am I missing something important?
I look forward to hearing back from you.
In an ablation experiment, I found that after removing the SVD decomposition and contrasting each node with itself, the results are the same; moreover, --lambda1 is as small as 1e-7. Does cl_loss really have any effect?
Hello, I am the author of SimGCL, and I often follow your group's work. Recently I also planned to reproduce LightGCL.
However, I found that in the LightGCL paper, SimGCL seems to be completely underfitted or run with wrongly chosen hyperparameters. I re-tested SimGCL using the yelp and gowalla datasets provided in this repo. Keeping the general settings consistent with the paper, I picked, based on experience, the combination lambda_cl = 0.2, epsilon = 0.1, tau = 0.2 (the latter two values are near-optimal SimGCL hyperparameters for most datasets). After only the second pass over all training samples, the results already exceeded LightGCL's reported results, and far exceeded SimGCL's results in your paper. My results are as follows:
Yelp: SimGCL, 2nd epoch - Recall@20: 0.0962, NDCG@20: 0.0833
Yelp: SimGCL, converged (9th epoch) - Recall@20: 0.1048, NDCG@20: 0.0903
Yelp: SimGCL, as reported in the paper - Recall@20: 0.0718, NDCG@20: 0.0615
Yelp: LightGCL, as reported in the paper - Recall@20: 0.0793, NDCG@20: 0.0668
Gowalla: SimGCL, 2nd epoch - Recall@20: 0.1739, NDCG@20: 0.1060
Gowalla: SimGCL, converged (4th epoch) - Recall@20: 0.1893, NDCG@20: 0.1145
Gowalla: SimGCL, as reported in the paper - Recall@20: 0.1357, NDCG@20: 0.0818
Gowalla: LightGCL, as reported in the paper - Recall@20: 0.1578, NDCG@20: 0.0935
The paper states: "To ensure a fair comparison, we tune the hyperparameters of all the baselines within the ranges suggested in the original papers." But I found that your experiments may have actually used lambda_cl = 0.01, tau = 0.1. lambda_cl = 0.01 was already shown, in the hyperparameter sensitivity experiments of the SimGCL paper, to be a poor choice on its datasets. The SimGCL paper also states: "In SimGCL and SGL, we empirically let the temperature τ = 0.2, and this value is also reported as the best in the original paper of SGL." The parameter tau is in fact quite sensitive; in my experience, reducing it from 0.2 to 0.1 causes large fluctuations in the results. It seems the LightGCL experiments did not refer to the SimGCL paper.
All of the SimGCL results above were obtained through SELFRec. If you are interested, you could compare whether our SimGCL implementations differ in some way that led to the issues in the paper.
------------------------- UPDATE -------------------------
I tried the combination lambda_cl = 0.2, epsilon = 0.1, tau = 0.2 on your SSLRec; on the yelp dataset, after two epochs the results were:
Recall@20: 0.0929 NDCG@20: 0.0791
I did not run it to completion, but even after two epochs the results are better than the SimGCL and LightGCL results reported in the paper.
I then also used the default SimGCL parameters in SSLRec (lambda_cl = 0.01, epsilon = 0.2, tau = 0.1); three epochs are recorded below:
{'optimizer': {'name': 'adam', 'lr': 0.001, 'weight_decay': 0}, 'train': {'epoch': 100, 'batch_size': 256, 'save_model': False, 'loss': 'pairwise', 'test_step': 1}, 'test': {'metrics': ['recall', 'ndcg'], 'k': [10, 20], 'batch_size': 256}, 'data': {'type': 'general_cf', 'name': 'yelp', 'user_num': 29601, 'item_num': 24734}, 'model': {'name': 'simgcl', 'keep_rate': 1.0, 'layer_num': 2, 'reg_weight': 1e-06, 'cl_weight': 0.01, 'temperature': 0.1, 'embedding_size': 32, 'eps': 0.2}, 'tune': {'enable': False}, 'device': 'cuda'}
Training Recommender: 100%|██████████| 4177/4177 [03:48<00:00, 18.26it/s]
[Epoch 0 / 100] bpr_loss: 0.2147 reg_loss: 0.0304 cl_loss: 0.1139
[recall@10: 0.0439 recall@20: 0.0728 ] [ndcg@10: 0.0533 ndcg@20: 0.0621 ]
Training Recommender: 100%|██████████| 4177/4177 [03:48<00:00, 18.29it/s]
[Epoch 1 / 100] bpr_loss: 0.1037 reg_loss: 0.0536 cl_loss: 0.1022
[recall@10: 0.0478 recall@20: 0.0810 ] [ndcg@10: 0.0582 ndcg@20: 0.0686 ]
Training Recommender: 100%|██████████| 4177/4177 [03:48<00:00, 18.28it/s]
[Epoch 2 / 100] bpr_loss: 0.0929 reg_loss: 0.0623 cl_loss: 0.0932
[recall@10: 0.0499 recall@20: 0.0840 ] [ndcg@10: 0.0605 ndcg@20: 0.0711 ]
After the second epoch, this setting had also already surpassed LightGCL's reported results. It is also clear that this parameter combination is indeed worse than the one recommended in the SimGCL paper.
In parser1 the default is 'yelp'; but after I change yelp (to the gowalla or ml10m dataset), it still runs on the yelp dataset. Could the authors explain why, and how to switch to running on the other datasets?
Very nice work! However, the time complexity I computed for the graph convolution of LightGCL is O(2ELd + 2IJLd), which does not match Table 2 in the paper. I think the reconstructed graph is fully connected (a dense matrix) and therefore cannot use sparse matrix multiplication for acceleration. Can you help me figure this out? Thanks!
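For intuition, a back-of-envelope comparison (toy Yelp-scale numbers of my own choosing): a dense I x J reconstructed graph makes one propagation layer far more expensive than sparse propagation, whereas keeping the rank-q SVD factors and multiplying them in sequence stays cheap. My guess is that the paper's complexity figure assumes this factored multiplication rather than materializing the dense graph, though the authors would have to confirm.

```python
# E = observed edges, I/J = users/items, d = embedding dim, q = SVD rank
# (illustrative magnitudes, not the exact dataset statistics)
E, I, J, d, q = 1_500_000, 30_000, 25_000, 32, 5

sparse_cost = E * d              # propagate over sparse A: ~nnz(A) * d mult-adds
dense_cost = I * J * d           # if U_q S_q V_q^T were materialized densely
lowrank_cost = (I + J) * q * d   # multiply the low-rank factors right-to-left instead

print(dense_cost / sparse_cost)  # -> 500.0: dense is ~500x the sparse cost here
```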
On Yelp, the performance reported in the paper is low (Recall@20: 0.0793, NDCG@20: 0.0668, Recall@40: 0.1292, NDCG@40: 0.0852), while the actual run is high (Recall@20: 0.1006, NDCG@20: 0.0865, Recall@40: 0.1598, NDCG@40: 0.1081).
The details of the actual run are as follows:
Test of epoch 96 : Recall@20: 0.10051406435437939 Ndcg@20: 0.08627444271978929 Recall@40: 0.15986645365267868 Ndcg@40: 0.10792282233659943
100%|██████████| 262/262 [00:24<00:00, 10.64it/s]
Epoch: 97 Loss: 2.5118775304037197 Loss_r: 0.3031725097476071 Loss_s: 2.2050413157193716
100%|██████████| 262/262 [00:24<00:00, 10.68it/s]
Epoch: 98 Loss: 2.511937270637687 Loss_r: 0.30322049376163773 Loss_s: 2.205053930974189
100%|██████████| 262/262 [00:24<00:00, 10.59it/s]
Epoch: 99 Loss: 2.5120028903466145 Loss_r: 0.30331670953572254 Loss_s: 2.205021769945858
100%|██████████| 116/116 [00:08<00:00, 13.75it/s]
-------------------------------------------
Test of epoch 99 : Recall@20: 0.1005596274687573 Ndcg@20: 0.08650433827615736 Recall@40: 0.1597782188737718 Ndcg@40: 0.10812958940033758
-------------------------------------------
Final test: Recall@20: 0.1005596274687573 Ndcg@20: 0.08650433827615736 Recall@40: 0.1597782188737718 Ndcg@40: 0.10812958940033758
On ML-10M, the performance reported in the paper is high (Recall@20: 0.2613, NDCG@20: 0.3106, Recall@40: 0.3799, NDCG@40: 0.3387), while the actual run is low (Recall@20: 0.2297, NDCG@20: 0.2841, Recall@40: 0.3164, NDCG@40: 0.3005).
The details of the actual run are as follows:
Epoch: 98 Loss: 2.513269269026951 Loss_r: 0.3047634800356546 Loss_s: 2.18953038922104
100%|██████████| 1709/1709 [07:04<00:00, 4.03it/s]
Epoch: 99 Loss: 2.5132692800480556 Loss_r: 0.3047331753462082 Loss_s: 2.1895613562928227
100%|██████████| 273/273 [00:23<00:00, 11.86it/s]
-------------------------------------------
Test of epoch 99 : Recall@20: 0.22966711970088424 Ndcg@20: 0.28407235346796683 Recall@40: 0.31642916993719605 Ndcg@40: 0.30047428117834374
-------------------------------------------
Final test: Recall@20: 0.22966711970088424 Ndcg@20: 0.28407235346796683 Recall@40: 0.31642916993719605 Ndcg@40: 0.30047428117834374
How can I build my own dataset? Thanks!
While running the code, a bug appeared:
torch._C._LinAlgError: cusolver error: CUSOLVER_STATUS_EXECUTION_FAILED, when calling cusolverDnXgeqrf( handle, params, m, n, CUDA_R_32F, reinterpret_cast<void*>(A), lda, CUDA_R_32F, reinterpret_cast<void*>(tau), CUDA_R_32F, reinterpret_cast<void*>(bufferOnDevice), workspaceInBytesOnDevice, reinterpret_cast<void*>(bufferOnHost), workspaceInBytesOnHost, info)
This error may appear if the input matrix contains NaN.
But we have no idea how to fix it.
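As the cusolver message hints, a quick diagnostic is to check the matrix fed to the decomposition for NaN/Inf before calling it. A minimal sketch (assuming the failure really does come from non-finite entries; the toy matrix and the nan_to_num fix are illustrative, not the repo's code):

```python
import numpy as np

# Toy matrix with a NaN entry; on real data, NaNs often arise from
# dividing by zero degrees during adjacency normalization
adj = np.array([[1.0, 0.0], [0.0, np.nan]])

if not np.isfinite(adj).all():
    # Replace non-finite entries with zeros (a common quick fix;
    # verify it is appropriate for your data before relying on it)
    adj = np.nan_to_num(adj, nan=0.0, posinf=0.0, neginf=0.0)

u, s, vt = np.linalg.svd(adj)  # now safe to decompose
```

The same isfinite check applied just before torch.svd_lowrank should locate where the NaNs first enter the pipeline.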
Hi. I read your paper with great interest. After running your code myself, I have two questions. I would be grateful if you could answer them when you have time.
|   |   | Recall@20 | Recall@40 | NDCG@20 | NDCG@40 |
|---|---|---|---|---|---|
| Yelp | reported | 0.0793 | 0.1292 | 0.0668 | 0.0778 |
|   | reproduced | 0.1001 | 0.1587 | 0.0868 | 0.1080 |
| Gowalla | reported | 0.1578 | 0.2245 | 0.0935 | 0.1108 |
|   | reproduced | 0.2124 | 0.2993 | 0.1236 | 0.1464 |
Thank you for your time.
I look forward to hearing back from you.
Hello. Regarding the graph convolution, the formula you give in the paper is z_{i,l}^{(u)} = σ(p(Ã_{i,:}) E_{l-1}^{(v)}), e_{i,l}^{(u)} = z_{i,l}^{(u)} + e_{i,l-1}^{(u)}.
But your code implementation is:
self.E_u_list[layer] = self.Z_u_list[layer]
self.E_i_list[layer] = self.Z_i_list[layer]
Is there a discrepancy here? Should self.E_u_list[layer-1] be added?
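The difference between the two readings can be sketched as follows (illustrative shapes and names only, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
E_prev = rng.normal(size=(n, d))  # e_{l-1}: previous-layer embeddings
Z = rng.normal(size=(n, d))       # z_l: output of the graph convolution

E_code = Z             # as in the quoted implementation: plain assignment
E_paper = Z + E_prev   # as in the paper's formula: e_l = z_l + e_{l-1}
```

The residual term e_{l-1} is what the quoted assignment drops; whether the final embeddings differ in practice also depends on how the per-layer lists are aggregated afterwards.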
The performance in the paper is lower than what the code achieves. For example, on Gowalla the actual run gives Recall@20: 0.2103, NDCG@20: 0.1223, Recall@40: 0.2991, NDCG@40: 0.1453, but the paper reports lower numbers (Recall@20: 0.1578, NDCG@20: 0.0935, Recall@40: 0.2245, NDCG@40: 0.1108). Has the paper not been updated?
The original LightGCN paper also reports recall@20 and ndcg@20 on the gowalla dataset with 2 convolution layers. Why is the gap so large between the LightGCN results shown in your paper and those in the original LightGCN paper?
Hello, I would like to ask about the comparison experiments: when you ran the baseline algorithm HCCF, how did you set its parameters on each dataset? (I am currently running it on the Gowalla dataset, but the results are much worse.)
On a custom dataset, neg_score becomes Inf, causing the loss to be NaN. The main buggy code is below:
Lines 78 to 80 in 5590453
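Assuming the Inf comes from exp(score/temp) overflowing, one standard remedy is to rewrite -log(pos/neg) with the log-sum-exp trick instead of exponentiating directly. A minimal NumPy sketch (toy data, positives taken on the diagonal for illustration; not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
temp = 0.2
gnn_u = rng.normal(size=(4, 8))
hyper_u = rng.normal(size=(4, 8))

scores = gnn_u @ hyper_u.T / temp  # all pairwise similarity scores
pos = np.diag(scores)              # aligned pairs as positives

# naive: -log(exp(pos) / exp(scores).sum(1)) can overflow to Inf.
# stable: subtract the row-wise max before exponentiating (log-sum-exp)
m = scores.max(axis=1, keepdims=True)
log_neg = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
loss = (log_neg - pos).mean()      # identical value, no overflow
```

In PyTorch the same rewrite is torch.logsumexp(scores, dim=1) - pos, which stays finite even for large score/temp ratios.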
When building a dataset, how can user or item attributes be incorporated? Thanks!
When reproducing the baselines, GCA's performance differs from the results in the paper, so I would like to ask whether the authors could share the GCA code used in the paper.
Are the results reported in the paper averaged over multiple runs? Or how exactly were they obtained?
Hello, while reproducing your code I found something puzzling: neither the original adjacency matrix nor the matrix reconstructed by SVD is normalized in the code, yet in the paper the convolution over the original adjacency matrix is normalized. Why is normalization skipped in the implementation? Moreover, if message passing is performed directly without normalization, could problems arise? And why does the paper perform message passing over the SVD-reconstructed matrix without normalizing it? One more small question: the SVD-reconstructed matrix contains negative values, but the code does not handle them. Do the negative values affect the results?
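For reference, the symmetric degree normalization D_u^{-1/2} A D_i^{-1/2} described in the paper can be sketched as follows (a tiny illustrative matrix, not the repo's code):

```python
import numpy as np

# Users x items interaction matrix (toy example)
A = np.array([[1., 1., 0.],
              [0., 1., 1.]])
d_u = A.sum(axis=1)  # user degrees
d_i = A.sum(axis=0)  # item degrees

# Each edge (u, i) is scaled by 1 / sqrt(deg(u) * deg(i))
A_norm = A / np.sqrt(np.outer(d_u, d_i))
```

Whether the implementation must apply the same scaling to the SVD-reconstructed matrix is exactly the question raised above; the sketch only shows what the paper's normalization computes.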