
hkuds / mmssl


[WWW'2023] "MMSSL: Multi-Modal Self-Supervised Learning for Recommendation"

Home Page: https://arxiv.org/abs/2302.10632

Python 100.00%
self-supervised-learning multi-modal-recommendation graph-neural-networks multimedia-recommendation

mmssl's Introduction

Hi there 👋

✨Welcome to the Data Intelligence Lab @ HKU!✨

🚀 Our lab is passionately dedicated to exploring the forefront of data science & AI 👨‍💻

       

mmssl's People

Contributors

hkuds, weiwei1206


mmssl's Issues

Question about the missing acoustic modality

Hi Weiwei, I noticed that the paper reports MMSSL's performance on the TikTok dataset using the acoustic modality. However, the code only contains the text and image modalities. Has the code for the TikTok dataset not been released yet?

The use of the validation set

Hi, I notice that the datasets are split into train, validation, and test sets, but the validation set is not used. The model that achieves the best performance on the test set is selected as the best model. I think we should select the best-performing model on the validation set and report its performance on the test set. What's your opinion?
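(For concreteness, a minimal runnable sketch of the selection protocol proposed above; train_one_epoch and evaluate are hypothetical stand-ins, not the repo's actual code.)

import copy
import random

# Hypothetical stand-ins for the real training and evaluation routines,
# only here to make the selection protocol concrete and runnable.
def train_one_epoch(model):
    model["quality"] += random.uniform(-0.01, 0.02)           # pretend training nudges the model

def evaluate(model, user_split):
    # user_split is ignored in this stub; pretend it returns Recall@20 on that split
    return model["quality"] + random.uniform(-0.005, 0.005)

model = {"quality": 0.05}
best_val, best_state, test_at_best = -1.0, None, None

for epoch in range(50):
    train_one_epoch(model)
    val_metric = evaluate(model, "users_to_val")              # select on the validation split only
    if val_metric > best_val:
        best_val = val_metric
        best_state = copy.deepcopy(model)                     # checkpoint the selected model
        test_at_best = evaluate(model, "users_to_test")       # the number to report

print(f"selected on val (Recall@20 ≈ {best_val:.4f}); report test Recall@20 ≈ {test_at_best:.4f}")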

The TikTok dataset

I appreciate your great work on multi-modal recommendation! I am working on the multimodal encoding, and I want to see whether higher performance can be achieved with other feature extractors. I am wondering whether it is possible to get access to the raw data? Thank you!

About dataset statistics

Hi!

Thank you for your novel work and processed datasets.
I downloaded TikTok and Allrecipes from the given links and found that their dataset statistics are as follows:

tiktok: #Users: 9308; #Items: 6710;
allrecipes: #Users: 19805; #Items: 10068.
They are different from the reported statistics. Have the datasets been changed?

Thanks!

Hyperparameter settings

Hello, I recently saw your work and was very interested. However, when I reproduce the paper, my results are always a little worse than those reported in the original paper. Do you have more sensitive hyperparameter settings, or anything else I should tune? I hope you can reply to me, thank you!

The file path of datasets

Hi, Weiwei:
I debugged the code and found that I can't reproduce the result on TikTok because I am confused by the file paths of the processed datasets on Google Drive.
The file paths for Allrecipes are easy to find: the JSON files and MAT files are in the top-level directory, and there are no other files or folders, so I reproduced those results successfully.
But the other three datasets are a little confusing, with many files and folders. Could you show the paths of the JSON files and MAT files as you do for Allrecipes?

Allrecipes is easy to find (screenshot omitted), but the others are difficult for me to locate (screenshot omitted).

Result reproduction settings

Hello, thanks for sharing the code. Could you report the specific settings for each dataset that reproduce the best results? Thanks.

A code error

Hi Weiwei, there may be a small error in the code. In main.py (screenshot omitted):
at line 453, the first parameter of the test function should be users_to_val.
I have now reproduced the results successfully. Thanks for your careful and patient answer!

Thanks for your outstanding work!

The study is great, and thank you very much for providing the dataset. I believe it is an important contribution to recommendation research!

Some questions about your `Multi-Modal High-Order Connectivity` module.

An excellent paper, but I was confused by your Multi-Modal High-Order Connectivity module (screenshot of the formula omitted):
In this formula, if I have deduced correctly, $\hat{E}_u^l$ comes from the output of the modality-wise dependency modeling. Its dimension is $m \times d$, supposing that $m$ is the number of users. The matrix $A \in \mathbb{R}^{n \times m}$ is the user-item interaction matrix, where $n$ denotes the number of items. From this, it follows that the dimension of the output representation $\hat{E}_u^{l+1}$ is $n \times d$, which corresponds to the item representations rather than the user representations.
Could you help me resolve this confusion? Thanks.
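(For reference, the dimension bookkeeping the questioner describes, assuming the propagation takes the form $\hat{E}_u^{l+1} = A\,\hat{E}_u^{l}$; this form is an assumption, since the exact formula is only in the omitted screenshot.)

$$
\hat{E}_u^{l} \in \mathbb{R}^{m \times d}, \qquad A \in \mathbb{R}^{n \times m} \;\Longrightarrow\; A\,\hat{E}_u^{l} \in \mathbb{R}^{n \times d},
$$

which indeed has the shape of item representations rather than user representations, which is the source of the confusion.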

About the baselines for MMSSL.

Thank you very much for your team's excellent work.

I have some confusion about the baselines in this paper. Were the SGL and LightGCN baselines covered in the paper implemented using https://github.com/HKUDS/SSLRec?

When I ran the TikTok dataset with SGL in SSLRec, the final result was surprisingly good and surpassed most of the baselines. Key parameters: {'keep_rate': 0.5, 'layer_num': 3, 'reg_weight': 1e-05, 'cl_weight': 1.0, 'temperature': 0.5, 'embedding_size': 32, 'augmentation': 'edge_drop'}
Test set: recall@10: 0.0577, recall@20: 0.0856, ndcg@10: 0.0321, ndcg@20: 0.0391

Very much looking forward to your reply, sincerely.

Raw Data

Thanks for the great work!

I noticed that you have provided the processed features. I am wondering whether the raw data (such as the images, videos, and text) will be made publicly available? Thanks!

Processed data about Allrecipes

Thanks for your excellent work! Could you please share the processed data for the Allrecipes dataset? I cannot find it via the shared Google Drive link.

Raw dataset processing details

Could you give details on how you preprocess the raw data into the visual/textual/acoustic (V/T/A) features stored in the *.npy files? Only the textual features are described in your paper.
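(Not the authors' pipeline: a hedged sketch of one plausible way to produce such .npy feature files, using sentence-transformers for text and CLIP for images, which would match the 768-/512-dimensional arrays discussed in the Netflix issue below. The inputs are hypothetical.)

import numpy as np
import torch
from PIL import Image
from sentence_transformers import SentenceTransformer
from transformers import CLIPModel, CLIPProcessor

# Hypothetical inputs: one description and one image per item, in item order.
texts = ["item 1 description", "item 2 description"]
image_paths = ["item_1.jpg", "item_2.jpg"]

# Textual features: a 768-dim sentence embedding per item.
text_encoder = SentenceTransformer("all-mpnet-base-v2")
text_feat = text_encoder.encode(texts)                       # shape (n_items, 768)

# Visual features: 512-dim CLIP image embeddings per item.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
inputs = processor(images=[Image.open(p) for p in image_paths], return_tensors="pt")
with torch.no_grad():
    image_feat = clip.get_image_features(**inputs).numpy()   # shape (n_items, 512)

np.save("text_feat.npy", text_feat)
np.save("image_feat.npy", image_feat)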

Embedding question about text_feat.npy and image_feat.npy

Thanks for your wonderful contribution of the embedded Netflix item data.

In Python, when I load your Netflix data, text_feat.npy and image_feat.npy each contain a numpy ndarray. To be more exact:

import numpy as np

# Load the provided item feature matrices.
text_feat = np.load('text_feat.npy')
image_feat = np.load('image_feat.npy')

print(text_feat.shape)   # -> (17366, 768)
print(image_feat.shape)  # -> (17366, 512)

May I ask how the rows of text_feat and image_feat are ordered? Is it
item 1, [embedding 1];
item 2, [embedding 2]; # i.e., by item ID
...
or
item 9733, [embedding 9733];
item 14147, [embedding 14147]; # i.e., following the sequence in item_attribute.csv
...

Thanks! I am carrying out embedding-based i2i similarity recommendation.
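(A minimal sketch of such embedding-based i2i retrieval, assuming, as the question above leaves open, that row r of the feature matrices corresponds to the r-th item in item_attribute.csv; that ordering assumption is unverified.)

import numpy as np

# Assumption (unverified): row r of text_feat.npy is the r-th item in item_attribute.csv.
text_feat = np.load("text_feat.npy").astype(np.float32)     # (17366, 768)

# L2-normalize so dot products become cosine similarities.
feat = text_feat / np.clip(np.linalg.norm(text_feat, axis=1, keepdims=True), 1e-12, None)

def top_k_similar(row_idx: int, k: int = 10) -> np.ndarray:
    """Row indices of the k items most similar to item `row_idx` (cosine)."""
    sims = feat @ feat[row_idx]          # similarity of every item to the query item
    sims[row_idx] = -np.inf              # exclude the query item itself
    top = np.argpartition(-sims, k)[:k]  # unsorted top-k candidates
    return top[np.argsort(-sims[top])]   # sorted by descending similarity

print(top_k_similar(0, k=10))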

Raw dataset about Tiktok

Thanks for sharing the code for your great work.
I've observed that you have provided a pre-processed TikTok dataset, which seems different from the one used in DualGNN.

Recall@20, MMSSL: 0.0921 < Recall@10 DualGNN: 0.1318

As this dataset is used inconsistently across papers, could you also provide the raw TikTok dataset and explain how you pre-process the raw data into multimodal features? Your efforts are greatly appreciated. Thanks.

HELP!! Experimental data issue

Thank you very much for your contribution to multimodal recommendation systems!
When I try to reproduce your paper, the experimental results I obtain are always worse than the numbers reported in the paper. Have you made any further adjustments to the hyperparameters of your experiments? I would be grateful if you could explain your method in detail.
