Comments (13)
I ran your code on the Gowalla dataset, but Recall@20 is 0.2367899010064117, while your paper reports almost 0.26. Can you explain why?
from sae-nad.
Hi, I obtained the preprocessed Gowalla dataset from you a few weeks ago through my email [email protected]. I used the same dataset as in your paper but got a worse result.
Hi,
Could you make sure the parameter settings are the same as those shown in the paper?
One thing to notice is that we use [N, 500, 50, 500, N] network structure on the Gowalla dataset.
Hi.
Thanks very much. After I tried the [N, 500, 50, 500, N] network structure, Recall@20 is 0.26. I am quite confused why just changing a dimension yields such a large performance boost; without the change, the performance is below even the strongest baseline. Do you have any insight?
With a higher dimension, the model has more parameters to learn. More parameters mean the model has a larger capacity to capture more complex interactions between users and items. What the hidden space actually captures is still an open topic.
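The capacity argument can be made concrete by counting weights. Below is a minimal sketch (plain Python, with a hypothetical item count N = 10000 for illustration; the real N is the number of POIs in the dataset) comparing the parameter counts of the fully connected stacks [N, 200, 50, 200, N] and [N, 500, 50, 500, N]:

```python
def mlp_param_count(dims):
    """Number of weights plus biases in a stack of fully connected layers.

    Each layer maps dims[i] -> dims[i+1] with a weight matrix and a bias.
    """
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

N = 10000  # hypothetical number of POIs; the actual value depends on the dataset
small = mlp_param_count([N, 200, 50, 200, N])  # 4,030,450 parameters
large = mlp_param_count([N, 500, 50, 500, N])  # 10,061,050 parameters
print(small, large, large / small)
```

With these numbers the wider structure has roughly 2.5x the parameters, almost all of them in the two layers touching the N-dimensional input and output, which may help explain the sensitivity to this setting.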
Hi, there is still a problem on the Yelp dataset. The performance is:
P@5/10/15/20: [0.031003334736298824, 0.02644154498656703, 0.023792102394753805, 0.02171139961796174]
R@5/10/15/20: [0.030995961369661526, 0.052313087184899534, 0.06992631652546047, 0.0844492726939037]
M@5/10/15/20: [0.020529951471852593, 0.021236947021091836, 0.022450209082036383, 0.023362063093665134]
which is much lower than what you report in your paper.
Hi,
Could you specify the experiment configuration?
These are the results of 60 epochs I just ran. The values represent the metrics in the following order:
P@5 P@10 P@15 P@20
R@5 R@10 R@15 R@20
M@5 M@10 M@15 M@20
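For reference, the P and R rows follow the standard top-k definitions; here is a minimal sketch of precision@k and recall@k for a single user (the repo's actual evaluation code may differ, and the MAP row is omitted):

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are in the held-out set."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

def recall_at_k(ranked, relevant, k):
    """Fraction of the held-out items recovered in the top-k recommendations."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

ranked = [3, 7, 1, 9, 4]   # hypothetical top-5 POIs predicted for one user
relevant = {7, 9, 2}       # the user's held-out check-ins
print(precision_at_k(ranked, relevant, 5))  # 2 hits / 5 -> 0.4
print(recall_at_k(ranked, relevant, 5))     # 2 hits / 3 -> 0.666...
```

The reported metrics would then be these per-user values averaged over all test users.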
I did not change the settings you released in this repo.
from argparse import ArgumentParser

parser = ArgumentParser(description="SAE-NAD")
parser.add_argument('-e', '--epoch', type=int, default=60, help='number of epochs for GAT')
parser.add_argument('-b', '--batch_size', type=int, default=256, help='batch size for training')
parser.add_argument('--alpha', type=float, default=2.0, help='the parameter of the weighting function')
parser.add_argument('--epsilon', type=float, default=1e-5, help='the parameter of the weighting function')
parser.add_argument('-lr', '--learning_rate', type=float, default=1e-3, help='learning rate')
parser.add_argument('-wd', '--weight_decay', type=float, default=1e-3, help='weight decay')
parser.add_argument('-att', '--num_attention', type=int, default=20, help='the number of dimension of attention')
parser.add_argument('--inner_layers', nargs='+', type=int, default=[200, 50, 200], help='the number of latent factors')
parser.add_argument('-dr', '--dropout_rate', type=float, default=0.5, help='the dropout probability')
parser.add_argument('-seed', type=int, default=0, help='random state to split the data')
args = parser.parse_args()
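With this parser, the deeper structure discussed above can be selected on the command line as `--inner_layers 500 50 500` (the input/output dimension N is implied by the dataset). A quick self-contained check, reconstructing only the relevant argument:

```python
from argparse import ArgumentParser

# Minimal reconstruction of the relevant part of the repo's parser.
parser = ArgumentParser(description="SAE-NAD")
parser.add_argument('--inner_layers', nargs='+', type=int, default=[200, 50, 200],
                    help='the number of latent factors')

# Equivalent to running: python train.py --inner_layers 500 50 500
args = parser.parse_args(['--inner_layers', '500', '50', '500'])
print(args.inner_layers)  # [500, 50, 500]
```

Note that the default shown here, [200, 50, 200], corresponds to the Yelp structure, not the [N, 500, 50, 500, N] structure used for Gowalla.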
Is the Yelp dataset exactly the same as the one from http://spatialkeyword.sce.ntu.edu.sg/eval-vldb17/?
Hi, I found the reason: I was using the pre-split data provided at http://spatialkeyword.sce.ntu.edu.sg/eval-vldb17/. After using the splitting method in your code, I get:
P@5/10/15/20: [0.047664065788200975, 0.04022080486936263, 0.035410798502072835, 0.032151066791849696]
R@5/10/15/20: [0.04911375603604978, 0.08122508361942432, 0.10555880260087365, 0.12592199830428658]
M@5/10/15/20: [0.03377287460024534, 0.034733410435961745, 0.03642260669407191, 0.037814141501288104]
which is still below your paper's results. Why?
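The gap here shows how sensitive the metrics are to the train/test split. As an illustration only (the repo's actual splitting code may differ), a random per-user hold-out split controlled by a seed, as the `-seed` option above suggests, could look like this:

```python
import random

def split_user_checkins(user_items, test_ratio=0.2, seed=0):
    """Randomly hold out a fraction of each user's items for testing.

    A hypothetical sketch: shuffles each user's item list with a seeded RNG
    and keeps at least one held-out item per user.
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for user, items in user_items.items():
        items = list(items)
        rng.shuffle(items)
        n_test = max(1, int(len(items) * test_ratio))
        test[user] = items[:n_test]
        train[user] = items[n_test:]
    return train, test

user_items = {0: [1, 2, 3, 4, 5], 1: [10, 11, 12]}
train, test = split_user_checkins(user_items)
```

Two different seeds (or an externally pre-split dataset, as here) produce different held-out sets, so exact reproduction generally requires the same splitting code and seed.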
The network structure on Yelp is [N, 200, 50, 200, N]; I don't know whether you set it.
I ran it one more time just now. It still achieves results similar to what is shown in the paper:
It also achieves results similar to the experiment I ran yesterday: #4 (comment)
The performance is the same as that reported in the paper.