ycjuan / libffm
A Library for Field-aware Factorization Machines
License: BSD 3-Clause "New" or "Revised" License
Hello!
I have a very imbalanced dataset: only 0.5% of the examples are clicks, so I am getting very poor results.
Can I increase the weight of the clicks to make them more important, or is oversampling them the only way?
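As far as I can tell libffm has no per-example weight option, so the usual recipe (common CTR practice, not a libffm feature) is to downsample the negatives for training and then recalibrate the predicted probabilities. A minimal sketch of the calibration step:

```python
def calibrate(p_sampled, neg_keep_rate):
    """Map a probability learned on negative-downsampled data back to the
    original distribution. neg_keep_rate is the fraction of negatives kept
    for training (e.g. 0.1 keeps 1 in 10 non-clicks)."""
    return p_sampled / (p_sampled + (1.0 - p_sampled) / neg_keep_rate)

# A model trained on 10x-downsampled negatives that predicts 0.5
# corresponds to roughly a 9% click probability on the full data:
p = calibrate(0.5, 0.1)   # 0.5 / (0.5 + 5.0) = 1/11
```

With neg_keep_rate = 1.0 (no downsampling) the probability passes through unchanged, which is a quick sanity check for the formula.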
Hello,
Thank you for your excellent method, software and description.
I ran into a problem when trying to use libffm in my ML task: I get a segmentation fault when using the cross-validation option. Here are my setup and data:
Ubuntu 13.10
~/libffm$ ./ffm-train -k 5 -t 30 -r 0.03 -v 2 data.txt
fold logloss
0 0.1080
Segmentation fault (core dumped)
The data.txt can be downloaded here https://drive.google.com/open?id=0B9HyQ7ZccW4-VFE0VWtxUHF2R3c
The problem arises only with big data files like this one. If you cut it down to 100K lines (the full file is around 250K lines), everything works fine.
Regards,
Sergey
In ffm.cpp, the line ffm_node* end = &prob.X[prob.P[i + 1]]; can access the array out of bounds.
How can tags associated with an item be used as a field in FFM? Normally only one feature of a given field is turned on, but with tags several features of that field have the value 1. So how can tags be used as a field in FFM?
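For what it's worth, the file format itself does not forbid several active features in one field; a common convention (an assumption from practice, not documented libffm behavior) is to give each of the n active tags the value 1/n, so every multi-valued field contributes the same total mass as a one-hot field:

```python
def encode_tags(field, tag_ids):
    """Emit 'field:feature:value' tokens for a multi-valued (tag) field,
    with values 1/n so the field's total value stays 1."""
    v = 1.0 / len(tag_ids)
    return [f"{field}:{t}:{v:g}" for t in tag_ids]

# Two tags in field 3 each get value 0.5:
tokens = encode_tags(3, [5, 6])   # ["3:5:0.5", "3:6:0.5"]
```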
Hi,
It seems like bigdata.tr.txt has feature 2739 with multiple fields (5 and 13) in lines 21, 36, and 88. Shouldn't fields be unique per feature?
Regarding train in ffm.cpp (lines 228-375), I have a question about thread safety.
Below are lines 288-312:
#if defined USEOMP
#pragma omp parallel for schedule(static) reduction(+: tr_loss)
#endif
for(ffm_int ii = 0; ii < (ffm_int)order.size(); ii++)
{
ffm_int i = order[ii];
ffm_float y = tr->Y[i];
ffm_node *begin = &tr->X[tr->P[i]];
ffm_node *end = &tr->X[tr->P[i+1]];
ffm_float r = R_tr[i];
ffm_float t = wTx(begin, end, r, *model);
ffm_float expnyt = exp(-y*t);
tr_loss += log(1+expnyt);
ffm_float kappa = -y*expnyt/(1+expnyt);
wTx(begin, end, r, *model, kappa, param.eta, param.lambda, true);
}
I'm new to OpenMP parallel operations. I'm curious whether thread safety is ensured for the wTx call at the very bottom: wTx(begin, end, r, *model, kappa, param.eta, param.lambda, true);
Since wTx with do_update = true updates the weights, it seems it could interfere with other threads updating the same weights.
Waiting for a reply.
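The parallel loop does let threads update the shared weights without locks; this is the lock-free "Hogwild!"-style SGD pattern, which tolerates occasional lost updates because each sparse example touches only a few weights, and in practice the loss still decreases. A toy illustration with plain logistic regression (made-up data; not libffm's actual code):

```python
import math
import random
import threading

random.seed(0)
D, N = 10, 200
true_w = [random.gauss(0, 1) for _ in range(D)]
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
y = [1.0 if sum(a * b for a, b in zip(x, true_w)) > 0 else -1.0 for x in X]
w = [0.0] * D                       # shared weights, updated with no lock

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def logloss():
    """Mean logistic loss, computed in a numerically stable form."""
    total = 0.0
    for xi, yi in zip(X, y):
        m = yi * dot(xi, w)
        total += max(0.0, -m) + math.log1p(math.exp(-abs(m)))
    return total / N

def worker(rows, lr=0.1, epochs=30):
    for _ in range(epochs):
        for i in rows:
            z = y[i] * dot(X[i], w)
            # same gradient factor as libffm's kappa = -y*exp(-y*t)/(1+exp(-y*t))
            kappa = -y[i] / (1.0 + math.exp(min(z, 30.0)))
            for d in range(D):
                w[d] -= lr * kappa * X[i][d]   # racy read-modify-write

before = logloss()
threads = [threading.Thread(target=worker, args=(range(s, N, 4),)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
after = logloss()   # decreases despite the unsynchronized writes
```

So the updates are indeed racy, and the code relies on that being benign for sparse problems rather than on any synchronization.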
I read the source code, but I cannot figure out why the size of model.w is model.n * model.m * k_aligned * 2.
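My reading of the source (an interpretation, not authoritative): each (feature j, field f) pair owns a slot of 2 * k_aligned floats, namely k_aligned latent weights followed by k_aligned AdaGrad squared-gradient accumulators, which is where the factor of 2 comes from. A sketch of the offset arithmetic:

```python
def slot_offset(j, f, m, k_aligned):
    """Start of the (feature j, field f) slot inside model.w.
    Each slot holds k_aligned weights plus k_aligned AdaGrad sums."""
    align0 = 2 * k_aligned
    return (j * m + f) * align0

def total_size(n, m, k_aligned):
    """Matches the model.w size in question: n * m * k_aligned * 2."""
    return n * m * k_aligned * 2
```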
Hi, what is the optimization method used in this model?
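For reference, the solver is stochastic gradient descent with per-coordinate AdaGrad learning rates plus L2 regularization. A single-coordinate sketch of the update (the numbers in the usage are hypothetical):

```python
import math

def adagrad_step(w, G, g, eta, lam):
    """One AdaGrad update on a single coordinate.
    G accumulates squared gradients; the effective step is eta / sqrt(G)."""
    g = g + lam * w            # add the L2 term to the loss gradient
    G = G + g * g
    w = w - eta / math.sqrt(G) * g
    return w, G

# First step from w=0 with gradient 1 and learning rate 0.1 moves w to -0.1,
# and later steps on the same coordinate automatically shrink:
w1, G1 = adagrad_step(0.0, 0.0, 1.0, eta=0.1, lam=0.0)
w2, G2 = adagrad_step(w1, G1, 1.0, eta=0.1, lam=0.0)
```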
When I was training the model, the first few iterations worked fine, but subsequent iterations returned "-nan" for the log losses of the training and validation data sets.
Any ideas what went wrong?
Sample of the data used for training:
1 0:400492:1 1:977206:1 2:861366:1 3:223345:1 4:4:0.0 5:5:9567.0 6:6:31835.0 7:7:0.300471105528 8:8:0.0 9:9:0.0 10:35822:1 11:486386:1 12:528723:1 13:662860:1 14:990282:1 15:406964:1 16:698517:1 17:585048:1 18:18:0.38219606197 19:19:0.125217833586 20:20:0.438929013305 21:21:0.216453092359 22:923220:1 23:63477:1 24:216531:1 25:461117:1
0 0:400492:1 1:203267:1 2:861366:1 3:223345:1 4:4:0.0 5:5:1642.0 6:6:9441.0 7:7:0.173830192674 8:8:0.0 9:9:0.0644 10:709579:1 11:486386:1 12:528723:1 13:662860:1 14:778015:1 15:581435:1 16:698517:1 17:181797:1 18:18:0.581693006318 19:19:0.097000178732 20:20:0.367630745198 21:21:0.182764132116 22:923220:1 23:63477:1 24:216531:1 25:461117:1
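One thing worth checking with data like the above: raw values such as 31835.0 can overflow exp() in the loss unless the instance is scaled down. libffm's instance-wise normalization (enabled by default, disabled with --no-norm) scales each row by r = 1/sum(v^2), the same constant computed in ffm_predict. A sketch of that constant:

```python
def norm_constant(values):
    """r = 1 / sum(v^2) over one instance, as in ffm_predict
    when model.normalization is set."""
    return 1.0 / sum(v * v for v in values)

# With a large raw value present, r shrinks the whole row dramatically:
r = norm_constant([1.0, 9567.0, 31835.0, 0.3])
```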
It would be useful to mention in the README that memory allocation depends on k_aligned, not just k. So changing k from 4 to 5 actually doubles memory requirements.
Is there any particular reason why you align k to the power of 2?
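For context, k is padded so the latent vector fills whole SIMD registers (4 floats per SSE register), avoiding tail handling in the inner loops. One plausible sketch of the rounding (the exact scheme in the source may differ):

```python
def get_k_aligned(k, simd_width=4):
    """Round k up to a multiple of the SIMD width (4 floats for SSE)."""
    return -(-k // simd_width) * simd_width
```

Combined with the factor of 2 for the AdaGrad accumulators, k=4 needs 2*4=8 floats per (feature, field) pair while k=5 needs 2*8=16, which matches the doubling of memory mentioned above.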
Hi, just wanted to share that LIBFFM is now available in Rust. Thanks for the neat project!
Hi,
libffm is a really useful model. I want to use it as a regression model; may I know how to do that? Thanks.
Consider a case where several of the binary features in a field can be true. For example, one might want to encode the history of recent advertisers that were shown to a user.
In regards to this, the paper says:
Note that according to the number of possible values in a
categorical feature, the same number of binary features are
generated and every time only one of them has the value 1.
I'm using this python wrapper, and it trains on such a feature configuration. For example, the following (field, feature, value) sample will run: [(1, 2, 1), (2, 3, 1), (3, 5, 1), (3, 6, 1), (3, 7, 1)]. But this seems to go against the statement from the paper.
So is this code just working by coincidence, or is FFM actually capable of learning from this sort of "history" encoding?
I'm confused about the last line of "ffm_predict.cpp":
ffm_float ffm_predict(ffm_node *begin, ffm_node *end, ffm_model &model) {
ffm_float r = 1;
if(model.normalization) {
r = 0;
for(ffm_node *N = begin; N != end; N++)
r += N->v*N->v;
r = 1/r;
}
ffm_float t = wTx(begin, end, r, model);
return 1/(1+exp(-t));
}
After reading the paper "Field-aware Factorization Machines for CTR Prediction", I expected the predicted value to be the variable t, but this function returns 1/(1+exp(-t)). Could you clarify?
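On this sigmoid question: t is the raw score, and because the model is trained with labels y in {-1, +1} under the logistic loss log(1 + exp(-y*t)), the value 1/(1 + exp(-t)) is exactly the model's estimate of P(y = +1 | x). A small check of that identity:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def logistic_loss(t, y):
    """y in {-1, +1}; equals -log of the probability the model assigns to y."""
    return math.log1p(math.exp(-y * t))
```

Minimizing the training loss therefore maximizes the likelihood of the sigmoid probabilities, which is why ffm_predict returns the sigmoid rather than t itself.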
Hi @guestwalk !
Thanks a lot for the awesome library. It's certainly made my life a lot easier.
Since we get a segfault for files that are too large, is there a way to learn from chunks of data? In other words, can an existing model be updated with new data?
Thanks again,
Hi, I am trying to use libffm on Ubuntu 16.04. I have C++11 and OpenMP installed via apt-get; I downloaded libffm and ran make. From the libffm directory I ran the following and got:
josh:~/libffm-master$ ffm-train bigdata.tr.txt model
ffm-train: command not found
When I list the directory, you can see the binary is there:
josh@josh-HP-ZBook-17-G2:~/libffm-master$ dir
bigdata.te.txt ffm.cpp ffm-predict ffm-train.cpp README
bigdata.tr.txt ffm.h ffm-predict.cpp Makefile
COPYRIGHT ffm.o ffm-train Makefile.win
Any help would be great. Thanks.
g++ -Wall -O3 -std=c++0x -march=native -fopenmp -DUSESSE -DUSEOMP -c -o ffm.o ffm.cpp
/tmp/cc2xJsit.s: Assembler messages:
/tmp/cc2xJsit.s:3277: Error: no such instruction: `vinserti128 $0x1,%xmm0,%ymm1,%ymm0'
/tmp/cc2xJsit.s:3286: Error: suffix or operands invalid for `vpaddd'
/tmp/cc2xJsit.s:3598: Error: no such instruction: `vinserti128 $0x1,%xmm0,%ymm1,%ymm0'
/tmp/cc2xJsit.s:3609: Error: suffix or operands invalid for `vpaddd'
/tmp/cc2xJsit.s:3949: Error: no such instruction: `vinserti128 $0x1,%xmm0,%ymm1,%ymm0'
/tmp/cc2xJsit.s:3955: Error: suffix or operands invalid for `vpaddd'
/tmp/cc2xJsit.s:4273: Error: no such instruction: `vinserti128 $0x1,%xmm0,%ymm1,%ymm0'
/tmp/cc2xJsit.s:4284: Error: suffix or operands invalid for `vpaddd'
For learning FFM, I would like to find a TensorFlow version of it.
Using the python wrapper (libffm-python):
For some reason, when the input dataset has too many fields (about 29 or more), the predictions are all NaN, at least in the first iterations.
Edit: a few samples of data, even a one-row dataframe, show the same issue, so it appears to be related to the fields.
Edit 2: tested; it doesn't converge after N iterations either.
Hello,
I'm trying to use libffm-linear library. Here are my outputs:
libffm-linear>windows\ffm-train -s 2 -l 0 -k 10 -t 50 -r 0.01 --auto-stop -p test_data.txt train_data.txt model
iter tr_logloss va_logloss
1 0.25510 0.25017
2 0.25129 0.24927
3 0.25070 0.24882
4 0.25041 0.24843
5 0.25020 0.24821
6 0.25005 0.24808
7 0.24990 0.24801
8 0.24977 0.24800
9 0.24968 0.24820
Auto-stop. Use model at 8th iteration.
libffm-linear>windows\ffm-predict test_data.txt model output_file
logloss = 0.34800
Why does the prediction logloss differ from the validation logloss on the same file?
Hi, could you please help me transform CSV data into the FFM data format? Thanks.
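A minimal sketch of one common recipe (the column layout and label column are hypothetical; real pipelines also handle hashing, rare values, and numeric columns): each CSV column becomes a field, and each distinct (column, value) pair is enumerated as a feature id.

```python
import csv
import io

def csv_to_ffm(csv_text, label_col):
    """Convert CSV text into FFM 'label field:feature:value' lines.
    Categorical values are enumerated on the fly, one field per column."""
    feat_ids = {}
    def fid(field, value):
        key = (field, value)
        if key not in feat_ids:
            feat_ids[key] = len(feat_ids)
        return feat_ids[key]
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        label = row.pop(label_col)
        toks = [label]
        for field, col in enumerate(sorted(row)):
            toks.append(f"{field}:{fid(field, row[col])}:1")
        lines.append(" ".join(toks))
    return lines

out = csv_to_ffm("click,site,ad\n1,a.com,x\n0,b.com,x\n", "click")
```

Note the feature-id mapping built on the training file must be saved and reused when converting test data, otherwise ids will not match.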
Unknown features (like new app_id or device_id that was not in training data) lead to random probabilities (too small or too high). Could you suggest a workaround for using LIBFFM in that case?
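One common workaround (standard feature-hashing practice, not a libffm feature) is to hash raw ids into a fixed range so unseen ids at serving time still map into trained buckets, optionally routing ids seen fewer than N times in training to a dedicated "unknown" bucket:

```python
import zlib

def hash_feature(field, raw_value, n_buckets=2 ** 20):
    """Deterministically map a (field, raw value) pair into a fixed
    feature-id range, so new ids never fall outside the trained weights."""
    return zlib.crc32(f"{field}={raw_value}".encode()) % n_buckets
```

Because the mapping is stateless, a brand-new app_id or device_id lands in some bucket whose weights were trained on other values, which behaves far more gracefully than an out-of-range id.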
Did you think about porting this to CUDA/CUBLAS?
I found that the loss decreases very quickly when all features are categorical. But when some numerical features are included in the model, the loss decreases very slowly, even after 150 iterations.
Could you tell me why, or give me some advice?
Are there any plans to incorporate the bias and linear terms in this refactored version? I know they're included in v114 on the website, but if I'm not mistaken they're still not on master.
Thanks !
Can two fields have the same feature id? In other words, is the feature id associated with the field?
It seems we cannot set the optimization objective. Can we use FFM for a regression problem?
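As far as I can tell the logistic loss is hard-coded, so regression would require patching the per-example gradient factor (kappa in ffm.cpp). Sketched in Python, the only change is swapping the factor; everything else in the update stays the same:

```python
import math

def kappa_logistic(y, t):
    """libffm's factor: d/dt log(1 + exp(-y*t)) with y in {-1, +1}."""
    return -y / (1.0 + math.exp(y * t))

def kappa_squared(y, t):
    """Hypothetical replacement for squared loss 0.5*(t - y)^2: gradient t - y."""
    return t - y
```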
The implementation has almost no comments, which makes it hard to read and learn from. C code is already harder to read than Python, and the lack of comments makes it even harder for learners.
All in all, the implementation is unfriendly. Please add the necessary comments; at the very least, the members of the structs should be documented.
Thank you on behalf of everyone.
Sorry for being ignorant, but can I put float-type values in the input data?
Hello!
I'm about to finish a generalised wrapper for the "predict" and "ffm_load_model" functions in Java. It would be great if you could review my code and add it to your library if you deem it fit.
Thank You
I used this package a few months ago, and I remember I was able to do $head model and see the model weights.
It seems that the model is now encoded somehow (binarized?). Am I correct? Is there a way to see the model as before?
How do I output AUC-ROC to the console?
Thanks for your amazing libffm.
When using ffm_predict, I am not sure how to fill in the FFM data format when the test data set has no labels.
Thanks again.
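From my reading of the prediction code, the leading label on each line is only used to report logloss, so for unlabeled test data a placeholder label works and the reported logloss is simply ignored (an assumption worth verifying against your version). A sketch:

```python
def add_dummy_labels(feature_lines, placeholder="0"):
    """Prepend a placeholder label to unlabeled FFM feature lines;
    the predictions themselves do not depend on the label column."""
    return [f"{placeholder} {line}" for line in feature_lines]

labeled = add_dummy_labels(["0:1:1 1:2:1", "0:3:1 1:4:1"])
```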