Comments (20)
I'm also on Ubuntu 16.04 LTS; I can give it a try if you wish.
from stacknet.
@arisbw, you don't need to remove it. Just change the seed of the Softmax to 10 and it works fine:
softmaxnnclassifier usescale:True seed:10 Type:SGD maxim_Iteration:35 C:0.0005 learn_rate:0.001 smooth:0.0001 h1:50 h2:40 connection_nonlinearity:Relu init_values:0.05 verbose:false
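For context, here is a minimal sketch of where that line sits, assuming StackNet's documented params-file layout (one model per line, an empty line separating stacking levels). The RandomForestClassifier lines and their parameter values are placeholders for illustration, not taken from this thread:

```
RandomForestClassifier estimators:100 threads:4 max_depth:6 verbose:false
softmaxnnclassifier usescale:True seed:10 Type:SGD maxim_Iteration:35 C:0.0005 learn_rate:0.001 smooth:0.0001 h1:50 h2:40 connection_nonlinearity:Relu init_values:0.05 verbose:false

RandomForestClassifier estimators:100 threads:4 max_depth:6 verbose:false
```

The model line after the empty line would form the next (meta-learner) level.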
Honestly, I don't have a single clue why...
These are the results of all models:
Average of all folds model 0 : 0.7867379723430724
Average of all folds model 1 : 0.7929649557885149
Average of all folds model 2 : 0.7866359111370649
Average of all folds model 3 : 0.7764824969782087
Average of all folds model 4 : 0.7805320681382869
Average of all folds model 5 : 0.7754102012289547
Average of all folds model 6 : 0.7642405924462954
Average of all folds model 7 : 0.7817682598159342
I have seen that before on Windows... Strangely, if you just press Enter in the console it continues. Not sure why this happens... Can you try that and let me know?
Already tried it, but nothing happened.
Strangely enough, this also occurred when I ran the code on Ubuntu 16.04.
That is indeed strange. I have not seen that before... Does it always hang at the same place? Could you send me the file you run this on and the parameters' file (as well as the command you run), and tell me where I should expect the pause, so I can try to replicate it?
Thanks, guys. Here are the files. You should expect a sudden drop at the 5th fold of the first layer (4th model).
OK, got the files. Just started StackNet on Win 8.1 and will let you know.
If you happen to produce an output file, could you please send it back to me? Thanks.
Yeah, sure ;-)
By the way, looking at the param file I saw that RandomForest had 5 threads when you have 4 logical cores. Did you try reducing that number?
I'm just wondering whether StackNet is stuck waiting for that thread to complete.
This may sound completely stupid, but you never know...
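Checking the logical-core count before setting per-model thread parameters is easy to do programmatically. A minimal sketch (the class name is mine, not from StackNet) using Java's standard `Runtime` API, which is what a JVM tool like StackNet would see on the machine:

```java
// Query the JVM for the number of logical cores available, so any
// threads:<n> model parameter can be capped at what the machine has.
public class CoreCount {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("logical cores: " + cores);
        // A model asking for more threads than this just adds contention.
    }
}
```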
Ah, I see. You seem to be right, but it should break at the first fold then, right? I'm now trying to run it again with modified thread params.
Issue reproduced. Performance drops at 30% on the 5th fold of the 4th model.
3 metrics are displayed, but not the 4th one.
Reducing the number of threads does not change the problem. However, after reducing the number of estimators or iterations of the models, I managed to get StackNet through the whole process...
OK. Could you please share those modified params?
I am also running it right now and will let you know the results. I have many cores available and did not encounter a problem at the 2nd fold (e.g., I am on the 4th now), which makes me think this is in general related to threading...
It is reproduced. I don't know why, but it seems to be at the predict()
of the Softmaxclassifier. The drop in CPU usage is not the real issue: it just reflects the fact that we are in scoring, where threading is not used. For some reason there must be a bug in the code causing an infinite loop somewhere.
It does not throw an error though...
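Since the hang throws no error, the symptom matches an unbounded convergence loop. A generic, hypothetical sketch of the defensive pattern that prevents this class of bug (the class, method, and the toy cos(x) iteration are illustrative, not StackNet's actual code):

```java
// Bound every convergence loop with a hard iteration cap so that a
// non-converging update returns a best-effort result instead of hanging.
public class LoopGuard {
    // Toy fixed-point iteration x -> cos(x), standing in for an SGD/scoring loop.
    static double converge(double x, int maxIter) {
        for (int i = 0; i < maxIter; i++) {
            double next = Math.cos(x);
            if (Math.abs(next - x) < 1e-12) {
                return next;        // converged within tolerance
            }
            x = next;
        }
        return x;                   // cap reached: give up rather than spin forever
    }

    public static void main(String[] args) {
        System.out.println(converge(1.0, 10_000));
    }
}
```

A loop like `while (diff > tol)` with no cap hangs silently whenever floating-point noise keeps `diff` just above `tol`, which is consistent with a predict() that consumes little CPU and never returns.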
I will try to find a workaround.
@arisbw, the reduced estimator counts were really low, like 3 or 4... so it won't help you.
As @kaz-Anova said above, you may want to remove the softmaxclassifier for now.
OK, I'll make sure to remove softmaxclassifier for now. Thank you @goldentom42 @kaz-Anova
...now it gets much weirder than I could imagine. Again, thanks @kaz-Anova!