i guess i agree. how do we parse yaml in R?
from aslib-spec.
https://cran.r-project.org/web/packages/yaml/index.html
from aslib-spec.
ok sorry :)
from aslib-spec.
No worries :)
As far as I can see at the moment this would give us exactly the same data structure as we have at the moment (apart from the feature steps of course), so any changes should be minimal.
from aslib-spec.
Hi Lars,
I agree that the current format of the feature groups is an issue.
I also like the idea of "provide" and "requires".
However, please note first of all that your example is wrong.
Pre provides the following features: "reducedVars,nvars,nclausesOrig,nvarsOrig,nclauses,reducedClauses".
Furthermore, I don't like YAML so much.
We use it for one of our homepages and it is always a pain to edit the yaml files.
Aren't there any better alternatives?
Cheers,
Marius
from aslib-spec.
Ok, that the example is wrong would have been much clearer in the new format :)
I don't see how editing YAML is more painful than editing a non-standard format.
from aslib-spec.
In the end, I can live with YAML.
However, there is no way to specify this with ARFF, is there?
If possible, I would like to avoid using two different standard formats.
from aslib-spec.
Again, I don't see how using two different standard formats is worse than using a standard and a non-standard format. In principle I don't have a problem with using YAML for everything.
from aslib-spec.
How would one of the other files look like in YAML?
I read on Wikipedia that every JSON file is also a valid YAML (>=1.2) file.
I like JSON but I don't know whether this is really user-friendly.
from aslib-spec.
Hmm, I guess something like
- instance_id: bla
  repetition: 1
  feature_x: foo
I don't really see a problem with being user friendly -- you're not supposed to edit/write those files manually.
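A minimal sketch of what reading/writing such records could look like with a YAML library (assuming PyYAML on the Python side; the record values are just the placeholders from the example above):

```python
import yaml  # PyYAML, assumed available

# Illustrative records in the shape sketched above.
records = [
    {"instance_id": "bla", "repetition": 1, "feature_x": "foo"},
]

# Serialize: the library decides the concrete YAML layout for us.
text = yaml.safe_dump(records, default_flow_style=False)

# Parse it back: we get the same list of dicts without any custom parsing code.
parsed = yaml.safe_load(text)
assert parsed == records
```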
from aslib-spec.
Such a format would blow up our files by more than a factor of 2, I guess.
The description.txt is a file I always write manually.
from aslib-spec.
You can forget ARFF for such files immediately.
from aslib-spec.
Yes, everything would be much larger. But as I said, I'm not opposed to keeping everything but description.txt
in arff. We also have citation.bib
which is in yet another standard format.
from aslib-spec.
OK.
I also asked Matthias whether he likes this new format, and he agreed.
So, please go ahead and make the changes.
Cheers,
Marius
from aslib-spec.
Ok, what's your feeling on making the lists proper YAML lists as well? I.e. instead of comma-separated they would be
provides:
  - CG_mean
  - CG_coeff_variation
  - etc.
from aslib-spec.
I like the comma-separated version more since I can look up the feature step corresponding to a feature by looking one line up (and not n lines).
To have proper YAML (1.2) that is similar to what we have right now, we could use
[CG_mean, CG_coeff_variation,...]
However, we would then have to change the entire file,
so for example also algorithms_deterministic.
from aslib-spec.
Ok, but presumably you're not going to parse the YAML yourself but use a library? And yes, that would apply for everything -- if the data structure is serialized by a YAML library we may not even be able to control which type of list we get (and don't need to care).
So I guess my real question is whether you're planning to use a library to parse/write the YAML.
from aslib-spec.
parsing: of course.
but i would prefer it if people could still manually write (smaller) files without programming.
can we do that?
from aslib-spec.
I often have a look into the description.txt files to get a better feeling for the scenarios, e.g., which algorithms are used, how many features are used, and how the features are distributed over the feature groups.
I could write scripts for such things, but looking into the files is often faster.
So I would prefer that I can easily read the files.
from aslib-spec.
well, that argument i find slightly strange? why not use the EDA overview?
from aslib-spec.
Of course you can still read/write the files manually and that shouldn't even be much more difficult than it is now. But it would be much easier to parse/write programmatically because we can just use YAML libraries.
from aslib-spec.
i mean we invested lots of time to write scripts for exactly that purpose... web-based...
from aslib-spec.
Which, come to think of it, we should rerun to update the web pages at some point.
from aslib-spec.
Proposal: Use Travis for that. People open PRs for a new scenario, then Travis builds all the EDA stuff. This even checks the validity of the scenario files. Only then do we merge. The only thing we would still have to run manually might be the selector benchmarks.
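A rough sketch of what such a Travis setup could look like (everything here is a hypothetical placeholder -- the script names, paths, and Python version are not existing tooling):

```yaml
# .travis.yml -- hypothetical sketch, not the actual configuration
language: python
python:
  - "3.5"
install:
  - pip install -r requirements.txt
script:
  # Validate all scenario files touched by the PR ...
  - python run_checker.py scenarios/
  # ... and rebuild the EDA overview pages from them.
  - python build_eda.py scenarios/ --out gh-pages/
```

The merge itself would stay manual: the build going green is just the precondition.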
from aslib-spec.
+1
from aslib-spec.
i mean we invested lots of time to write scripts for exactly that purpose... web-based...
- I'm not always online.
- I'm faster with my local files than finding the URL and then clicking through the web interface.
from aslib-spec.
Ok, so you think that
- name: Basic
  provides:
    - vars_clauses_ratio
    - POSNEG_RATIO_CLAUSE_mean
    - POSNEG_RATIO_CLAUSE_coeff_variation
    - POSNEG_RATIO_CLAUSE_min
    - POSNEG_RATIO_CLAUSE_max
    - POSNEG_RATIO_CLAUSE_entropy
    - VCG_CLAUSE_mean
    - VCG_CLAUSE_coeff_variation
    - VCG_CLAUSE_min
    - VCG_CLAUSE_max
    - VCG_CLAUSE_entropy
    - UNARY
    - BINARYp
    - TRINARYp
  requires: Pre
is harder to read than
- name: Basic
  provides: vars_clauses_ratio,POSNEG_RATIO_CLAUSE_mean,POSNEG_RATIO_CLAUSE_coeff_variation,POSNEG_RATIO_CLAUSE_min,POSNEG_RATIO_CLAUSE_max,POSNEG_RATIO_CLAUSE_entropy,VCG_CLAUSE_mean,VCG_CLAUSE_coeff_variation,VCG_CLAUSE_min,VCG_CLAUSE_max,VCG_CLAUSE_entropy,UNARY,BINARYp,TRINARYp
  requires: Pre
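For what it's worth, the bracketed flow-style list suggested earlier and the block-style list parse to exactly the same data structure, so readability is the only thing at stake. A quick sketch (assuming PyYAML; only two of the feature names are used to keep it short):

```python
import yaml  # PyYAML, assumed available

# Block-style sequence (one item per line).
block = """
- name: Basic
  provides:
    - vars_clauses_ratio
    - POSNEG_RATIO_CLAUSE_mean
  requires: Pre
"""

# Flow-style sequence (everything on one line), also valid YAML.
flow = """
- name: Basic
  provides: [vars_clauses_ratio, POSNEG_RATIO_CLAUSE_mean]
  requires: Pre
"""

# Both spellings yield the same Python data structure.
assert yaml.safe_load(block) == yaml.safe_load(flow)
```

Note that the comma-separated string without brackets would parse as a single string, not a list, so the brackets are what makes it proper YAML.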
from aslib-spec.
Yes, but in the end, I don't feel strongly about this.
So, I can also live with the first format if we don't have a nice way to automatically generate the second format.
from aslib-spec.
Ok, I've updated the spec, converted all the scenarios and updated the R code.
@mlindauer Could you please update the Python code/checker?
from aslib-spec.
I'm on vacation for the next two weeks. I will do it afterwards.
from aslib-spec.
Ok, thanks. No rush :)
from aslib-spec.
It just occurred to me that we should also have a look at the feature_runstatus.arff files for instances that are presolved. The spec doesn't say what should happen to dependent feature steps in this case, and the data is inconsistent. For example, for ASP, feature steps that depend on one that presolved seem to be listed as "presolved" as well, but the costs aren't given, implying that they weren't actually run. For the SAT data sets, the runstatus of feature steps that depend on one that presolved is listed as unknown (which probably makes more sense in this case).
from aslib-spec.
Hi,
I started to implement the new description.txt parser and I found an issue.
According to the spec, "performance_measures" specifies a list.
But looking at some of the description.txt files, e.g., ASP-POTASSCO, it is only a string:
performance_measures: runtime
So, the format according to YAML should be:
performance_measures:
- runtime
The same issue holds for "maximize" and "performance_type".
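A sketch of how a parser could tolerate both spellings instead of rejecting the scalar form (assuming Python/PyYAML; `as_list` is a hypothetical helper name, not part of the existing checker):

```python
import yaml  # PyYAML, assumed available

def as_list(value):
    """Return a list whether the YAML field held a scalar or a proper list."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

# "performance_measures: runtime" parses as a plain string ...
scalar_doc = yaml.safe_load("performance_measures: runtime")
# ... while the spec-conforming variant parses as a list.
list_doc = yaml.safe_load("performance_measures:\n  - runtime")

assert as_list(scalar_doc["performance_measures"]) == ["runtime"]
assert as_list(list_doc["performance_measures"]) == ["runtime"]
```

Whether the checker should normalize like this or insist on the list form (and have the files fixed) is of course the decision being made here.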
from aslib-spec.
The same issue applies to feature_step->"requires" in some scenarios.
In ASP-POTASSCO it is fine:
Dynamic-1:
  requires:
    - Static
In SAT11-HAND it is not OK:
Basic:
  requires: Pre
from aslib-spec.
I updated the checker tool (and flexfolio).
Right now, the checker tool complains about the issues raised above.
from aslib-spec.
Thanks, good catch. Could you fix the files please?
from aslib-spec.
Hi, I fixed it. All scenarios in the master branch are now compatible with the checker tool again.
However, I found another issue.
At some point, we agreed that we need an order of the feature steps. This was implicitly given by the order of the feature steps in the description.txt.
Since we use YAML now, we encode the "feature_steps" as dictionaries:
feature_steps:
  Pre:
    provides:
      - nvarsOrig
      [...]
  Basic:
    requires:
      - Pre
Parsing this file (at least with Python) will give you a dictionary without a defined order of the feature steps. So we either have to change "feature_steps" to a list (which would look unintuitive and ugly imho) or we add another list, such as "feature_step_order".
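To illustrate the second option, a sketch of what an extra "feature_step_order" key could look like and how a consumer could cross-check it (assuming PyYAML; the key name is just the suggestion from above, and note that newer Python versions happen to preserve insertion order in dicts, which is exactly the kind of parser-specific behavior we probably shouldn't rely on):

```python
import yaml  # PyYAML, assumed available

doc = yaml.safe_load("""
feature_step_order: [Pre, Basic]
feature_steps:
  Pre:
    provides: [nvarsOrig]
  Basic:
    requires: [Pre]
""")

# The mapping alone carries no guaranteed order across YAML parsers,
# so the explicit list records how the steps were actually run.
order = doc["feature_step_order"]
assert order == ["Pre", "Basic"]

# Sanity check: the order list and the mapping name the same steps.
assert set(order) == set(doc["feature_steps"])
```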
What do you think?
Cheers,
Marius
from aslib-spec.
Just remind me what the order is needed for? You can derive any ordering constraints from the provides/requires, right?
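Deriving one admissible ordering from "requires" alone could look like this (a sketch using a Kahn-style topological sort over hypothetical feature-step data, not code from the repository):

```python
from collections import deque

# Hypothetical feature steps with their "requires" dependencies.
requires = {
    "Pre": [],
    "Basic": ["Pre"],
    "Dynamic-1": ["Pre"],
}

def topological_order(requires):
    """Return one ordering in which every step comes after its requirements."""
    indegree = {step: len(deps) for step, deps in requires.items()}
    dependents = {step: [] for step in requires}
    for step, deps in requires.items():
        for dep in deps:
            dependents[dep].append(step)
    ready = deque(s for s, d in indegree.items() if d == 0)
    order = []
    while ready:
        step = ready.popleft()
        order.append(step)
        for nxt in dependents[step]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

order = topological_order(requires)
assert order[0] == "Pre"  # "Pre" has no requirements, so it must come first
```

Of course this only yields *an* admissible order, not necessarily the one actually used when the data was generated.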
from aslib-spec.
If I remember correctly, the problem was the presolved feature steps.
1. The features were computed in an (unknown) order, and if a feature step presolved an instance, the remaining feature steps were not computed (at least true for the ASP and SAT scenarios).
2. We discussed the definition of the oracle at some point. If we want to include the feature steps as a possible algorithm to solve an instance (important for some scenarios) in the oracle definition, we have to know the right order of the feature steps, or else we have to solve an NP-hard problem (i.e., try all possible orders of feature steps) to find an optimal order.
3. (There is more than one possible order if we only consider "requires".)
from aslib-spec.
1. Sounds to me like we should have a feature runstatus "not computed" then -- using the order to derive this is quite similar to how the dependencies were encoded: not at all obvious or intuitive, and bound to trip somebody up.
2. I remember -- what conclusion did we come to? It seems fair enough to me that the oracle would be able to change the ordering of feature steps.
3. I don't see that as a disadvantage.
from aslib-spec.
1. Saying some features are "not computed" sounds even more unintuitive. Without an order of the feature steps, it is not explained why they were not computed. And we assume so far that the data is complete as long as there are no good reasons for missing data.
2. I think we postponed the discussion and simply used the old definition of the oracle without consideration of feature steps.
from aslib-spec.
Ok, so let's have a feature status "not computed because instance presolved by previous feature step". We don't need to know what that feature step was, do we?
from aslib-spec.
OK, I agree that we should have something like "not computed because instance presolved by previous feature step".
However, if we have such a status, I still think we should have some more information about the order of the feature steps -- at least the order in which they were generated; the user can still decide to use another order.
The arguments for such information are:
- We would know which step was responsible for this new status
- The optimal order of the feature steps (-> presolved status) is exactly the order in which the features were generated. I don't see an argument why the users should try to figure this out by themselves if we already know it. (In the same way, it is also important for a new oracle definition as mentioned before.)
from aslib-spec.
Should the order of the feature steps used when generating the data for the scenarios be part of the metadata?
from aslib-spec.
Yes?
from aslib-spec.
Ok, then let's do that.
from aslib-spec.