
Comments (10)

chrbertsch avatar chrbertsch commented on July 20, 2024

I do not think the displayed results are absurd, but perhaps they need to be better explained (and the displayed numbers perhaps reconsidered).

Exporting and importing tool vendors can upload results for different versions of their tools (which by the way is very welcome!)

E.g. MapleSim has uploaded importing results for four tool versions at https://github.com/modelica/fmi-cross-check/tree/master/results/2.0/me/win64/MapleSim, but for each exporting tool only results for one tool version are reported.

Where specifically do you see the problem?
The scripts that generate the results are open for inspection (https://github.com/modelica/fmi-cross-check/blob/master/result_tables.py), and constructive suggestions for improvements are welcome.

I suggest adding documentation to the result table pages, such as https://fmi-standard.org/cross-check/fmi2-me-win64/, explaining what the displayed numbers mean.

from fmi-cross-check.

lochel avatar lochel commented on July 20, 2024

Thanks @chrbertsch, you are right and I missed that each importing tool is available in different versions. However, that makes the entire table useless.

What is the purpose of the table? The table should provide a simple overview of how the tools compare to each other and which tools are compatible in terms of import/export. However, you cannot use the current table to compare the numbers of one tool to those of any other tool, because the numbers of uploaded versions differ and are not displayed. Any tool can reach any number by simply uploading the same results for different versions.

I propose that the table should only display the results of a certain importing tool version (either the most recent version, or all versions separately). Today, the provided information is misleading and useless.
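A minimal sketch of the latest-version filtering I have in mind (the record layout and the `latest_only` helper are made up for illustration; `result_tables.py` uses its own data structures):

```python
# Hypothetical result records: (importing_tool, tool_version, exporting_tool, passed)
results = [
    ("Dymola", "2016", "MapleSim", 3),
    ("Dymola", "2017", "MapleSim", 6),
    ("Dymola", "2017", "FMUSDK", 3),
]

def latest_only(records):
    """Keep only the rows belonging to the newest version of each importing tool.

    Versions are compared as plain strings here, which happens to work for
    year-based names like '2016' < '2017'; a real implementation would need
    a proper version comparison.
    """
    latest = {}
    for tool, version, _exporter, _passed in records:
        if tool not in latest or version > latest[tool]:
            latest[tool] = version
    return [r for r in records if r[1] == latest[r[0]]]
```

With the sample data above, only the two "2017" rows survive, so every importing tool contributes exactly one version to the table.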

chrbertsch avatar chrbertsch commented on July 20, 2024

The currently provided information is not useless, but it has to be better explained.
It is very beneficial if tool vendors upload results for different (and the latest) versions of their tools, and this should be honoured and reflected.

Changing the result display to only the latest tool versions has been discussed in #2. This has not been realized yet. Perhaps we could address this with the help of the Backoffice (@GallLeo).

lochel avatar lochel commented on July 20, 2024

@chrbertsch I am not arguing against uploading results for different tool versions. I just state that the results cannot be interpreted given the provided information. That makes the displayed results indeed useless and, even worse, misleading.

If we can agree on that then we can go ahead in a constructive attempt to improve the display of the results.

andreas-junghanns avatar andreas-junghanns commented on July 20, 2024

@lochel : Can we keep the tone of this discussion less heated and more civil, please? E.g. the heading of this issue is very close to offensive to those who have worked hard to get XC to where it currently is, whatever flaws it might still have.

lochel avatar lochel commented on July 20, 2024

I don’t quite know what you mean. This is a discussion on the issue and nothing else. I am very concerned by the presented results and the process which maintains the cross-check. I raised several issues, both publicly and privately, to you and @t-sommer, but got basically no response on the addressed issues.

Regarding the title: @andreas-junghanns, don’t you think that the presented results are indeed very concerning and not reflecting the aim of the cross-check project? I would like to have a discussion on the topic.

I would like to find a constructive way forward to improve the current project status and to make the cross-check a fair and valuable tool for all participants.

lochel avatar lochel commented on July 20, 2024

I opened two pull requests in order to address the issue: the first one filters out non-compliant tests, which until now were still listed in the results. The other one breaks down the importing tools by version in order to provide a good overview of the results. This way, you can easily follow the progress of all the different tools, and you can compare the tools with each other.

lochel avatar lochel commented on July 20, 2024

This is just to illustrate the changes I propose. I selected Dymola as an example, because it supports most of the cross-check and provides results for different versions.

The current homepage shows the following numbers, as you can see from my initial post:

| importing tool | CATIA | DS - FMU Export from Simulink | Dymola | FMI Toolbox for MATLAB/Simulink | FMUSDK | MapleSim | Test-FMUs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dymola | 3 | 24 | 49 | 3 | 24 | 0 | 0 |

Whereas the table with my changes would actually provide much more useful information:

| importing tool | CATIA | DS - FMU Export from Simulink | Dymola | FMI Toolbox for MATLAB/Simulink | FMUSDK | MapleSim | Test-FMUs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dymola (2015FD01) | 0 | 3 | 3 | 0 | 3 | 0 | 0 |
| Dymola (2016) | 0 | 5 | 9 | 0 | 3 | 0 | 0 |
| Dymola (2016FD01) | 0 | 7 | 15 | 0 | 6 | 0 | 0 |
| Dymola (2017) | 3 | 9 | 22 | 3 | 6 | 0 | 0 |

For example, the entry Dymola/Dymola didn't make too much sense in the first table: it shows 49 tests, even though the cross-check only contains 32 valid examples. The second table actually shows the same 49 tests, but broken down by the respective Dymola versions. That way, one can easily see what is supported, what got improved, and how things compare to other tools.
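The difference between the two tables boils down to the grouping key: summing over all versions counts the same importer/exporter pairing once per uploaded version, while grouping by (importing tool, version) keeps each count interpretable. A hedged sketch with made-up records:

```python
from collections import Counter

# Hypothetical passed-test records: (importing_tool, version, exporting_tool)
passed = [
    ("Dymola", "2016", "Dymola"),
    ("Dymola", "2017", "Dymola"),
    ("Dymola", "2017", "MapleSim"),
]

# Aggregating over all versions: the Dymola/Dymola pairing is counted twice,
# once per uploaded version, which inflates the number shown in the table.
merged = Counter((imp, exp) for imp, _ver, exp in passed)

# Grouping by (importing tool, version) instead keeps each cell bounded by
# the number of valid examples in the cross-check.
per_version = Counter((imp, ver, exp) for imp, ver, exp in passed)
```

Here `merged[("Dymola", "Dymola")]` is 2 even though each version only passed one Dymola-exported test, which is exactly the 49-vs-32 effect above.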

lochel avatar lochel commented on July 20, 2024

I am very glad to see that #132 was merged. More than 19% of the green badges were wrongly awarded and have now vanished from the tool page, because the numbers of counted tests in the detailed reports dropped considerably:

fmi2-me-win64:   14% wrongly counted results (previously counted:  730, actually valid:  640)
fmi2-me-linux64: 23% wrongly counted results (previously counted:   21, actually valid:   17)
fmi2-cs-win64:   17% wrongly counted results (previously counted: 1258, actually valid: 1073)
fmi2-cs-linux64: 26% wrongly counted results (previously counted:   82, actually valid:   65)
...
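Incidentally, the quoted percentages appear to be the drop expressed relative to the valid count, with the fraction truncated rather than rounded; a quick check reproduces all four figures:

```python
def wrongly_counted_pct(previous, valid):
    """Share of wrongly counted results, relative to the valid count.

    int() truncates, which matches the figures quoted above
    (e.g. 4/17 = 23.5% is reported as 23%).
    """
    return int(100 * (previous - valid) / valid)

for label, prev, valid in [
    ("fmi2-me-win64", 730, 640),
    ("fmi2-me-linux64", 21, 17),
    ("fmi2-cs-win64", 1258, 1073),
    ("fmi2-cs-linux64", 82, 65),
]:
    print(f"{label}: {wrongly_counted_pct(prev, valid)}%")
```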

ghorwin avatar ghorwin commented on July 20, 2024

Hi all,
it's been a while since I've worked on this issue, so please excuse my late comment on the matter. If I recall correctly, I raised pretty much the same concerns with Torsten some 2-3 years ago. In my opinion, merging "passed" counts from different versions of a software is indeed misleading. While I see Torsten's argument that actively participating tool vendors who frequently update their results should be rewarded in some way, this should not taint the message of the comparison table.

Both suggestions discussed so far are ok for me (i.e. list all individual versions, or only the latest one). Showing only the most recent version in the basic table probably makes the most sense for users who want a fair comparison of tool capabilities. Also, since tools are generally expected to improve over time, the number of passed tests will increase with newer versions, so the comparison to other tools will remain fair.

Feature request: it would be nice to have an additional column showing the number of versions of the software that results were submitted for. Interested users could then click on an underlying link and get a detailed per-version view of the table for that software. That would IMHO be the best compromise.
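That extra column could be derived from the same submission records; a minimal sketch (the record layout and `versions_submitted` helper are hypothetical, not part of `result_tables.py`):

```python
# Hypothetical submission records: (importing_tool, tool_version)
submissions = [
    ("Dymola", "2016"),
    ("Dymola", "2017"),
    ("MapleSim", "2016.1"),
]

def versions_submitted(records):
    """Count the distinct versions each importing tool submitted results for."""
    seen = {}
    for tool, version in records:
        seen.setdefault(tool, set()).add(version)
    return {tool: len(versions) for tool, versions in seen.items()}
```

The resulting per-tool counts would then feed the extra column, with each count linking to the per-version breakdown.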
