greendelta / epd-editor

This is an editor for ILCD data sets with EPD format extensions.
License: Mozilla Public License 2.0
Some empty <common:commissionerAndGoal/> elements have been spotted in the wild. They should not be there, as they cause unnecessary complaints from the validation.
Let the user edit indicators and modules in EPD profiles
This can be off by default, but when the user switches it on, it synchronizes the reference data for all profiles that have a reference data URL.
Especially when updating the product flow, it would be more intuitive from a user's perspective if the associated process dataset were updated automatically as well (and got a version number increment), reflecting the version number increment in the product flow.
In a way, this is the reverse of what's described in #9. We'll need a unified strategy here that provides a choice and appropriate feedback to the user.
Currently only the Windows builds are working. Linux probably works, but the macOS build does not.
When creating multiple EPD datasets, certain information such as compliance, owner, background database etc. has to be individually added for each dataset. It would be a great improvement to have some sort of customizable dataset template that can contain this information. Thus, when creating a new EPD dataset, optionally the template can be used and the dataset will be pre-filled with that information.
Once there are EPD datasets with the same name but from multiple reference years (thus having a different UUID), they are not distinguishable anymore in the navigation tree, leading to a less-than-optimal UX.
In order to improve this, the reference year could be added after the displayed name in parentheses, e.g.
Foomaterial (2019)
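Composing such a label could be sketched like this (the helper name is illustrative, not the editor's actual code):

```java
public class NavigationLabels {

  /** Appends the reference year in parentheses when one is known. */
  static String label(String name, Integer referenceYear) {
    if (referenceYear == null)
      return name;
    return name + " (" + referenceYear + ")";
  }

  public static void main(String[] args) {
    System.out.println(label("Foomaterial", 2019)); // Foomaterial (2019)
  }
}
```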
It would be nice to select a single data set and export it (optionally with all dependencies) into a zip file.
When including dependency datasets in an operation like upload or dependency scan, the indicator datasets should not be included, as they are usually stable over a long time and are not supposed to be edited by users anyway. Plus the new LCIA methods from EF are huge.
The 2.x validation library already exempts all reference objects from validation by default.
mvn package currently fails with this error:

[ERROR] Failed to execute goal on project dependencies:
Could not resolve dependencies for project epd-editor:dependencies:pom:1.0.0:
Failed to collect dependencies at com.okworx.ilcd.validation.profiles:EPD-1.1-OEKOBAUDAT:jar:1.0.38:
Failed to read artifact descriptor for com.okworx.ilcd.validation.profiles:EPD-1.1-OEKOBAUDAT:jar:1.0.38:
Could not find artifact com.okworx.ilcd.validation.profiles:validation-profiles:pom:1.0.0 in central (https://repo.maven.apache.org/maven2)
@okworx @diambakus the reason is probably that the EPD-1.1-OEKOBAUDAT artifact has a parent reference to com.okworx.ilcd.validation.profiles:validation-profiles:1.0.0
but it cannot find a matching artifact in the central repo? Not sure how this should work with relative path references to a parent:
<parent>
  <groupId>com.okworx.ilcd.validation.profiles</groupId>
  <artifactId>validation-profiles</artifactId>
  <version>1.0.0</version>
  <relativePath>../..</relativePath>
</parent>
@okworx I could not find the profiles repository, so installing from source does not work. (The scm-url from the pom points to a non-public repo? https://bitbucket.org/okusche/ilcdvalidation-profiles.git)
It would be nice to reflect changes in the referenced data sets... on the other hand, we would modify a data set when opening it -> should we then increase the version? Mark it as dirty?
Currently, when external docs are stored in a ZIP file for export, they are simply written to the external_docs folder.
Collisions may occur if a user uses the same file name for different external docs, which will cause a conflict during export. Consider the following example:

Source dataset A (UUID c50ec8b7-691d-458b-8379-a8d787e4e4b4): flow_chart.png
Source dataset B (UUID 9bf38b43-d3f5-4f94-99b3-178531ea74bb): flow_chart.png

When these datasets are exported to the file system, both files flow_chart.png will be written to the external_docs folder, one of them overwriting the other.
We can't simply circumvent this at export time by altering file names, since we'd also have to alter the reference in the corresponding source dataset in this case, which is not an option.
In order to ensure non-ambiguous file names for external docs, in practice it has proven effective to append the UUID of the source dataset to the attached external docs. For the example above, this would look like this:

Source dataset A (UUID c50ec8b7-691d-458b-8379-a8d787e4e4b4): flow_chart_c50ec8b7-691d-458b-8379-a8d787e4e4b4.png
Source dataset B (UUID 9bf38b43-d3f5-4f94-99b3-178531ea74bb): flow_chart_9bf38b43-d3f5-4f94-99b3-178531ea74bb.png

Renaming the external doc file would have to happen in the application at the time the file is attached to the source. The user could be offered the option to use the original file name instead (for example if they are certain that the file name is and will always remain non-ambiguous, e.g. because it already contains a UUID), but the described approach should probably be active by default.
This way, a collision could be avoided when exporting to a ZIP file.
This is not a critical issue but as the user base of the tool grows, this would be good to have in the mid-term.
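The renaming scheme could be sketched like this (the method name is illustrative; the editor's actual attachment code may differ):

```java
import java.util.UUID;

public class ExternalDocNames {

  /** Inserts the source dataset's UUID before the file extension,
      e.g. flow_chart.png -> flow_chart_<uuid>.png. */
  static String withUuid(String fileName, UUID sourceUuid) {
    int dot = fileName.lastIndexOf('.');
    if (dot < 0)
      return fileName + "_" + sourceUuid;
    return fileName.substring(0, dot) + "_" + sourceUuid + fileName.substring(dot);
  }

  public static void main(String[] args) {
    UUID id = UUID.fromString("c50ec8b7-691d-458b-8379-a8d787e4e4b4");
    System.out.println(withUuid("flow_chart.png", id));
    // flow_chart_c50ec8b7-691d-458b-8379-a8d787e4e4b4.png
  }
}
```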
With large numbers of datasets, it may be useful for users to be able to filter in the navigation view using a simple search field that filters datasets by name.
Even though there are several validation messages, for each dataset only one message is shown.
There are problems with the import of Excel files. The export files do not always have the same structure, so it is difficult to use templates to import the LCA data.
This issue tracks the progress for the extension to optionally add content declarations to an EPD data set. A content declaration describes the composition of the EPD's product as a hierarchy of components, materials, and substances. All levels in this hierarchy are optional (e.g. a content declaration could contain only substances).
The content declaration is stored in the namespace http://www.indata.network/EPD/2019 according to the current ILCD+EPD schema spec., e.g.:

<epd2:contentDeclaration xmlns:epd2="http://www.indata.network/EPD/2019">
  <epd2:component>
    <epd2:name xml:lang="en">wooden panel</epd2:name>
    <epd2:weightPerc epd2:value="100.0"/>
    <epd2:material epd2:recyclable="0.0" epd2:recycled="0.0" epd2:renewable="100.0">
      <epd2:name xml:lang="en">Spruce</epd2:name>
      <epd2:weightPerc epd2:lowerValue="97.0" epd2:upperValue="99.0"/>
      <epd2:mass epd2:value="0.99"/>
      <epd2:substance epd2:CASNumber="123" epd2:packaging="false">
        <epd2:name xml:lang="en">W23</epd2:name>
        <epd2:weightPerc epd2:lowerValue="10.0" epd2:upperValue="60.0"/>
        <epd2:mass epd2:lowerValue="9.0" epd2:upperValue="400.0"/>
      </epd2:substance>
    </epd2:material>
    ...
  </epd2:component>
</epd2:contentDeclaration>
"Expand all" and "collapse all" features would be nice in order to quickly expand or collapse the navigation tree.
Could be either as entries in the context menu of a root node in the navigation tree or in a tool bar within the navigation view.
If something went wrong, the user should be able to rebuild the data index via a user interface function (somewhere in the advanced feature menus). Also, when the "delete all" function fails, the index should be created again.
Currently, when data is exported as a ZIP file, everything is exported every time. In a scenario where only one dataset is edited and an export is needed, it would be nice to be able to export only that one dataset (optionally including its dependencies).
Test case:
Observed behavior in XML output:

<epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013">
  <epd:margins>10.0</epd:margins>
  <description xmlns="http://www.iai.kit.edu/EPD/2013">foo</description>
</epd:safetyMargins>

Expected behavior:

<epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013">
  <epd:margins>10.0</epd:margins>
  <epd:description xmlns="http://www.iai.kit.edu/EPD/2013" xml:lang="de">foo</epd:description>
</epd:safetyMargins>
This would be an easy way to allow the user to create a sort of template dataset which contains all of the information that is usually common to all datasets created in an organisation or in one batch.
This functionality could be added to the context menu for datasets.
Upon duplication of a dataset, only the UUID of the duplicate would need to be newly generated.
This is currently not so relevant in the EPD context, but it would be required when using the editor with standard ILCD packages.
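The duplication step could be sketched as follows (the record is only an illustrative stand-in for the real ILCD model; only the UUID is regenerated):

```java
import java.util.UUID;

public class Duplicates {

  // Illustrative stand-in for a dataset; the real ILCD model is far richer.
  record Dataset(UUID uuid, String name, String owner) {}

  /** Returns a copy of the dataset in which only the UUID is newly generated. */
  static Dataset duplicate(Dataset d) {
    return new Dataset(UUID.randomUUID(), d.name(), d.owner());
  }

  public static void main(String[] args) {
    Dataset a = new Dataset(UUID.randomUUID(), "Foomaterial EPD", "ACME");
    Dataset b = duplicate(a);
    System.out.println(b.name().equals(a.name()) && !b.uuid().equals(a.uuid())); // true
  }
}
```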
It is currently very cumbersome to actually read the validation messages. A simple solution would be to automatically set the column width for the "message" column to the length of the longest entry, so that the value can be read using the horizontal scroll bar of the table.
Currently, the default language when using the editor from the binary distribution is German. For non-German speaking users, it may be a bit cumbersome to discover the settings where they can set it to English because everything is in German.
Do we possibly have a way of making it a little bit easier for international users? One option would be to have the binary distribution be in English by default.
This is what the EPD-process XML looks like when no subtype has been selected:
<LCIMethodAndAllocation>
  <typeOfDataSet>EPD</typeOfDataSet>
  <common:other/>
</LCIMethodAndAllocation>

Ideally it should rather look like this:

<LCIMethodAndAllocation>
  <typeOfDataSet>EPD</typeOfDataSet>
</LCIMethodAndAllocation>

The same can be observed with flow datasets. A newly created flow dataset looks like this:

<f:dataSetInformation>
  <common:UUID>91f638b3-fd72-48d5-8345-96e90df54fbf</common:UUID>
  <f:name>
    <f:baseName xml:lang="de">Steinbutter</f:baseName>
  </f:name>
  <f:classificationInformation/>
  <common:other/>
</f:dataSetInformation>

But it should rather look like this:

<f:dataSetInformation>
  <common:UUID>91f638b3-fd72-48d5-8345-96e90df54fbf</common:UUID>
  <f:name>
    <f:baseName xml:lang="de">Steinbutter</f:baseName>
  </f:name>
  <f:classificationInformation/>
</f:dataSetInformation>
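A minimal DOM-based sketch of how the empty <common:other/> extension elements could be pruned before serializing, while leaving other empty elements such as <f:classificationInformation/> untouched. This is not the editor's actual serialization code, and the namespace URI in the demo is only illustrative:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class PruneEmptyOther {

  /** Recursively removes "other" elements that have no children and no attributes. */
  static void prune(Element parent) {
    NodeList kids = parent.getChildNodes();
    // iterate backwards so removals do not shift the remaining indices
    for (int i = kids.getLength() - 1; i >= 0; i--) {
      Node n = kids.item(i);
      if (n instanceof Element e) {
        prune(e);
        if ("other".equals(e.getLocalName())
            && !e.hasChildNodes() && !e.hasAttributes())
          parent.removeChild(e);
      }
    }
  }

  static Document parse(String xml) throws Exception {
    DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
    f.setNamespaceAware(true);
    return f.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
  }

  public static void main(String[] args) throws Exception {
    Document doc = parse(
        "<LCIMethodAndAllocation xmlns:common=\"http://lca.jrc.it/ILCD/Common\">"
        + "<typeOfDataSet>EPD</typeOfDataSet><common:other/>"
        + "</LCIMethodAndAllocation>");
    prune(doc.getDocumentElement());
    System.out.println(doc.getElementsByTagName("common:other").getLength()); // 0
  }
}
```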
Long messages are hard to read in the table and there is currently no way to save them...
In the editor for EPD datasets, under "Declared product" there's a caption "Product". Change this to "Product flow" (Produktfluss in DE) for better orientation for users.
BBSR requested to include their name and logo in the splash screen, they will follow up on this and provide information and artwork.
The current hierarchical navigation for datasets could be complemented with an alternative flat navigation (just like in the Eclipse IDE), so users can choose which is best for their individual use case.
After changing the dataset language in the settings dialog, the navigation tree still shows the names of the datasets in the previously selected language.
The tree should be refreshed and show the names in the newly selected language upon saving the settings dialog.
There's a bug in the Jersey library which causes non-ASCII file names to be garbled: eclipse-ee4j/jersey#3784
As a result, the attachments of a source dataset such as this one cannot be transferred to the server correctly.
I suggest adding a warning for the user above this UI element in case any of the files has non-ASCII characters in its name, recommending to use only letters, numbers, underscores and dashes.
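Such a warning could be driven by a simple check along these lines (names are illustrative, not the editor's actual code):

```java
public class FileNameCheck {

  // Letters, digits, underscores, dashes, and dots only — mirrors the
  // recommendation above; adjust the character set as needed.
  static boolean isSafeForUpload(String fileName) {
    return fileName.matches("[A-Za-z0-9._-]+");
  }

  public static void main(String[] args) {
    System.out.println(isSafeForUpload("flow_chart.png"));        // true
    System.out.println(isSafeForUpload("flu\u00dfdiagramm.png")); // false
  }
}
```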
- Indicator class with data from profileRef
- from an Indicator
- EpdProfiles.getAll(), import/export
- == comparisons for indicators and modules (because they were enums)

Test case:
<time/>
<geography>
  <locationOfOperationSupplyOrProduction/>
</geography>
<technology/>
Should possibly be multi-lang
Currently all data are stored in the data folder in the installation directory. The user has to import/export data sets in order to use them when installing a new version. A common data folder in the user's home directory would be good.
@okworx What about ~/.epd-editor?
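Resolving such a folder could be sketched as follows (the folder name follows the suggestion above; the helper name is illustrative):

```java
import java.nio.file.Path;

public class DataDir {

  /** Common data folder under the user's home directory, e.g. ~/.epd-editor. */
  static Path defaultDataDir() {
    return Path.of(System.getProperty("user.home"), ".epd-editor");
  }

  public static void main(String[] args) {
    System.out.println(defaultDataDir());
  }
}
```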
Currently, all reference data (LCIA method datasets, commonly used sources for compliance systems etc.) are always included when activating "include dependencies" for upload.
It would be nice to be able to exempt those from the above operations, for example by marking them as reference data. At least for the LCIA method datasets this is necessary, as the new method datasets for EN15804+A2 from EF are huge, which leads to operations that include a dependency scan taking a very long time. It would also waste a lot of bandwidth and time when uploading them.
As a data set can be in multiple categories, this requires a bit of thinking.
When the dataset language is set to "English", the shortDescriptions of the indicators in the generated datasets contain the German names, even though the actual indicator datasets carry names in multiple languages (en, de, es). One would expect the shortDescription to be the English one.
Test case:
<referenceToFlowDataSet type="flow data set" refObjectId="a2b32f97-3fc7-4af2-b209-525bc6426f33" uri="../flows/a2b32f97-3fc7-4af2-b209-525bc6426f33">
  <common:shortDescription xml:lang="en">Komponenten für die Wiederverwendung (CRU)</common:shortDescription>
</referenceToFlowDataSet>

but expected would be this:

<referenceToFlowDataSet type="flow data set" refObjectId="a2b32f97-3fc7-4af2-b209-525bc6426f33" uri="../flows/a2b32f97-3fc7-4af2-b209-525bc6426f33">
  <common:shortDescription xml:lang="en">Components for re-use (CRU)</common:shortDescription>
</referenceToFlowDataSet>
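One way to fix this would be to resolve the shortDescription against the requested language with a fallback, instead of always taking the first (German) entry. A sketch, assuming a simple language-to-string map (the real ILCD model stores multi-language strings differently):

```java
import java.util.Map;

public class LangStrings {

  /** Returns the entry for the requested language, falling back to any
      available entry when the language is missing. */
  static String pick(Map<String, String> byLang, String lang) {
    String s = byLang.get(lang);
    if (s != null)
      return s;
    return byLang.isEmpty() ? null : byLang.values().iterator().next();
  }

  public static void main(String[] args) {
    Map<String, String> name = Map.of(
        "de", "Komponenten für die Wiederverwendung (CRU)",
        "en", "Components for re-use (CRU)");
    System.out.println(pick(name, "en")); // Components for re-use (CRU)
  }
}
```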
When uploading a dataset, the process arrives at the destination node, but the product flows are apparently not being transferred.
An empty safetyMargin element appears in the markup, rendering the document schema-invalid.
Test case:
<common:other>
  <epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013"/>
</common:other>
Other test case:
When initiating an Upload action (and the "checkEPDsOnProductUpdates" setting is active), the same dependency check implemented in #15 that is performed when opening a dataset shall be performed to ensure the uploaded artifact is up-to-date.
Add an option in the editing window for EPD datasets to enter a functional unit instead of, or in addition to, a declared unit. That goes under quantitativeReference/functionalUnitOrOther.
This also requires that the user can set the type of the quantitative reference (quantitativeReference/@type) to either "Reference flow(s)" or "Functional unit".
The password is displayed in clear text in the dialog; instead, it should be obfuscated for security reasons.
The Q-metadata information should go into the common:other element under modellingAndValidation.
As datasets are already being created using this new feature, it needs to be ensured that existing information is not lost but automatically transferred when opening an existing dataset.
When loading a custom profile with different LCIA method names, the editor still displays the names of the LCIA methods that ship by default.
Test case: EN_15804_test.json.txt (with a .txt extension, as GitHub does not allow attaching JSON files). It contains an LCIA method named "Acidification potential".
Workaround:
Now that we have the profiles as a facility to store a URL for the reference data, it is in principle no longer necessary to ship any data with the application.
At some point we might want to remove the default data, and only ship the default profile with the application instead. Users can then decide to either get the reference data from the location in the default profile, or their own.
This would avoid having to update the data shipped with the editor and would actually make the user experience more consistent, as updated reference data needs to be retrieved from the server in any case.