
greendelta / epd-editor

This is an editor for ILCD data sets with EPD format extensions

License: Mozilla Public License 2.0

Java 92.62% Python 3.52% HTML 2.59% Shell 1.27%
epd ilcd epd-editor openlca java eclipse-rcp

epd-editor's People

Contributors

bachtranngoc, dependabot[bot], francoislerall, msrocka, okworx, thetisiboth


epd-editor's Issues

update of product flow should trigger version increment of associated process

Especially when updating the product flow, it would be more intuitive from a user's perspective if the associated process dataset would be updated as well automagically (and get a version number increment), reflecting the version number increment in the product flow.

In a way, this is the reverse of what's described in #9. We'll need a unified strategy here that provides a choice and appropriate feedback to the user.

Cross platform builds

Currently we only have the Windows builds working. Linux probably works, but the macOS build does not.

provide templates for datasets

When creating multiple EPD datasets, certain information such as compliance, owner, background database etc. has to be individually added for each dataset. It would be a great improvement to have some sort of customizable dataset template that can contain this information. Thus, when creating a new EPD dataset, optionally the template can be used and the dataset will be pre-filled with that information.

Show Reference year after name in navigation tree

Once there are EPD datasets with the same name but from multiple reference years (thus having a different UUID), they are not distinguishable anymore in the navigation tree, leading to a less-than-optimal UX.

In order to improve this, the reference year could be added after the displayed name in parentheses, e.g.

Foomaterial (2019)
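A minimal sketch of the proposed label building; the class and method names are illustrative, not the editor's actual API:

```java
// Sketch of building a navigation label that disambiguates data sets
// with identical names by appending the reference year in parentheses.
public class NavLabels {

    /** Appends the reference year in parentheses when one is known. */
    public static String label(String name, Integer refYear) {
        if (refYear == null)
            return name;
        return name + " (" + refYear + ")";
    }
}
```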

do not include indicator dataset dependencies in upload, dependency scan

When including dependency datasets in an operation like upload or dependency scan, the indicator datasets should not be included, as they are usually stable over a long time and are not supposed to be edited by users anyway. Plus the new LCIA methods from EF are huge.

The 2.x validation library already exempts all reference objects from validation by default.

Collecting the current dependencies fails

mvn package currently fails with this error:

[ERROR] Failed to execute goal on project dependencies: 
  Could not resolve dependencies for project
  epd-editor:dependencies:pom:1.0.0: 
Failed to collect dependencies at
  com.okworx.ilcd.validation.profiles:EPD-1.1-OEKOBAUDAT:jar:  
    1.0.38: Failed to read artifact descriptor for  
  com.okworx.ilcd.validation.profiles:EPD-1.1-OEKOBAUDAT:jar: 
    1.0.38: Could not find artifact com.okworx.ilcd.validation.profiles:validation-profiles:pom:1.0.0 in central (https://repo.maven.apache.org/maven2)

@okworx @diambakus the reason is probably that the EPD-1.1-OEKOBAUDAT artifact has a parent reference to com.okworx.ilcd.validation.profiles:validation-profiles:1.0.0 but it cannot find a matching artifact in the central repo? Not sure how this should work with relative path references to a parent:

<parent>
  <groupId>com.okworx.ilcd.validation.profiles</groupId>
  <artifactId>validation-profiles</artifactId>
  <version>1.0.0</version>
  <relativePath>../..</relativePath>
</parent>

@okworx I could not find the profiles repository, so installing from source does not work. (The scm URL from the POM points to a non-public repo? https://bitbucket.org/okusche/ilcdvalidation-profiles.git)

prepend source UUID to external docs' file name

Issue

Currently, when external docs are stored in a ZIP file for export, they are simply written to the external_docs folder.

Now there may be collisions if a user uses the same file name for different external docs, which will cause a conflict during export. Consider the following example:

Source dataset A (UUID c50ec8b7-691d-458b-8379-a8d787e4e4b4)

  • attached diagram for product system "electricity mix" (file name flow_chart.png)

Source dataset B (UUID 9bf38b43-d3f5-4f94-99b3-178531ea74bb)

  • attached diagram for product system "process steam" (file name flow_chart.png)

When these datasets are exported to the file system, both files flow_chart.png will be written to the external_docs folder, one of them being overwritten with the other.

We can't simply circumvent this at export time by altering file names, since we'd also have to alter the reference in the corresponding source dataset in this case, which is not an option.

Proposed solution

In order to ensure non-ambiguous file names for external docs, in practice it has proven to be effective to append the UUID of the source dataset to the attached external docs. For the example above, this would look like this:

Source dataset A (UUID c50ec8b7-691d-458b-8379-a8d787e4e4b4)

  • attached diagram for product system "electricity mix" (file name flow_chart_c50ec8b7-691d-458b-8379-a8d787e4e4b4.png)

Source dataset B (UUID 9bf38b43-d3f5-4f94-99b3-178531ea74bb)

  • attached diagram for product system "process steam" (file name flow_chart_9bf38b43-d3f5-4f94-99b3-178531ea74bb.png)

Renaming the external doc file would have to happen by the application at the time the file is attached to the source. The user could be offered the option to use the original file name instead (if for example they are certain that the file name is and will always be non-ambiguous, for example if it already contains a UUID), but probably the described approach should be active by default.

This way, a collision could be avoided when exporting to a ZIP file.
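The renaming step could look roughly like this; the class and method names are assumptions for illustration, and the UUID is inserted before the file extension as in the examples above:

```java
// Illustrative sketch: make an external doc's file name unique by
// inserting the source data set's UUID before the file extension,
// e.g. flow_chart.png -> flow_chart_<uuid>.png.
public class ExternalDocs {

    public static String uniqueName(String fileName, String sourceUuid) {
        int dot = fileName.lastIndexOf('.');
        if (dot < 0) // no extension: just append the UUID
            return fileName + "_" + sourceUuid;
        return fileName.substring(0, dot) + "_" + sourceUuid
                + fileName.substring(dot);
    }
}
```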

This is not a critical issue but as the user base of the tool grows, this would be good to have in the mid-term.

filter for navigation

With large numbers of datasets, it may be useful for users to be able to filter in the navigation view using a simple search field that filters datasets by name.
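The filtering itself could be a simple case-insensitive substring match on the name; in the actual Eclipse RCP UI this would plug into a JFace `ViewerFilter`, which is omitted here to keep the sketch self-contained and runnable:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Minimal name-filter predicate as it might back a search field in the
// navigation view; names are illustrative, not the editor's API.
public class NameFilter {

    /** Case-insensitive substring match on the data set name. */
    public static boolean matches(String name, String query) {
        if (query == null || query.isEmpty())
            return true;
        return name.toLowerCase(Locale.ROOT)
                .contains(query.toLowerCase(Locale.ROOT));
    }

    public static List<String> apply(List<String> names, String query) {
        return names.stream()
                .filter(n -> matches(n, query))
                .collect(Collectors.toList());
    }
}
```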

Import EPD data from Excel files

There are problems with the import of Excel files: the export files do not always have the same structure, so it is difficult to use templates to import the LCA data.

Add "Content declarations"

This issue tracks the progress for the extension to optionally add content declarations to an EPD data set. A content declaration describes the composition of the EPD's product as a hierarchy of components, materials, and substances. All levels in this hierarchy are optional (e.g. a content declaration could only contain substances).

  • Content declarations are written to and read from the data information extension point of an ILCD process data set under the namespace http://www.indata.network/EPD/2019 according to the current ILCD+EPD schema spec., e.g.:
<epd2:contentDeclaration xmlns:epd2="http://www.indata.network/EPD/2019">
  <epd2:component>
    <epd2:name xml:lang="en">wooden panel</epd2:name>
    <epd2:weightPerc epd2:value="100.0"/>
    <epd2:material epd2:recyclable="0.0" epd2:recycled="0.0" epd2:renewable="100.0">
        <epd2:name xml:lang="en">Spruce</epd2:name>
        <epd2:weightPerc epd2:lowerValue="97.0" epd2:upperValue="99.0"/>
        <epd2:mass epd2:value="0.99"/>
        <epd2:substance epd2:CASNumber="123" epd2:packaging="false">
            <epd2:name xml:lang="en">W23</epd2:name>
            <epd2:weightPerc epd2:lowerValue="10.0" epd2:upperValue="60.0"/>
            <epd2:mass epd2:lowerValue="9.0" epd2:upperValue="400.0"/>
        </epd2:substance>
    </epd2:material>
    ...
  </epd2:component>
</epd2:contentDeclaration>
  • The content declarations can be viewed and edited in a new tab with two separate trees for non-packaging and packaging materials:

(screenshot)

  • Items can be added or edited via the context menu (or via a double click
    on the respective item):

(screenshot)

  • In the upcoming dialog a single item can be edited:

(screenshot)

  • Depending on the element (component/material/substance) different fields
    are visible:

(screenshot)

expand/collapse options for navigation tree

"Expand all" and "collapse all" features would be nice in order to quickly expand or collapse the navigation tree.

Could be either as entries in the context menu of a root node in the navigation tree or in a tool bar within the navigation view.

Function for re-indexing the data sets

If something went wrong, the user should be able to rebuild the data index from a user interface function (somewhere in the advanced feature menus). Also, when the "delete all" function fails, the index should be created again.

add selective export of datasets

Currently, when data is exported as a ZIP file, everything is exported every time. In a scenario where only one dataset was edited and needs to be exported, it would be nice to be able to export only that one dataset (optionally including its dependencies).

missing namespace declaration, lang attribute for epd:safetyMargins/epd:description

Test case:

  • Fill in values in fields "Sicherheitszuschlag" and "Beschreibung".

Observed behavior in XML output:

<epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013">
                    <epd:margins>10.0</epd:margins>
                    <description xmlns="http://www.iai.kit.edu/EPD/2013">foo</description>
</epd:safetyMargins>

Expected behavior:

<epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013">
                    <epd:margins>10.0</epd:margins>
                    <epd:description xmlns="http://www.iai.kit.edu/EPD/2013" xml:lang="de">foo</description>
</epd:safetyMargins>

add "duplicate" functionality in context menu for datasets

This would be an easy way to allow the user to create a sort of template dataset which contains all of the information that is usually common to all datasets created in an organisation or in one batch.

This functionality could be added to the context menu for datasets.

Upon duplication of a dataset, only the UUID of the duplicate would need to be newly generated.
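The core of the action can be sketched in a few lines; the `Dataset` type here is a stand-in for the editor's actual model classes, not its real API:

```java
import java.util.UUID;

// Hedged sketch of the proposed "duplicate" action: copy the data set
// and regenerate only its UUID.
public class Duplicator {

    public static class Dataset {
        public String uuid;
        public String name;

        public Dataset copy() {
            Dataset d = new Dataset();
            d.uuid = uuid;
            d.name = name;
            return d;
        }
    }

    /** Returns a copy identical to the original except for a fresh UUID. */
    public static Dataset duplicate(Dataset original) {
        Dataset copy = original.copy();
        copy.uuid = UUID.randomUUID().toString();
        return copy;
    }
}
```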

auto-adjust column width in validation results table

It is currently very cumbersome to actually read the validation messages. A simple solution would be to automatically set the column width for the "message" column to the length of the longest entry, so that the value can be read using the horizontal scroll bar of the table.

make language English by default in releases?

Currently, the default language when using the editor from the binary distribution is German. For non-German speaking users, it may be a bit cumbersome to discover the settings where they can set it to English because everything is in German.

Do we possibly have a way of making it a little bit easier for international users? One option would be to have the binary distribution be in English by default.

empty common:other elements are generated

This is what the EPD-process XML looks like when no subtype has been selected:

<LCIMethodAndAllocation>
            <typeOfDataSet>EPD</typeOfDataSet>
            <common:other/>
</LCIMethodAndAllocation>

Ideally it should rather look like this:

<LCIMethodAndAllocation>
            <typeOfDataSet>EPD</typeOfDataSet>
</LCIMethodAndAllocation>

The same can be observed with flow datasets. A newly created flow dataset looks like this:

        <f:dataSetInformation>
            <common:UUID>91f638b3-fd72-48d5-8345-96e90df54fbf</common:UUID>
            <f:name>
                <f:baseName xml:lang="de">Steinbutter</f:baseName>
            </f:name>
            <f:classificationInformation/>
            <common:other/>
        </f:dataSetInformation>

But should rather look like this

        <f:dataSetInformation>
            <common:UUID>91f638b3-fd72-48d5-8345-96e90df54fbf</common:UUID>
            <f:name>
                <f:baseName xml:lang="de">Steinbutter</f:baseName>
            </f:name>
            <f:classificationInformation/>
        </f:dataSetInformation>
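One way to fix this on the serialization side is to prune empty `common:other` elements before the document is written. This is a generic DOM sketch, not the editor's actual serialization code; the `common` namespace URI is assumed from the ILCD format:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Removes empty <common:other/> extension elements before writing a
// data set, so they are never serialized in the first place.
public class OtherElementCleaner {

    // ILCD "common" namespace; an assumption based on the ILCD format spec.
    static final String COMMON_NS = "http://lca.jrc.it/ILCD/Common";

    /** Removes every empty <common:other> element below the given root. */
    public static void removeEmptyOther(Element root) {
        NodeList others = root.getElementsByTagNameNS(COMMON_NS, "other");
        // iterate backwards: the NodeList is live and shrinks on removal
        for (int i = others.getLength() - 1; i >= 0; i--) {
            Element other = (Element) others.item(i);
            if (!other.hasAttributes() && other.getChildNodes().getLength() == 0) {
                other.getParentNode().removeChild(other);
            }
        }
    }

    /** Small namespace-aware parse helper for the example. */
    public static Document parse(String xml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            return f.newDocumentBuilder().parse(
                    new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```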

Rename "Product" caption to "Product flow"

In the editor for EPD datasets, under "Declared product" there's a caption "Product". Change this to "Product flow" (Produktfluss in DE) for better orientation for users.

add infos to splash screen

BBSR requested to include their name and logo in the splash screen, they will follow up on this and provide information and artwork.

alternative navigation (hierarchical vs. flat)

The current hierarchical navigation for datasets could be complemented with an alternative flat navigation (just like in the Eclipse IDE), so users can choose which is best for their individual use case.

refresh navigation tree after changing dataset language

After changing the dataset language in the settings dialog, the navigation tree still shows the names of the datasets in the previously selected language.

The tree should be refreshed and show the names in the newly selected language upon saving the settings dialog.

Source dataset: non-ASCII filenames causing trouble

There's a bug in the Jersey library which causes non-ASCII file names to be garbled: eclipse-ee4j/jersey#3784

As a result, the attachments of a source dataset such as the one in the screenshot ("Bildschirmfoto 2018-08-30 um 14 28 17") cannot be transferred to the server correctly.

I suggest adding a warning to the user above this UI element in case any of the files has non-ASCII characters in its name, recommending to use only letters, numbers, underscores and dashes.
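The check behind such a warning could be a simple whitelist match; the class name is illustrative:

```java
// A possible check behind the suggested warning: flag file names that
// contain anything other than ASCII letters, digits, '_', '.', and '-'.
public class FileNameCheck {

    public static boolean isSafe(String fileName) {
        return fileName.matches("[A-Za-z0-9._-]+");
    }
}
```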

EPD profiles for indicators and modules

  • one default profile in settings
  • remove indicator enum, group enum, and mappings -> new Indicator class with data from profile
  • clean up: there are at least two functions which create a Ref from an Indicator
  • manage EPD profiles in settings (EpdProfiles.getAll(), import/export)
  • remove modules enum; load from settings
  • display modules in ProfileEditor
  • sync-function for indicator names and units in profiles (sync with local data sets)
  • set a profile as reference/default profile in settings
  • sync profile when data set language changed in settings
  • Add function for deleting (unused) profiles
  • check for == comparisons for indicators and modules (because they were enums)
  • take out indicator names from translations
  • add new translations
  • think about indicator groups (do we need them?)

empty elements in newly created dataset

Test case:

  • create new EPD
  • save
  • result: there are a few empty elements in the resulting document that cause schema validation to fail
        <time/>
        <geography>
            <locationOfOperationSupplyOrProduction/>
        </geography>
        <technology/>

Save data in user home

Currently all data are stored in the data folder in the installation directory. The user has to import/export data sets in order to use them when installing a new version. A common data folder in the user's home directory would be good.

@okworx What about ~/.epd-editor?
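Resolving such a folder is a one-liner with the standard library; the `~/.epd-editor` name is the suggestion from this issue, not a decided path:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Resolves a per-user data folder in the user's home directory.
public class Workspace {

    public static Path defaultDataDir() {
        return Paths.get(System.getProperty("user.home"), ".epd-editor");
    }
}
```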

exempt reference data from dependency scan, upload

Currently, all reference data (LCIA method datasets, commonly used sources for compliance systems etc.) are always included when activating "include dependencies" for upload.

It would be nice to be able to exempt those from above operations, for example by marking them as reference data. At least for the LCIA method datasets this is necessary, as the new method datasets for EN15804+A2 from EF are huge which lead to some operations that include a dependency scan taking a very long time. Plus it would waste a lot of bandwidth and time when uploading them.

wrong shortDescription being written to datasets

When the dataset language is set to "English", the shortDescriptions of the indicators in the generated datasets contain the German names, even though the actual indicator datasets carry names in multiple languages (en, de, es). One would expect the shortDescription to be the English one.

Test case:

  • switch "Data sets" language to English
  • restart application
  • create new EPD dataset with some indicator data
  • indicators will look like this:

<referenceToFlowDataSet type="flow data set" refObjectId="a2b32f97-3fc7-4af2-b209-525bc6426f33" uri="../flows/a2b32f97-3fc7-4af2-b209-525bc6426f33">
  <common:shortDescription xml:lang="en">Komponenten für die Wiederverwendung (CRU)</common:shortDescription>
</referenceToFlowDataSet>

but expected would be this:

<referenceToFlowDataSet type="flow data set" refObjectId="a2b32f97-3fc7-4af2-b209-525bc6426f33" uri="../flows/a2b32f97-3fc7-4af2-b209-525bc6426f33">
  <common:shortDescription xml:lang="en">Components for re-use (CRU)</common:shortDescription>
</referenceToFlowDataSet>
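The fix amounts to selecting the description in the configured data set language, with a fallback when it is missing. A sketch, assuming the multi-language names are available as a map keyed by `xml:lang` code (the type and method names are illustrative):

```java
import java.util.Map;

// Picks the shortDescription in the preferred language, falling back
// to English and then to any available entry.
public class ShortDescriptions {

    public static String pick(Map<String, String> byLang, String preferredLang) {
        String s = byLang.get(preferredLang);
        if (s != null)
            return s;
        s = byLang.get("en");
        if (s != null)
            return s;
        return byLang.isEmpty() ? null : byLang.values().iterator().next();
    }
}
```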

empty safetyMargin element written to document

An empty safetyMargin element appears in the markup, rendering the document schema-invalid.

Test case:

  • create new dataset
  • enter something in safety margins description
  • save dataset
  • clear text in safety margins description
  • result: there is still an empty element in the markup:
<common:other>
      <epd:safetyMargins xmlns:epd="http://www.iai.kit.edu/EPD/2013"/>
</common:other>

Other test case:

  • open existing dataset
  • increment version number
  • save
  • result: same as above

trigger dependency check before upload

When initiating an Upload action (and the "checkEPDsOnProductUpdates" setting is active), the same dependency check implemented in #15 that is performed when opening a dataset shall be performed to ensure the uploaded artifact is up-to-date.

Add option to enter a functional unit

Add an option in the editing window for EPD datasets to enter a functional unit instead of/in addition to a declared unit.

That goes under quantitativeReference/functionalUnitOrOther.

This also requires that the user can set the type of quantitative reference (quantitativeReference/@type) to either "Reference flow(s)" or "Functional unit".

move Q-metadata content to common:other under modellingAndValidation

The Q-metadata information should go in the common:other element under modellingAndValidation.

As datasets are already being created using this new feature, it needs to be ensured that existing information is not lost but automatically transferred when opening an existing dataset.

LCIA method names from profile are not correctly read

When loading a custom profile with different LCIA method names, the editor still displays the names of the LCIA methods that ship by default.

Test case:

  1. From a clean installation, import the attached profile (it has been attached as EN_15804_test.json.txt as GitHub does not allow JSON files). It contains an LCIA method named "Acidification potential".
  2. Open the profile in the editor.
  3. It shows a different name for AP as shown here: image

EN_15804_test.json.txt

Workaround:

  1. In the navigation view, delete all default LCIA methods.
  2. Import the new profile again.
  3. It now shows the correct names that are defined in the profile.

remove data shipping with application builds

Now that we have the profiles as a facility to store a URL for the reference data, it is in principle no longer necessary to ship any data with the application.

At some point we might want to remove the default data, and only ship the default profile with the application instead. Users can then decide to either get the reference data from the location in the default profile, or their own.

This would avoid having to update the data shipped with the editor and would actually make the user experience more consistent, as updated reference data needs to be retrieved from the server in any case.
