fairdatateam / fairdatapoint

Home Page: https://www.fairdatapoint.org

License: MIT License

Java 99.91% Dockerfile 0.09%
data-repository fair-data metadata

fairdatapoint's People

Contributors

anandgavai, arnikz, dependabot-preview[bot], dependabot[bot], egonw, ipomrawh, janslifka, kburger, larsmans, luizbonino, mareksuchanek, markwilkinson, qamodi, rajaram5, shamanou, vknaisl


fairdatapoint's Issues

Default FDP metadata created with wrong URI

In the current implementation, the default FDP metadata can also be created during the first catalog POST call. However, during that call the requested URL is something like http://localhost:8084/fdp/catalog, so the default FDP metadata ends up being created with the wrong subject URI.

Source code reference
https://github.com/DTL-FAIRData/FAIRDataPoint/blob/master/src/main/java/nl/dtls/fairdatapoint/api/controller/MetadataController.java#L343
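One way to avoid this class of bug is to derive the repository-level subject IRI from a configured persistent base URL rather than from the incoming request URL. The sketch below is illustrative only (class and method names are made up, and the configured base is assumed to come from something like application.yml), not the project's actual code:

```java
// Hypothetical sketch: build metadata subject IRIs from a configured
// persistent base URL instead of the incoming request URL, so that a
// POST to /fdp/catalog cannot leak its own path into the repository-level
// subject IRI.
import java.net.URI;

public class SubjectUriSketch {

    // Assumed to come from configuration, not from the servlet request.
    private static final URI PERSISTENT_BASE = URI.create("http://localhost:8084/fdp");

    /** Subject IRI for the repository-level (FDP) metadata: always the base. */
    static URI fdpSubject() {
        return PERSISTENT_BASE;
    }

    /** Subject IRI for a child resource, resolved against the base. */
    static URI childSubject(String relativePath) {
        // URI.resolve needs a trailing slash on the base to append a segment.
        return URI.create(PERSISTENT_BASE + "/").resolve(relativePath);
    }

    public static void main(String[] args) {
        // Even while handling POST http://localhost:8084/fdp/catalog,
        // the FDP subject stays the base URI.
        System.out.println(fdpSubject());                // http://localhost:8084/fdp
        System.out.println(childSubject("catalog/abc")); // http://localhost:8084/fdp/catalog/abc
    }
}
```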

https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl

/Users/munchdevs/Library/Java/JavaVirtualMachines/openjdk-14.0.2/Contents/Home/bin/java -Dmaven.multiModuleProjectDirectory=/Users/munchdevs/Documents/FAIR/FAIRDataPoint "-Dmaven.home=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3" "-Dclassworlds.conf=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/bin/m2.conf" "-Dmaven.ext.class.path=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven-event-listener.jar" "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=62163:/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds.license:/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds-2.6.0.jar" org.codehaus.classworlds.Launcher -Didea.version=2020.2 -DskipTests=true install
[INFO] Scanning for projects...
[INFO]
[INFO] -----------------------< nl.dtls:fairdatapoint >------------------------
[INFO] Building FairDataPoint 1.6.0
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- git-commit-id-plugin:2.2.4:revision (default) @ fairdatapoint ---
[INFO]
[INFO] --- git-commit-id-plugin:2.2.4:revision (get-the-git-infos) @ fairdatapoint ---
[INFO]
[INFO] --- rdf4j-generator-maven-plugin:0.2.0:generate (default) @ fairdatapoint ---
[INFO] Parsing https://sparontologies.github.io/datacite/current/datacite.ttl
[INFO] /Users/munchdevs/Documents/FAIR/FAIRDataPoint/target/generated-sources/nl/dtls/fairdatapoint/vocabulary/DATACITE.java already exists and overwrite is set to false, skipping
[INFO] Parsing https://raw.githubusercontent.com/re3data/ontology/master/r3dOntology.ttl
[INFO] /Users/munchdevs/Documents/FAIR/FAIRDataPoint/target/generated-sources/nl/dtls/fairdatapoint/vocabulary/R3D.java already exists and overwrite is set to false, skipping
[INFO] Parsing https://raw.githubusercontent.com/DTL-FAIRData/FDP-O/develop/fdp-ontology.owl
[INFO] /Users/munchdevs/Documents/FAIR/FAIRDataPoint/target/generated-sources/nl/dtls/fairdatapoint/vocabulary/FDP.java already exists and overwrite is set to false, skipping
[INFO] Parsing https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl
[INFO] Downloading https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl
[ERROR] Could not get file from https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl
java.io.FileNotFoundException: https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0 (HttpURLConnection.java:1928)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream (HttpURLConnection.java:1528)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream (HttpsURLConnectionImpl.java:224)
at com.github.kburger.maven.rdf4j.generator.GeneratorMojo.getFileInputStream (GeneratorMojo.java:205)
at com.github.kburger.maven.rdf4j.generator.GeneratorMojo.execute (GeneratorMojo.java:136)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:564)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
at org.codehaus.classworlds.Launcher.main (Launcher.java:47)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.971 s
[INFO] Finished at: 2020-08-19T00:19:09+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.kburger:rdf4j-generator-maven-plugin:0.2.0:generate (default) on project fairdatapoint: Could not get file from https://raw.githubusercontent.com/schemaorg/schemaorg/master/data/releases/3.4/schema.ttl -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Saying thank you! (and a short question regarding affiliation & funding)

Hello dear team,

I did not find a better way in the readme documents: just wanted to say thank you for your efforts!

The Fair Data Point looks really great and is easy to deploy. Makes a very professional impression!

A short question: are you as developers related to go-fair.org? Is there funding for you, or do you do voluntary work? I couldn't find info on that...

Wishing you all the best,
Robert

Q: Changing SPARQL query

The Question
Good to know that there is a new feature for executing SPARQL queries in version 1.15.
I see a predefined SPARQL query is provided, but it is not editable, so I cannot define my own query, as the screenshot shows.
I wonder, is it possible to freely edit the query?

[screenshot: fdpsparqlquery]

Unable to write to FDP - time parsing problem

FDP version 1.15 docker image

Describe the bug
When writing to a record using HTTP PUT, the new information is rejected due to a time-parsing error.
fdp logs (from docker):

fdp_1 | 2022-11-09 13:38:40,210 191569 [http-nio-80-exec-4] ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.time.format.DateTimeParseException: Text '2022-05-09T06:03:00.445' could not be parsed at index 23] with root cause
fdp_1 | java.time.format.DateTimeParseException: Text '2022-05-09T06:03:00.445' could not be parsed at index 23
fdp_1 | at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2052) ~[?:?]

To Reproduce
Create a compliant DCAT Dataset record. HTTP PUT that record to an existing Dataset URI.
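The failing text, '2022-05-09T06:03:00.445', is 23 characters long and carries no zone offset, which is exactly where the parse fails ("index 23"): an offset-aware parser runs out of input where it expects 'Z' or '+01:00'. A minimal stdlib reproduction (the workaround of attaching UTC is only a sketch, not necessarily what FDP should do):

```java
// Minimal reproduction of the DateTimeParseException from the FDP log,
// using only java.time. Assumption: the server parses with an
// offset-aware formatter while the incoming value has no zone offset.
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeParseException;

public class TimeParseRepro {

    /** True if offset-aware parsing rejects the text (no zone offset present). */
    static boolean failsOffsetParse(String text) {
        try {
            OffsetDateTime.parse(text);
            return false;
        } catch (DateTimeParseException e) {
            return true;
        }
    }

    /** Lenient workaround sketch: parse as a local date-time, then attach UTC. */
    static String parseAssumingUtc(String text) {
        return LocalDateTime.parse(text).atOffset(ZoneOffset.UTC).toString();
    }

    public static void main(String[] args) {
        String text = "2022-05-09T06:03:00.445"; // 23 chars, no zone offset
        System.out.println(failsOffsetParse(text)); // true: fails at index 23
        System.out.println(parseAssumingUtc(text)); // 2022-05-09T06:03:00.445Z
    }
}
```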

Q: Could you provide an example on how to use the /search endpoint?

The Question
Could you provide an example on how to use the /search endpoint?
We tried giving it SPARQL queries, but always got an empty array as a result.

Extension part
RestAPI/Swagger

Additional context
We need to query the FDP and get back a certain element. With the /search/query endpoint we could filter accordingly, but we did not get back the element.

Thank you.

Q: Is this project active? Is it possible to publish a dataset in a FAIR Data Point?

The Question

Hi @MarekSuchanek, we are wondering if the FAIR Data Point project is still active?

Currently most issues in this GitHub repository have no answers.

When we look at the list of current FDPs at https://home.fairdatapoint.org, we can see that a lot of people (including us) deployed publicly available FAIR Data Points, but got discouraged by the instability and lack of basic features (cf. other GitHub issues).

The list of published FDPs is quite striking: of the 164 FDPs that were publicly deployed (which at least shows there is a need and demand for such a service), only 24 are still active.

And when we look more closely at a few of those 24 FAIR Data Points, more than half are obviously not used in practice.

We tried to include datasets from the rare FAIR Data Points that host actual data rather than quick tests (e.g. https://w3id.org/ejp-rd/fairdatapoints/wp13 ) in a testing workflow. The problem is that FAIR Data Point is so unstable (that's why we stopped trying to deploy one on our servers) that every now and then the FDP is down and breaks our tests until it's back up.

We would like to publish a dataset in an FDP, but we don't want to maintain its deployment (it's a lot of work to fix it every week/month just to have a basic DCAT RDF description available for a dataset).

Is there a way to do so?

Extension part
Overall

Additional context

FAIRDataPoint unstable: breaking at the first restart

Describe the bugs

A few months ago we started a FAIRDataPoint in "production": we logged in, created users, manually changed the weird default users/passwords, and even added dataset metadata. It was "working" (the form to add metadata was really simple and did not help the user in adding dataset metadata, and the whole thing was not user-friendly, but it at least showed some text fields for the datasets we entered!)

That was a few months ago; now when we go to the FAIRDataPoint we are greeted with a 404, see for yourself at https://fairdatapoint.semanticscience.org/

The only error showing in the docker logs is:

fdp_1         | 2021-07-23 10:44:25,145 56916 [http-nio-80-exec-4] INFO  nl.dtls.fairdatapoint.util.HttpUtil - Modified requesed url https://fairdatapoint.semanticscience.org
fdp_1         | 2021-07-23 10:44:25,187 56958 [http-nio-80-exec-4] ERROR nl.dtls.fairdatapoint.api.controller.exception.ExceptionControllerAdvice - No metadata found for the uri 'https://fairdatapoint.semanticscience.org'

I am trying to connect with the login/password I set at the time, but I get a bad-login error, and I can't reset the password.

To Reproduce
Steps to reproduce the behavior:

  1. Start a FAIRDataPoint server with docker-compose
  2. Change the login/password
  3. It should stop working for no reason at some point and show a 404 on the main page

Expected behavior

Multiple things should be improved to make the FAIRDataPoint more stable and production-ready:

  1. Remove the default albert.einstein user/password; this is really bad practice for a service that is expected to go to production in multiple places. The admin credentials should be defined as environment variables at the start of the docker-compose (as for most services with a login), so that the admin can at least reconnect.
  2. Implement OAuth login instead of requiring users to create a new account with a new password. It would make complete sense to use ORCID OAuth here, or Solid authentication; worst case you can also use GitHub, GitLab, Google or even Facebook. If you want to create an API based on Linked Data principles, it does not make sense to create another silo of user data and ask your users to create yet another account (note that it is easy to implement OAuth with Spring, easier than creating and managing a whole user base).
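As a sketch of how little configuration Spring Boot needs for OAuth2 login (assuming the standard spring-boot-starter-oauth2-client dependency is on the classpath; GitHub is shown only as an example provider, and the environment-variable names are made up):

```yaml
# application.yml sketch: OAuth2 login via Spring Boot's oauth2-client starter.
spring:
  security:
    oauth2:
      client:
        registration:
          github:
            client-id: ${GITHUB_CLIENT_ID}
            client-secret: ${GITHUB_CLIENT_SECRET}
```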

Context
Please fill the following and eventually add additional information (e.g. about used storage in case that issue is storage-related):

  • FDP version:
Server
v1.6.0~82acd3f
23. 6. 2020, 15:57
  • Docker Engine version: Docker version 20.10.7, build f0df350
  • Operating System: CentOS 7

Here is the docker-compose.yml we use:

version: '3'
services:
    fdp:
        image: fairdata/fairdatapoint:1.6.0
        volumes:
            - ./application.yml:/fdp/application.yml:ro
    fdp-client:
        image: fairdata/fairdatapoint-client:1.6.0
        environment:
            FDP_HOST: fdp
            VIRTUAL_HOST: fairdatapoint.semanticscience.org
            LETSENCRYPT_HOST: fairdatapoint.semanticscience.org
        ports:
            - 8081:80
    mongo:
        image: mongo:4.0.12
        volumes:
            - /data/fairdatapoint/mongo/data:/data/db
    blazegraph:
        image: metaphacts/blazegraph-basic:2.2.0-20160908.003514-6
        volumes:
            - /data/fairdatapoint/blazegraph:/data

I already reported this issue when I started with FDP: #94

At the time I somehow managed to get it running by tweaking permissions of the Blazegraph volumes, but this wasn't a stable fix

Note that FDP can be deployed with multiple triplestores as backend, I used Blazegraph because this was the one pushed in the "production deployment" documentation: https://fairdatapoint.readthedocs.io/en/latest/deployment/production-deployment.html

So I was expecting Blazegraph to be the triplestore that works best with FDP, but that does not seem to be the case.

Maybe we should use another triplestore? But which one? It would be nice to have a clear idea of the exact stack that needs to be set up for production (with working persistent volumes).

dash:EnumSelectEditor converts IRIs to strings

Describe the bug
A SHACL descriptor that sets the node type as IRI, and provides a sh:in list of IRIs, will correctly display the dropdown list using only the localname, but when converted to RDF, those IRIs will become strings.

To Reproduce
Steps to reproduce the behavior:

Metadata Definition:

[
    sh:path ejp:vpConnection ;
    sh:nodeKind sh:IRI ;
    sh:in (<http://purl.org/ejp-rd/vocabulary/VPDiscoverable> <http://purl.org/ejp-rd/vocabulary/VPQueryable>) ;
    sh:minCount 0 ;
    sh:maxCount 2 ;
    sh:order 0 ;
    dash:editor dash:EnumSelectEditor ;
    dash:viewer dash:LabelViewer ;
  ] .

Result:

@prefix dcat: <http://www.w3.org/ns/dcat#>.
@prefix dct: <http://purl.org/dc/terms/>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix loc: <http://localhost:7070/>.
@prefix c: <http://localhost:7070/catalog/>.
@prefix voc: <http://purl.org/ejp-rd/vocabulary/>.

loc:new
    a dcat:DataService, dcat:Resource;
    dct:isPartOf c:3bb6ba9a-c2f4-4d15-9f3c-e495da1e7418;
    dct:publisher [ a foaf:Agent ];
    voc:vpConnection "http://purl.org/ejp-rd/vocabulary/VPDiscoverable".

Expected behavior

The output RDF, with the stringified IRI, cannot be validated against sh:nodeKind sh:IRI, and therefore it is impossible to use the EnumSelectEditor to select from a set of IRIs. This is... disappointing ;-)
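The core of the fix is a serialization decision at the point where the form value becomes an RDF term. The sketch below is an illustration only (the method names are made up, not FDP's actual code): a value selected from an sh:in list under sh:nodeKind sh:IRI must be emitted as an IRI term, not as a string literal.

```java
// Illustration of the two Turtle renderings of the same selected value:
// an IRI term (<...>) versus a plain string literal ("..."), which is
// what the bug currently produces.
public class NodeKindSerialization {

    /** Turtle rendering of an IRI object term. */
    static String asIri(String value) {
        return "<" + value + ">";
    }

    /** Turtle rendering of a plain string literal (the buggy output). */
    static String asStringLiteral(String value) {
        return "\"" + value + "\"";
    }

    public static void main(String[] args) {
        String v = "http://purl.org/ejp-rd/vocabulary/VPDiscoverable";
        System.out.println(asStringLiteral(v)); // what the editor currently emits
        System.out.println(asIri(v));           // what sh:nodeKind sh:IRI requires
    }
}
```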

Context
Please fill the following and eventually add additional information (e.g. about used storage in case that issue is storage-related):

  • FDP version: 1.16.2
  • Docker Engine version: 20.10.21
  • Operating System: Linux

Fields "Metadata Issued" and "Metadata Modified" are always set to today.

Describe the bug
The fields "Metadata Issued" and "Metadata Modified" are always set to today, even if I created the metadata a week ago and haven't touched it since.

To Reproduce
Steps to reproduce the behavior:

  1. Install the FDP (I used docker, in this case).
  2. Create a catalog.
  3. Wait for a day.
  4. The catalog's metadata date fields are set to today, not yesterday.

Expected behavior
The "Metadata Issued" field should show when the metadata (or, alternatively, the actual data) was first created. The "Metadata Modified" field should show when the metadata was last modified.

Context
Please fill the following and eventually add additional information (e.g. about used storage in case that issue is storage-related):

  • FDP version: v1.15.0~4bf6089
  • Docker Engine version: 20.10.12-0ubuntu4
  • Operating System: Ubuntu 22.04.2 LTS

Saving the edit fails (Save button loops forever)

Describe the bug
I modified the dcat:accessURL and dcat:downloadURL of the page
https://fairsfair.fair-dtls.surf-hosted.nl/distribution/cb070ea0-5061-43af-b007-4b93ea689b8a/edit
but instead of saving when I push the Save button, the status of the page (above the "Edit kurkijärvi3051-3.jpg" title) is permanently "Loading..." and the Save button stays in a spinning-circle state.

To Reproduce
Steps to reproduce the behavior:

  1. Login as pj
  2. The distribution already exists.
  3. Do some edit
  4. The Save button stays in a spinning-circle state

Expected behavior
The data is saved (or an error message is shown).

Context
Please fill the following and eventually add additional information

SPARQL interface from FDP web client not displaying the correct results

Describe the bug
The SPARQL interface of the FDP web client does not show all results from the default query.

To Reproduce

  1. Go into the SPARQL interface of the web client FDP. Ex: https://fair.healthinformationportal.eu/search?isSparql=true
  2. Click "search". It will show only 25 results sorted alphabetically (from A.... to C....)
  3. Modify the query by changing ASC(?title) -> DESC(?title)
  4. Click "search". It will show 25 results (but different from step 2) sorted in reverse alphabetical order (from U... to C...)

Expected behavior
It should show 50 results instead of 25, because of LIMIT 50 in the query and because there are more than 50 metadata records hosted under this FDP instance.
This doesn't happen if you run the default query but change ASC(?title) -> ASC(?rdfType);
then it shows 50 results every time.

Thank you

Context

  • FDP version: 1.16.2

32Bit: PRODUCTION:Linux:B32 not supported

Trying to compile on: Linux jetson-nano 4.9.140-tegra #1 SMP PREEMPT Mon Dec 9 22:47:42 PST 2019 aarch64 aarch64 aarch64 GNU/Linux

Running mvn install for spring-security-acl-mongodb gives:
Tests in error:
org.springframework.security.acls.mongodb.MongoDBAclServiceTest: this version does not support 32Bit: PRODUCTION:Linux:B32
org.springframework.security.acls.mongodb.MongoDBMutableAclServiceTest: this version does not support 32Bit: PRODUCTION:Linux:B32

POSTing catalog metadata without a dct:description breaks the client.

When posting catalog metadata without the (optional) dct:description property, the POST call is accepted without problems, as it should be. But when viewing the repository metadata in the client, an error is triggered. The problem lies in the following method, which assumes the description is a mandatory field (line 93):

public CatalogMetadataSimpleDTO toSimpleDTO(CatalogMetadata c) {
    return new CatalogMetadataSimpleDTO(
        c.getIdentifier().getIdentifier().getLabel(),
        c.getUri().toString(),
        c.getTitle().getLabel(),
        c.getDescription().getLabel(),
        c.getThemeTaxonomys().stream().map(uriMapper::toDTO).collect(Collectors.toList()),
        c.getDatasets().size(),
        c.getIssued().getLabel(),
        c.getModified().getLabel()
    );
}
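A minimal sketch of a null-safe variant of that call, assuming getDescription() returns null when the property is absent. The Literal class below is a simplified stand-in for the real metadata type, not the project's actual class:

```java
import java.util.Optional;

// Simplified stand-in for the real metadata literal type, for illustration only.
class Literal {
    private final String label;
    Literal(String label) { this.label = label; }
    String getLabel() { return label; }
}

public class NullSafeDescription {

    // Guard the optional dct:description instead of dereferencing it directly.
    static String descriptionLabel(Literal description) {
        return Optional.ofNullable(description)
                .map(Literal::getLabel)
                .orElse(""); // or null, depending on what the DTO tolerates
    }

    public static void main(String[] args) {
        System.out.println(descriptionLabel(new Literal("A test catalog"))); // A test catalog
        System.out.println(descriptionLabel(null));                          // empty string, no NPE
    }
}
```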

Implement autocomplete

Is your feature request related to a problem? Please describe.
I'm always frustrated when I need to fill in information, because there is no autocomplete:

  • I need to go to OLS/BioPortal/LOV to find out the URI of themes I want to use
  • I need to figure out which URI I should use for language (personally I would use lexvo, but I am not even sure it is the right one)

FDP is entirely, and only, about filling in metadata; without autocomplete it seems like a really unfinished and unserious product (some quick student-project UI), because helping people fill in metadata is the whole point of the application.

Additionally, when people fill in metadata for the themes/language, they might use slightly different URIs, because people tend to make mistakes when copy/pasting things from the internet if they are not assisted. This leads to poor-quality metadata. People might use a non-standard URI for the NCIT theme (e.g. https instead of http), which will make analysis of the metadata harder.

Describe the solution you'd like
Basic autocomplete for as many fields as you can, based on existing ontologies when possible. Basically, every time people need to fill in a URI that comes from an existing ontology, there should be autocomplete. Currently I only see themes and language that would really benefit, but you can probably find more.

For example, the person deploying an FDP could provide a list of ontology URLs, and those ontologies would be used by the FDP client to perform the autocomplete on different fields (e.g. the NCIT and ORDO ontologies for the themes).

Additional context
I did not describe all the fields that would require autocomplete, because it is quite obvious if you are using the application.

If you try to create multiple resources you will notice it quickly.

Not sure if I should put this issue on the fdp-client repo or here, but this seems to be the main repo for the project.

Body Request Type for POST Methods

Hi, I have been trying to make some POST calls and later found that the body requires Turtle format.
But the body comes with predefined JSON, as defined in the POJO. There is no direct way to convert the given JSON to an RDF format, or even JSON to JSON-LD. Can you please advise on how to proceed?

Does the POST method work? I found no direct leads; if you can guide me on how to send the POST request to the FAIRDataPoint, it would be helpful.

A last question, for clarity: does FDP only create and store metadata?

Because FAIRifier also creates the dataset, I'm trying to connect the two; any clarification would be much appreciated.

FDP Could not be reset

Describe the bug
When I tried to reset the FDP in a production deployment, it says it is unable to reset, as the picture shows.
The error is the same when I try to delete metadata only.

[screenshot]

Expected behavior
The user defined metadata, resources and shapes are expected to be cleared.

Context

  • FDP version: v1.12.0~859fe61
  • Docker Engine version:
  • Operating System: Win 10

Relations among dataset, distribution and data record

From the spec [1], I understood that a data record should be linked to a dataset, right?

The Dataset Metadata can have a Data Record Metadata. The Dataset Metadata Retrieval function can lead to the Data Record Metadata Retrieval function by appending the URI of the Data Record Metadata in the Dataset Metadata content.

(a) I am missing an example of this link in the spec.
(b) Which predicate should be used to link a dataset to a data record?
(c) I think it is also necessary to link the data record with the distribution, since a data record is serialized in one (or more) specific distributions of the dataset.
(d) It would be more useful if the data record example were a real use case.

[1] https://github.com/FAIRDataTeam/FAIRDataPoint/wiki/FAIR-Data-Point-Specification

Creating and updating metadata schemas

I would like to improve the form generation using SHACL and DASH (ref: https://datashapes.org/forms.html).
In particular, I've tried to specify that the value must be of datatype "float" and not a plain literal. I have tried a couple of solutions based on the documentation, but every time I get an error when saving the record.
My first try:

sh:path geo:lat ;
sh:nodeKind xsd:Float ;
sh:maxCount 1 ;
dash:editor dash:TextFieldEditor ;
dash:viewer dash:LiteralViewer ;

Any idea of a correct and working syntax? Thx P.
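For reference, in SHACL the datatype constraint is expressed with sh:datatype rather than sh:nodeKind (sh:nodeKind only accepts term kinds such as sh:IRI, sh:Literal, or sh:BlankNode, never a datatype like xsd:Float). A sketch of the shape above with that substitution, untested against FDP itself:

```turtle
# sh:nodeKind constrains the RDF term kind; sh:datatype constrains the literal datatype.
[
    sh:path geo:lat ;
    sh:nodeKind sh:Literal ;
    sh:datatype xsd:float ;
    sh:maxCount 1 ;
    dash:editor dash:TextFieldEditor ;
    dash:viewer dash:LiteralViewer ;
] .
```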

Q: How to list all datasets without having to go through each catalog?

The Question
Dear,
Is there a way to grab the list of metadata about all datasets hosted by the FDP server without having to go through each catalog?

I tried different options through the API Swagger interface. I thought that [/index/entries/all] could list all of them, but I get an error (400 Bad Request) when doing so.
I also tried with SPARQL queries, but it seems impossible to get this result without having to go through each catalog.
Do you have a query that can do it?

Thank you very much,

Note: running the FDP server's latest version, v1.16.2.
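Assuming the FDP stores dataset metadata as dcat:Dataset instances in its triplestore (which is how DCAT-based metadata is typically modeled), a query along these lines could list all datasets in one go; this is a sketch, not a verified FDP recipe:

```sparql
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>

SELECT ?dataset ?title WHERE {
    ?dataset a dcat:Dataset ;
             dct:title ?title .
}
```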

Issue accessing FAIR Data Point after first deployment

Hi, we tried to deploy the FAIR Data Point on our server using nginx-proxy and its letsencrypt companion.

We routed the fdp-client as explained in the documentation for production deployment.

We managed to reach the FDP client at the expected address: https://fairdatapoint.semanticscience.org/

But it displays 404 Not Found:

[Screenshot from 2020-11-11 12-45-03]

The Login page seems to work but there is no indication of a default startup password in the documentation.

Do you know what the next steps are to access the deployed FAIR Data Point?

Our docker-compose.yml using VIRTUAL_HOST and LETSENCRYPT_HOST:

version: '3'
services:
    fdp:
        image: fairdata/fairdatapoint:1.6.0
        volumes:
            - ./application.yml:/fdp/application.yml:ro
    fdp-client:
        image: fairdata/fairdatapoint-client:1.6.0
        environment:
            FDP_HOST: fdp
            VIRTUAL_HOST: fairdatapoint.semanticscience.org
            LETSENCRYPT_HOST: fairdatapoint.semanticscience.org
    mongo:
        image: mongo:4.0.12
        volumes:
            - ./mongo/data:/data/db
    blazegraph:
        image: metaphacts/blazegraph-basic:2.2.0-20160908.003514-6
        volumes:
            - ./blazegraph:/blazegraph-data

application.yml:

instance:
    clientUrl: https://fairdatapoint.semanticscience.org
    persistentUrl: https://fairdatapoint.semanticscience.org
security:
    jwt:
        token:
            secret-key: <128 char string>
repository:
    type: 1

The docker logs gives this:

fdp_1 | 2020-11-11 11:33:32,498 61692 [http-nio-80-exec-5] INFO  nl.dtls.fairdatapoint.util.HttpUtil - Modified requesed url https://fairdatapoint.semanticscience.org
fdp_1 | 2020-11-11 11:33:32,501 61695 [http-nio-80-exec-5] ERROR nl.dtls.fairdatapoint.api.controller.exception.ExceptionControllerAdvice - 
    No metadata found for the uri 'https://fairdatapoint.semanticscience.org'

Are we missing a step? Or is there a default login/password we are unaware of?

Thanks a lot for this project! We cannot wait to try it :)

Please add boolean support

Is your feature request related to a problem? Please describe.

Describe the solution you'd like
support for dash:boolean.

Describe alternatives you've considered
I tried using "true" as a literal, with node type boolean, but that doesn't work (the correct value of the boolean is "true"^^xsd:boolean, not just the string "true").


Docker-compose deploy on RHEL8 stops working after 24 minutes.

For a POC I first deployed your "local installation" on my laptop on Windows 10. This works and stays up.
For a workshop I deployed the same configuration on a Red Hat Enterprise Linux server for internal use. The configuration starts up and I can log in.

But after about 24 minutes the application becomes unresponsive. In the logging I get:

Startup

fdp_1 | 2020-10-08 20:18:39,374 4174 [main] INFO org.springframework.data.repository.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
fdp_1 | 2020-10-08 20:18:39,506 4306 [main] INFO org.springframework.data.repository.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 122ms. Found 8 MongoDB repository interfaces.
fdp_1 | 2020-10-08 20:18:40,533 5333 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'spring.data.mongodb-org.springframework.boot.autoconfigure.mongo.MongoProperties' of type [org.springframework.boot.autoconfigure.mongo.MongoProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:40,540 5340 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDbFactoryDependentConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDbFactoryDependentConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:40,543 5343 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDbFactoryConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDbFactoryConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:40,549 5349 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration' of type [org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:40,912 5712 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
fdp_1 | 2020-10-08 20:18:40,971 5771 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongo' of type [com.mongodb.MongoClient] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,002 5802 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoDbFactory' of type [org.springframework.data.mongodb.core.SimpleMongoDbFactory] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,032 5832 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDataConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDataConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,049 5849 [cluster-ClusterId{value='5f7f74207286e55d90907b1f', description='null'}-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:1}] to mongo:27017
fdp_1 | 2020-10-08 20:18:41,063 5863 [cluster-ClusterId{value='5f7f74207286e55d90907b1f', description='null'}-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 12]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=2648730}
fdp_1 | 2020-10-08 20:18:41,130 5930 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoCustomConversions' of type [org.springframework.data.mongodb.core.convert.MongoCustomConversions] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,263 6063 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMappingContext' of type [org.springframework.data.mongodb.core.mapping.MongoMappingContext] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,293 6093 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mappingMongoConverter' of type [org.springframework.data.mongodb.core.convert.MappingMongoConverter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,368 6168 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#301d8120' of type [org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,369 6169 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#301d8120' of type [org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean$TargetBeanObjectFactory] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,370 6170 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoTemplate' of type [org.springframework.data.mongodb.core.MongoTemplate] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,403 6203 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#29182679' of type [org.springframework.beans.factory.config.PropertiesFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,404 6204 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#29182679' of type [java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,405 6205 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#5432050b' of type [org.springframework.data.repository.core.support.PropertiesBasedNamedQueries] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,411 6211 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#2e52fb3e' of type [org.springframework.data.repository.core.support.RepositoryFragmentsFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,412 6212 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#2e52fb3e' of type [org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,540 6340 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclRepository' of type [org.springframework.data.mongodb.repository.support.MongoRepositoryFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,542 6342 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclRepository' of type [com.sun.proxy.$Proxy119] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,545 6345 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclConfig' of type [nl.dtls.fairdatapoint.config.AclConfig$$EnhancerBySpringCGLIB$$ec5531cd] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,547 6347 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'cacheConfig' of type [nl.dtls.fairdatapoint.config.CacheConfig$$EnhancerBySpringCGLIB$$643f1005] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,554 6354 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'cacheManager' of type [org.springframework.cache.concurrent.ConcurrentMapCacheManager] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,559 6359 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'permissionGrantingStrategy' of type [org.springframework.security.acls.domain.DefaultPermissionGrantingStrategy] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,561 6361 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclAuthorizationStrategy' of type [org.springframework.security.acls.domain.AclAuthorizationStrategyImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,562 6362 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclCache' of type [org.springframework.security.acls.domain.SpringCacheBasedAclCache] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,572 6372 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'lookupStrategy' of type [org.springframework.security.acls.mongodb.BasicLookupStrategy] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,572 6372 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclService' of type [org.springframework.security.acls.mongodb.MongoDBMutableAclService] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,574 6374 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'defaultMethodSecurityExpressionHandler' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,578 6378 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@1bcf67e8' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,580 6380 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclMethodSecurityConfiguration' of type [nl.dtls.fairdatapoint.config.AclMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$c8971de] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
fdp_1 | 2020-10-08 20:18:41,585 6385 [main] INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

Involuntary shutdown

(the /tokens calls were an attempt at a keep-alive solution)

fdp_1 | 2020-10-08 20:41:41,546 1386346 [http-nio-80-exec-5] INFO nl.dtls.fairdatapoint.api.filter.LoggingFilter - http://fdp/tokens
fdp-client_1 | 10.162.28.92 - - [08/Oct/2020:20:41:41 +0000] "POST /tokens HTTP/1.1" 200 195 "-" "Jakarta Commons-HttpClient/3.1" "-"
fdp_1 | 2020-10-08 20:41:51,676 1396476 [http-nio-80-exec-7] INFO nl.dtls.fairdatapoint.api.filter.LoggingFilter - http://fdp/tokens
fdp-client_1 | 10.162.28.92 - - [08/Oct/2020:20:41:51 +0000] "POST /tokens HTTP/1.1" 200 195 "-" "Jakarta Commons-HttpClient/3.1" "-"
fdp_1 | 2020-10-08 20:42:41,246 1446046 [cluster-ClusterId{value='5f7f74207286e55d90907b1f', description='null'}-mongo:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server mongo:27017
fdp_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:131) [mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at java.lang.Thread.run(Thread.java:832) [?:?]
fdp_1 | Caused by: java.net.SocketTimeoutException: Connect timed out
fdp_1 | at sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546) ~[?:?]
fdp_1 | at sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[?:?]
fdp_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333) ~[?:?]
fdp_1 | at java.net.Socket.connect(Socket.java:648) ~[?:?]
fdp_1 | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | ... 3 more
fdp_1 | 2020-10-08 20:42:42,524 1447324 [cluster-ClusterId{value='5f7f74227286e55d90907b20', description='null'}-mongo:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server mongo:27017
fdp_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:131) [mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at java.lang.Thread.run(Thread.java:832) [?:?]
fdp_1 | Caused by: java.net.SocketTimeoutException: Connect timed out
fdp_1 | at sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546) ~[?:?]
fdp_1 | at sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[?:?]
fdp_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333) ~[?:?]
fdp_1 | at java.net.Socket.connect(Socket.java:648) ~[?:?]
fdp_1 | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | ... 3 more
fdp_1 | 2020-10-08 20:42:42,713 1447513 [cluster-ClusterId{value='5f7f74227286e55d90907b22', description='null'}-mongo:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server mongo:27017
fdp_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:131) [mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at java.lang.Thread.run(Thread.java:832) [?:?]
fdp_1 | Caused by: java.net.SocketTimeoutException: Connect timed out
fdp_1 | at sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546) ~[?:?]
fdp_1 | at sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[?:?]
fdp_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333) ~[?:?]
fdp_1 | at java.net.Socket.connect(Socket.java:648) ~[?:?]
fdp_1 | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.2.jar!/:?]
fdp_1 | ... 3 more
mongo_1 | 2020-10-08T20:56:02.826+0000 I NETWORK [conn5] end connection 172.31.0.5:54302 (7 connections now open)
mongo_1 | 2020-10-08T20:56:02.826+0000 I NETWORK [conn3] end connection 172.31.0.5:54298 (6 connections now open)
mongo_1 | 2020-10-08T20:56:43.786+0000 I NETWORK [conn7] end connection 172.31.0.5:54348 (5 connections now open)
mongo_1 | 2020-10-08T20:57:37.034+0000 I NETWORK [conn6] end connection 172.31.0.5:54304 (4 connections now open)
mongo_1 | 2020-10-08T20:58:30.282+0000 I NETWORK [conn8] end connection 172.31.0.5:54382 (3 connections now open)
mongo_1 | 2020-10-08T20:58:34.378+0000 I NETWORK [conn2] end connection 172.31.0.5:54296 (2 connections now open)
mongo_1 | 2020-10-08T20:58:34.378+0000 I NETWORK [conn4] end connection 172.31.0.5:54300 (0 connections now open)
mongo_1 | 2020-10-08T20:58:34.378+0000 I NETWORK [conn1] end connection 172.31.0.5:54294 (1 connection now open)

Configuration

# docker-compose.yml

version: '3'
services:

  mongo:
    image: mongo:4.0.12
    ports:
      - 27017:27017
    volumes:
      - ./mongo/data:/data/db

  blazegraph:
    image: metaphacts/blazegraph-basic:2.2.0-20160908.003514-6
    ports:
      - 8080:8080
    volumes:
      - ./blazegraph:/blazegraph-data

  fdp:
    image: fairdata/fairdatapoint:1.6.0
    volumes:
      - ./application.yml:/fdp/application.yml:ro

  fdp-client:
    image: fairdata/fairdatapoint-client:1.6.0
    ports:
      - 80:80
    environment:
      - FDP_HOST=fdp


# application.yml

# ... other configuration

repository:
  type: 5
  blazegraph:
    url: http://blazegraph:8080/blazegraph

Q: How to ensure that the same record/object will receive the same FDP UUID

@a-tassoni, @markwilkinson , @sdvr

Extension part
adding / updating data

Requirement / why we think this is important
Requirement F1: (Meta) data are assigned globally unique and persistent identifiers

FDP uses its own UUIDs for all of its objects (catalogues, etc.). These are created when you use the web interface or the FDP API. In our testing, it seems that even if you provide your own UUID, FDP will overwrite it.

We need to ensure that when we uplift data to the FDP (regardless of our local data processing pipeline) that these UUIDs are somehow matched consistently – we assume that this has to be persistent matching for the data to be FAIR. In other words, if we have records with local IDs 1-10, these should always match to the same FDP UUIDs.

We know that when we add a new resource with a POST to the FDP, the response body returns the generated UUID (as part of the resource's URI), and we can use this in subsequent PUT calls to update resources, so the URIs stay persistent from then on.
But if we needed to set up a completely new FDP, it would create a completely new set of URIs because it would generate new UUIDs.

The Question
Is it possible to recreate the FDP content without generating new URIs for all the objects in FDP?
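The POST-then-PUT flow described above can be sketched as a shell snippet; the catalog path and the example UUID below are made-up placeholders, not values from a real FDP:

```shell
# Hypothetical sketch: capture the FDP-generated UUID from the URI returned
# by a POST, then reuse it so later PUT calls keep the URI persistent.
# (In practice the URI would come from the POST response body or Location header.)
LOCATION="http://fdp.example.org/catalog/2f1b4c6d-9e8a-4b7c-a1d2-3e4f5a6b7c8d"
UUID="${LOCATION##*/}"   # strip everything up to the last '/'
echo "$UUID"

# A later update then targets the same persistent URI, e.g.:
# curl -X PUT "http://fdp.example.org/catalog/$UUID" \
#      -H 'Content-Type: text/turtle' --data-binary @catalog.ttl
```

This keeps the mapping between local IDs and FDP URIs in your own hands, but it does not survive rebuilding the FDP from scratch, which is exactly the open question.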

Build problems.

mvn spring-boot:start fails:

[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.2.7.RELEASE:start (default-cli) on project fairdatapoint: Unable to find a suitable main class, please add a 'mainClass' property -> [Help 1]

Java 11 recommended, but 14 as target

When running mvn install for spring-rdf-migration and for spring-security-acl-mongodb, an error is produced: "invalid target release: 14".
Setting source and target to 11 instead of 14 solves this, but it would be good to either update pom.xml or the recommended JDK version.
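Assuming the subprojects use the standard Maven compiler properties, the fix described above corresponds to a pom.xml fragment like this (a sketch, not the projects' actual configuration):

```xml
<!-- Hypothetical pom.xml fragment: pin compilation to JDK 11 so
     "invalid target release: 14" disappears when building with the
     recommended Java 11. -->
<properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>
```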

Q: Can't modify metadata through the FDP web UI when -Dgraphdb.external-url is set

@a-tassoni, @sdvr

Extension part
overall

The Question

Hi all,

not sure if I'm now at the right place to ask this, I originally posted this on ejp-rd-vp/FiaB#4 and then @markwilkinson said I should rather ask here.
Maybe I'm just overlooking something trivial, I'm thankful for any hints.

For exact reproduction steps please check the original issue; I will not duplicate the entire FiaB description here. So, to explain in short:

We are running a FAIR Data Point with

  • fairdata/fairdatapoint:1.15.0
  • fairdata/fairdatapoint-client:1.15.0
  • ontotext/graphdb:10.1.2

We are using apache2 as reverse proxy, minimal exemplar configuration:

<VirtualHost *:80>

    ServerName localhost
    ErrorLog /var/log/apache2/error_log
    TransferLog /var/log/apache2/access_log

    <LocationMatch /fairdatapoint-ctsr>
      ProxyPass http://localhost:7070
      ProxyPassReverse http://localhost:7070
    </LocationMatch>

    <LocationMatch /graphdb-ctsr>
      ProxyPass http://localhost:7200
      ProxyPassReverse http://localhost:7200
    </LocationMatch>

</VirtualHost>

If we want the GraphDB web UI to render correctly, we need to set
command: ["-Dgraphdb.home=/opt/graphdb/home -Dgraphdb.external-url=http://localhost/graphdb-ctsr"] for the graphdb container
(see also ejp-rd-vp/FiaB#2).

But as soon as we do that, we can no longer edit any data inside the FDP - neither through the API nor through the FDP web interface.
We can see that the query reaches the graphdb container, but it never terminates and runs for minutes.

When we disable command: ["-Dgraphdb.home=/opt/graphdb/home -Dgraphdb.external-url=http://localhost/graphdb-ctsr"] for the container, we can again edit data inside the FDP, both through the API and through the FDP web interface.

Many thanks!

Dagmar

Q: error on startup

The Question

Any suggestions how to troubleshoot this startup error:

fdp_1 | 2023-06-29 08:36:52,188 6934 [main] ERROR org.springframework.boot.SpringApplication - Application run failed
fdp_1 | org.yaml.snakeyaml.constructor.DuplicateKeyException: while constructing a mapping
fdp_1 | in 'reader', line 1, column 1:
fdp_1 | instance:
fdp_1 | ^
fdp_1 | found duplicate key instance
fdp_1 | in 'reader', line 16, column 1:
fdp_1 | instance:
fdp_1 | ^

I have another FDP running on the same machine, but they should be completely isolated - different networks, different ports, different volumes. I have 5 FDPs running on my other server and they're all quite happy together!
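The stack trace complains about two top-level `instance:` mappings in the same YAML document. As a minimal sketch (using a made-up file, not your actual application.yml), duplicated top-level keys can be surfaced like this:

```shell
# Write a small YAML file containing the kind of duplicate top-level key
# that SnakeYAML rejects, then list every top-level key occurring twice.
cat > /tmp/dup-check.yml <<'EOF'
instance:
  clientUrl: http://localhost
repository:
  type: 5
instance:
  clientUrl: http://localhost:8080
EOF
grep -oE '^[A-Za-z_][A-Za-z0-9_.-]*:' /tmp/dup-check.yml | sort | uniq -d
# prints: instance:
```

Running the same grep over the mounted application.yml (or over the merged config, if several fragments are concatenated) should point at the offending key.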

Additional context

version: "3"
services:

  fdp_client:
    image: fairdata/fairdatapoint-client:1.16
    hostname: fdpclient
    restart: always
    environment:
      FDP_HOST: fdp
    volumes:
      - ./fdp/variables.scss:/src/scss/custom/_variables.scss:ro
      - ./fdp/assets:/usr/share/nginx/html/assets:ro
      - ./fdp/favicon.ico:/usr/share/nginx/html/favicon.ico:ro
    depends_on:
      - fdp
    ports:
      - 9090:80  # You should/must close this port, if you are using hitch
    networks:
      - index-default


  graphdb:
    image: ontotext/graphdb:10.1.2
    restart: always
    hostname: graphdb
    ports:
      - 9091:7200
    volumes:
      - index-graphdb:/opt/graphdb/home
    networks:
      - index-default

  fdp:
    image: fairdata/fairdatapoint:1.15.0
    restart: always
    hostname: fdp
    volumes:
      - ./fdp/application-index.yml:/fdp/application.yml:ro
    depends_on:
      - mongo
      - graphdb
    networks:
      - index-default
  # Mongo for FDP server    
  mongo:
    image: mongo:4.2.3
    hostname: mongo
    restart: always
    volumes:
      - index-mongo-data:/data/db
      - index-mongo-init:/docker-entrypoint-initdb.d/
    networks:
      - index-default



volumes:
  index-graphdb:
    external: true
  index-mongo-data:
    external: true
  index-mongo-init:
    external: true
  index-fdp-server:
    external: true

networks:
  index-default:

WAR file is not available in GitHub releases

The installation instructions in the README point to the releases for the fdp.war file that should be installed, but there are no releases with a WAR file.

Could you update the instructions to say that the code needs to be built?

Build fails because of missing dependency fairmetadata4j

I tried building the FAIRDataPoint using Maven, but it failed with the following message:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:18 min
[INFO] Finished at: 2019-06-04T11:18:34+02:00
[INFO] Final Memory: 20M/387M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project fairdatapoint: Could not resolve dependencies for project nl.dtls:fairdatapoint:war:0.1-beta: Could not find artifact nl.dtl:fairmetadata4j:jar:0.1-beta in jcenter-snapshots (https://jcenter.bintray.com/) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

It turned out that after cloning fairmetadata4j and running mvn install in the directory, I could build the FDP WAR file.

Perhaps this step could be documented in the README? (Alternatively, if the other project were published in a Maven repository, FDP would have built as well.)

Fresh install cannot delete Catalog

Describe the bug
Starting from a fresh install. Create a Catalog, then attempt to delete it:
  • Cannot delete by clicking the "trash" icon (whether published or draft)
  • Cannot delete via curl -X DELETE
In both cases, the Docker log shows a 400 error.

This behavior is not duplicated with a Dataset record - those can be deleted with no problem.

To Reproduce
Steps to reproduce the behavior:

  1. Login as albert
  2. Create Catalog
  3. Click the trash icon
  4. See error 400 (message "unable to delete")

Expected behavior
delete :-)

Context
Please fill the following and eventually add additional information (e.g. about used storage in case that issue is storage-related):

  • FDP version: 1.15.0
  • Docker Engine version: 1.5.0
  • Operating System: Ubuntu 20

Q: Is there a way to get better error messages from the API?

The Question
Here's a sample error message from the API:

{
    "data": {
      "timestamp": 1646750105763,
      "status": 406,
      "error": "Not Acceptable",
      "path": "/test"
    }
}

The request was obviously rejected. My question is: what is not acceptable in my request?

I can only assume that it's a SHACL validation issue, but the API provides no details.

Q: Question about dcat:theme and dcat:themeTaxonomy

The Question
Checking the FDP's specs at https://specs.fairdatapoint.org/ I saw that the Catalog schema has the dcat:themeTaxonomy property but not a dcat:theme property, contrary to the DCAT 2 model.
Using this implementation, I notice that the dcat:themeTaxonomy of a Catalog is the list of its Datasets' dcat:theme values.

I would like to have a hierarchy where the Catalog has a generic theme and its Datasets a more specific one.
For example, a Catalog with Datasets regarding nervous system diseases (theme ICD-10:G00-G99.9), and a Dataset about Epilepsy (theme http://purl.bioontology.org/ontology/ICD10/G40.1).

My question is: why doesn't the Catalog have a dcat:theme property? Is it "legal" for my implementation to add the property to represent scenarios similar to the one described?
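The desired hierarchy can be expressed in plain DCAT Turtle; the catalog/dataset URIs and the exact URI for the ICD-10 G00-G99.9 theme below are placeholder assumptions:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .

<http://example.org/catalog/nervous-system>
    a dcat:Catalog ;
    dcat:theme <http://purl.bioontology.org/ontology/ICD10/G00-G99.9> ;  # generic theme
    dcat:dataset <http://example.org/dataset/epilepsy> .

<http://example.org/dataset/epilepsy>
    a dcat:Dataset ;
    dcat:theme <http://purl.bioontology.org/ontology/ICD10/G40.1> .      # specific theme
```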

Extension part
FDP Schema

Q: I keep getting the error "No metadata found for the uri '<client uri>'" and don't know how to proceed

The Question
I installed the FDP API and client on a Kubernetes Cluster and I am unable to figure out how to solve this error.

Every time the client makes a request to the API, the API throws the following error:

2022-03-04 16:04:32,657 76447 [http-nio-80-exec-9] ERROR nl.dtls.fairdatapoint.api.controller.exception.ExceptionControllerAdvice - No metadata found for the uri '<client uri>'

I assume that it is trying to fetch metadata from the triple store (Allegro in my case), but the Allegro database is completely empty.
And the API does not seem to have raised any errors related to Allegro, as per the following lines:

2022-03-04 16:03:23,503 7293 [main] INFO  nl.dtls.fairdatapoint.config.RepositoryConfig - Setting up Allegro Graph Store
2022-03-04 16:03:23,508 7298 [main] INFO  nl.dtls.fairdatapoint.config.RepositoryConfig - Successfully configure a RDF repository

Full API logs:

2022-03-04 16:03:17,854 main INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
2022-03-04 16:03:17,872 main INFO No Watcher plugin is available for protocol 'jar'

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.3)

2022-03-04 16:03:17,977 1767 [main] INFO  nl.dtls.fairdatapoint.Application - Starting Application v1.12.4 using Java 16.0.2 on fdp-api-8445c987f5-dgfbj with PID 1 (/fdp/app.jar started by root in /fdp)
2022-03-04 16:03:17,984 1774 [main] INFO  nl.dtls.fairdatapoint.Application - The following profiles are active: production
2022-03-04 16:03:19,290 3080 [main] INFO  org.springframework.data.repository.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
2022-03-04 16:03:19,435 3225 [main] INFO  org.springframework.data.repository.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 140 ms. Found 13 MongoDB repository interfaces.
2022-03-04 16:03:20,415 4205 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'spring.data.mongodb-org.springframework.boot.autoconfigure.mongo.MongoProperties' of type [org.springframework.boot.autoconfigure.mongo.MongoProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,418 4208 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDatabaseFactoryDependentConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDatabaseFactoryDependentConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,421 4211 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDatabaseFactoryConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDatabaseFactoryConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,424 4214 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration' of type [org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,426 4216 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration$MongoClientSettingsConfiguration' of type [org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration$MongoClientSettingsConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,480 4270 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoClientSettings' of type [com.mongodb.MongoClientSettings] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,485 4275 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoConnectionPoolMetricsConfiguration' of type [org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoConnectionPoolMetricsConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,489 4279 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.actuate.autoconfigure.metrics.export.simple.SimpleMetricsExportAutoConfiguration' of type [org.springframework.boot.actuate.autoconfigure.metrics.export.simple.SimpleMetricsExportAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,494 4284 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'management.metrics.export.simple-org.springframework.boot.actuate.autoconfigure.metrics.export.simple.SimpleProperties' of type [org.springframework.boot.actuate.autoconfigure.metrics.export.simple.SimpleProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,499 4289 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'simpleConfig' of type [org.springframework.boot.actuate.autoconfigure.metrics.export.simple.SimplePropertiesConfigAdapter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,505 4295 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.actuate.autoconfigure.metrics.MetricsAutoConfiguration' of type [org.springframework.boot.actuate.autoconfigure.metrics.MetricsAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,507 4297 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'micrometerClock' of type [io.micrometer.core.instrument.Clock$1] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,526 4316 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'simpleMeterRegistry' of type [io.micrometer.core.instrument.simple.SimpleMeterRegistry] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,536 4326 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoConnectionPoolTagsProvider' of type [io.micrometer.core.instrument.binder.mongodb.DefaultMongoConnectionPoolTagsProvider] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,540 4330 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMetricsConnectionPoolListener' of type [io.micrometer.core.instrument.binder.mongodb.MongoMetricsConnectionPoolListener] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,545 4335 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMetricsConnectionPoolListenerClientSettingsBuilderCustomizer' of type [org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoConnectionPoolMetricsConfiguration$$Lambda$539/0x00000008010096b0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,551 4341 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoCommandMetricsConfiguration' of type [org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoCommandMetricsConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,555 4345 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoCommandTagsProvider' of type [io.micrometer.core.instrument.binder.mongodb.DefaultMongoCommandTagsProvider] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,558 4348 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMetricsCommandListener' of type [io.micrometer.core.instrument.binder.mongodb.MongoMetricsCommandListener] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,560 4350 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMetricsCommandListenerClientSettingsBuilderCustomizer' of type [org.springframework.boot.actuate.autoconfigure.metrics.mongo.MongoMetricsAutoConfiguration$MongoCommandMetricsConfiguration$$Lambda$540/0x000000080100a5a8] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,566 4356 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoPropertiesCustomizer' of type [org.springframework.boot.autoconfigure.mongo.MongoPropertiesClientSettingsBuilderCustomizer] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,606 4396 [main] INFO  org.mongodb.driver.cluster - Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2022-03-04 16:03:20,659 4449 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongo' of type [com.mongodb.client.internal.MongoClientImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,668 4458 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoDatabaseFactory' of type [org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,676 4466 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.boot.autoconfigure.data.mongo.MongoDataConfiguration' of type [org.springframework.boot.autoconfigure.data.mongo.MongoDataConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:20,721 4511 [cluster-ClusterId{value='62223848f32c1366c1f9e136', description='null'}-mongo:27017] INFO  org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:33}] to mongo:27017
2022-03-04 16:03:20,721 4511 [cluster-rtt-ClusterId{value='62223848f32c1366c1f9e136', description='null'}-mongo:27017] INFO  org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:32}] to mongo:27017
2022-03-04 16:03:20,722 4512 [cluster-ClusterId{value='62223848f32c1366c1f9e136', description='null'}-mongo:27017] INFO  org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=26523656}
2022-03-04 16:03:20,799 4589 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoCustomConversions' of type [org.springframework.data.mongodb.core.convert.MongoCustomConversions] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,132 4922 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoMappingContext' of type [org.springframework.data.mongodb.core.mapping.MongoMappingContext] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,174 4964 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#5dbf5634' of type [org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,177 4967 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#5dbf5634' of type [org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean$TargetBeanObjectFactory] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,187 4977 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mappingMongoConverter' of type [org.springframework.data.mongodb.core.convert.MappingMongoConverter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,244 5034 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'mongoTemplate' of type [org.springframework.data.mongodb.core.MongoTemplate] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,287 5077 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#2ce45a7b' of type [org.springframework.beans.factory.config.PropertiesFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,289 5079 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#2ce45a7b' of type [java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,301 5091 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#377008df' of type [org.springframework.data.repository.core.support.PropertiesBasedNamedQueries] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,306 5096 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#4cbf4f53' of type [org.springframework.data.repository.core.support.RepositoryFragmentsFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,308 5098 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean '(inner bean)#4cbf4f53' of type [org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,429 5219 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclRepository' of type [org.springframework.data.mongodb.repository.support.MongoRepositoryFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,433 5223 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclRepository' of type [jdk.proxy2.$Proxy140] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,441 5231 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclConfig' of type [nl.dtls.fairdatapoint.config.AclConfig$$EnhancerBySpringCGLIB$$6110aa81] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,445 5235 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'cacheConfig' of type [nl.dtls.fairdatapoint.config.CacheConfig$$EnhancerBySpringCGLIB$$d8fa88b9] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,455 5245 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'cacheManager' of type [org.springframework.cache.concurrent.ConcurrentMapCacheManager] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,464 5254 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'permissionGrantingStrategy' of type [org.springframework.security.acls.domain.DefaultPermissionGrantingStrategy] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,468 5258 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclAuthorizationStrategy' of type [org.springframework.security.acls.domain.AclAuthorizationStrategyImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,470 5260 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclCache' of type [org.springframework.security.acls.domain.SpringCacheBasedAclCache] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,483 5273 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'lookupStrategy' of type [org.springframework.security.acls.mongodb.BasicLookupStrategy] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,485 5275 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclService' of type [org.springframework.security.acls.mongodb.MongoDBMutableAclService] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,488 5278 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'defaultMethodSecurityExpressionHandler' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,497 5287 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@71dfcf21' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,498 5288 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'aclMethodSecurityConfiguration' of type [nl.dtls.fairdatapoint.config.AclMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$8144ea92] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,506 5296 [main] INFO  org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-03-04 16:03:21,851 5641 [main] INFO  org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 80 (http)
2022-03-04 16:03:21,862 5652 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-80"]
2022-03-04 16:03:21,863 5653 [main] INFO  org.apache.catalina.core.StandardService - Starting service [Tomcat]
2022-03-04 16:03:21,863 5653 [main] INFO  org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.50]
2022-03-04 16:03:21,961 5751 [main] INFO  org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2022-03-04 16:03:21,961 5751 [main] INFO  org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 3911 ms
2022-03-04 16:03:22,928 6718 [main] INFO  org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:34}] to mongo:27017
2022-03-04 16:03:23,098 6888 [main] INFO  nl.dtls.fairdatapoint.service.openapi.OpenApiService - Initializing OpenAPI with generic paths
2022-03-04 16:03:23,099 6889 [main] INFO  nl.dtls.fairdatapoint.service.openapi.OpenApiService - Removing OpenAPI paths: []
2022-03-04 16:03:23,114 6904 [main] INFO  nl.dtls.fairdatapoint.service.openapi.OpenApiService - Adding OpenAPI paths: [/, /spec, /expanded, /page/{childPrefix}, /meta, /meta/state, /members, /members/{userUuid}, /catalog, /catalog/{uuid}, /catalog/{uuid}/spec, /catalog/{uuid}/expanded, /catalog/{uuid}/page/{childPrefix}, /catalog/{uuid}/meta, /catalog/{uuid}/meta/state, /catalog/{uuid}/members, /catalog/{uuid}/members/{userUuid}, /dataset, /dataset/{uuid}, /dataset/{uuid}/spec, /dataset/{uuid}/expanded, /dataset/{uuid}/page/{childPrefix}, /dataset/{uuid}/meta, /dataset/{uuid}/meta/state, /dataset/{uuid}/members, /dataset/{uuid}/members/{userUuid}, /distribution, /distribution/{uuid}, /distribution/{uuid}/spec, /distribution/{uuid}/expanded, /distribution/{uuid}/page/{childPrefix}, /distribution/{uuid}/meta, /distribution/{uuid}/meta/state, /distribution/{uuid}/members, /distribution/{uuid}/members/{userUuid}]
2022-03-04 16:03:23,503 7293 [main] INFO  nl.dtls.fairdatapoint.config.RepositoryConfig - Setting up Allegro Graph Store
2022-03-04 16:03:23,508 7298 [main] INFO  nl.dtls.fairdatapoint.config.RepositoryConfig - Successfully configure a RDF repository
2022-03-04 16:03:23,610 7400 [fdp-task-1] INFO  nl.dtls.fairdatapoint.service.index.event.EventService - Resuming unfinished events
2022-03-04 16:03:23,648 7438 [fdp-task-1] INFO  nl.dtls.fairdatapoint.service.index.event.EventService - Finished unfinished events
2022-03-04 16:03:24,042 7832 [main] INFO  nl.dtls.rdf.migration.runner.RdfProductionMigrationRunner - Production Migration of RDF Store started
2022-03-04 16:03:24,056 7846 [main] INFO  nl.dtls.rdf.migration.runner.RdfProductionMigrationRunner - Production Migration of RDF Store ended
2022-03-04 16:03:24,866 8656 [main] INFO  org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 6.2.0.Final
2022-03-04 16:03:25,761 9551 [main] INFO  org.springframework.security.web.DefaultSecurityFilterChain - Will secure any request with [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@b768a65, org.springframework.security.web.context.SecurityContextPersistenceFilter@411fa0ce, org.springframework.security.web.header.HeaderWriterFilter@de579ff, org.springframework.security.web.authentication.logout.LogoutFilter@6af65f29, nl.dtls.fairdatapoint.api.filter.LoggingFilter@10bea4, nl.dtls.fairdatapoint.api.filter.CORSFilter@19962194, nl.dtls.fairdatapoint.api.filter.JwtTokenFilter@4b97c4ad, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@76cdafa3, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@118041c7, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6897a4a, org.springframework.security.web.session.SessionManagementFilter@26be9a6, org.springframework.security.web.access.ExceptionTranslationFilter@349c4d1c, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@19fec3d6]
2022-03-04 16:03:25,803 9593 [main] INFO  org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver - Exposing 2 endpoint(s) beneath base path '/actuator'
2022-03-04 16:03:25,859 9649 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-80"]
2022-03-04 16:03:25,872 9662 [main] INFO  org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 80 (http) with context path ''
2022-03-04 16:03:25,897 9687 [main] INFO  nl.dtls.fairdatapoint.Application - Started Application in 8.471 seconds (JVM running for 9.696)
2022-03-04 16:03:25,999 9789 [main] INFO  org.reflections.Reflections - Reflections took 54 ms to scan 2 urls, producing 2 keys and 19 values
2022-03-04 16:03:26,066 9856 [main] INFO  com.github.cloudyrock.mongock.runner.core.executor.MigrationExecutor - Mongock skipping the data migration. All change set items are already executed or there is no change set item.
2022-03-04 16:03:26,066 9856 [main] INFO  com.github.cloudyrock.mongock.driver.core.lock.DefaultLockManager - Mongock releasing the lock
2022-03-04 16:03:26,083 9873 [main] INFO  com.github.cloudyrock.mongock.driver.core.lock.DefaultLockManager - Mongock released the lock

Am I supposed to do an extra step to populate the AllegroGraph database myself? And if so, where is it documented?

Unable to get data

I log in as admin ([email protected]).

I add a resource CovidDataset as child of Catalog, as shown below:
image

Before that, I had already defined its shape as follows:
Name: CovidDataset
Definition:

@prefix : <http://fairdatapoint.org/> . 
@prefix dash: <http://datashapes.org/dash#> . 
@prefix dcat: <http://www.w3.org/ns/dcat#> . 
@prefix dct: <http://purl.org/dc/terms/> . 
@prefix foaf: <http://xmlns.com/foaf/0.1/> . 
@prefix sh: <http://www.w3.org/ns/shacl#> . 
@prefix dcat-ext: <http://purl.org/biosemantics-lumc/ontologies/dcat-extension/> . 
@prefix ex: <http://www.example.com/resources/> . 
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . 
@prefix con: <http://www.w3.org/2000/10/swap/pim/contact#> . 
@prefix vcard: <http://www.w3.org/2006/vcard/ns#> . 
@prefix gnd: <http://d-nb.info/standards/elementset/gnd#> . 
@prefix frapo: <http://purl.org/cerif/frapo/> . 
@prefix obo: <http://purl.obolibrary.org/obo/> . 
@prefix sio: <http://semanticscience.org/resource/> . 
@prefix eda: <http://edamontology.org/> .

:CovidDatasetShape a sh:NodeShape ; 
sh:targetClass dcat-ext:CovidDataset ; 
sh:property[ 
sh:name "Study Subject" ; 
sh:path obo:NCIT_C41189 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Patient" ; 
sh:path obo:NCIT_C16960 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [
sh:name "Symptoms Consistent with COVID-19" ; 
sh:path obo:NCIT_C173069 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "COVID-19" ; 
sh:path obo:MONDO_0100096 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
] , [ 
sh:name "Sex" ; 
sh:path obo:NCIT_C28421 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
] , [ 
sh:name "Age-Years" ; 
sh:path obo:NCIT_C37908 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ;
], [ 
sh:name "Patient Status" ; 
sh:path ex:InvestigationContact; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "ICU" ; 
sh:path obo:NCIT_C53511; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Study DOI" ; 
sh:path eda:data_1188; 
sh:nodeKind sh:IRI ; 
dash:editor dash:URIEditor ; 
], [ 
sh:name "Ethnicity" ; 
sh:path obo:GECKO_0000061 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "BMI" ; 
sh:path obo:ExO_0000105 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Smoking" ; 
sh:path obo:NCIT_C154329 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "COVID-19 Disease Severity (WHO Ordinal) Scale" ; 
sh:path obo:NCIT_C178899 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Charlson Comorbidity Index" ; 
sh:path obo:NCIT_C176422 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "APACHE II Score" ; 
sh:path obo:NCIT_C121113 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Mechanical Ventilation" ; 
sh:path obo:NCIT_C70909 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Asthma" ; 
sh:path obo:NCIT_C28397 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "COPD" ; 
sh:path obo:HP_0006510 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "DM" ; 
sh:path obo:NCIT_C2985 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "CRP (mg/L)" ; 
sh:path obo:NCIT_C64548 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "Ferritin (ng/mL)" ; 
sh:path obo:NCIT_C74737 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
], [ 
sh:name "IL6" ; 
sh:path obo:NCIT_C74834 ; 
sh:nodeKind sh:Literal ; 
sh:maxCount 2 ; 
dash:editor dash:TextFieldEditor ; 
] .

Then I create a catalog named COVID as shown below:

image

In the COVID catalog I see two items; one is CovidDataset, as shown below:

image

I choose CovidDataset and click 'Create'; then on the page I fill in all the info, as shown below:

image
image
image

If I click "View RDF", I can see the following triples:

@prefix dct: <http://purl.org/dc/terms/>.
@prefix fdp: <https://fdp.cmbi.umcn.nl/>.
@prefix dc: <http://purl.org/biosemantics-lumc/ontologies/dcat-extension/>.
@prefix ed: <http://edamontology.org/>.
@prefix obo: <http://purl.obolibrary.org/obo/>.
@prefix c: <https://fdp.cmbi.umcn.nl/catalog/>.
@prefix res: <http://www.example.com/resources/>.

fdp:new
    a dc:CovidDataset;
    ed:data_1188 <https://doi.org/10.1016/j.cell.2020.10.037>;
    obo:ExO_0000105 "33";
    obo:GECKO_0000061 "Caucasian";
    obo:HP_0006510 "No";
    obo:MONDO_0100096 "Yes";
    obo:NCIT_C121113 "none";
    obo:NCIT_C154329 "Never";
    obo:NCIT_C16960 "Yes";
    obo:NCIT_C173069 "Yes";
    obo:NCIT_C176422 "none";
    obo:NCIT_C178899 "3";
    obo:NCIT_C28397 "No";
    obo:NCIT_C28421 "Female";
    obo:NCIT_C2985 "none";
    obo:NCIT_C37908 "77";
    obo:NCIT_C41189 "INCOV001";
    obo:NCIT_C53511 "No";
    obo:NCIT_C64548 "none";
    obo:NCIT_C70909 "No";
    obo:NCIT_C74737 "none";
    obo:NCIT_C74834 "none";
    dct:isPartOf c:f9eabdd3-6e62-4c2a-89ee-07af897e30c3;
    res:InvestigationContact "Hospital".

Then I click the 'Save' button at the bottom; after that, I see an error page saying "Unable to get data", as shown below:

image

Context

  • FDP version:

Server

v1.12.0~859fe61
12. 8. 2021, 12:54

Client

v1.11.0~fbc3707
29. 6. 2021, 16:20

  • Docker Engine version:
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    74642fd943b5 nginx:1.17.3 "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp fdp-production_proxy_1
    404616e3e1fa mongo:4.0.12 "docker-entrypoint.s…" About an hour ago Up About an hour 27017/tcp fdp-production_mongo_1
    df5214eeadba metaphacts/blazegraph-basic:2.2.0-20160908.003514-6 "/entrypoint.bash" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp fdp-production_blazegraph_1
    843e43cd2457 fairdata/fairdatapoint-client:1.11.0 "/docker-entrypoint.…" About an hour ago Up About an hour 80/tcp fdp-production_fdp-client_1
    a8994fb07ce7 fairdata/fairdatapoint:1.12.0 "/bin/sh -c 'java -j…" About an hour ago Up About an hour fdp-production_fdp_1

  • Operating System:
    Distributor ID: Ubuntu
    Description: Ubuntu 20.04.2 LTS
    Release: 20.04
    Codename: focal

Implement OpenID Connect/OAuth to login using ORCID

Is your feature request related to a problem? Please describe.
When we want to create resources in a FAIR Data Point, we need to create an account in that FAIR Data Point with a specific email/password combination.
This forces users to create a new "online identity", based on their email, just for this FDP.

Describe the solution you'd like
Connect with an external OAuth provider / OpenID Connect, such as ORCID (really popular among researchers, and already used by a lot of applications as a "FAIR online identity").

You could also allow sign-in through Google or GitHub, etc.

It would be much better for the quality of the data people are putting in the FAIR Data Point, because you would be able to automatically add the creator of the resource using the logged-in user's ORCID. It would make the resources more FAIR, and your service more modern.

For the deployment of a new FDP, the person who deploys it can easily go to https://orcid.org/developer-tools and add the redirect URLs. You'll just need to add some documentation explaining how to do it (it is really easy).

Describe alternatives you've considered
You could also allow sign-in through Google or GitHub, in addition to ORCID.

And you could enable the person who deploys the FDP to choose between OpenID/OAuth and the default old-school user database.

Additional context

No one on the web maintains a private user database anymore! All serious applications use external OpenID/OAuth providers nowadays, apart from the external OAuth providers themselves of course. Especially since FDP is about web standards, it makes sense to actually use them! Personally, I tend not to trust applications that don't use an OAuth authenticator (and I am probably not the only one), and I am tired of having 100 different online accounts, each with its own chance of getting hacked.

It is really easy to implement, especially in Java, since I guess you are using the Spring framework. It is actually easier to implement than building and maintaining a complete user database from scratch (FAIR: Reuse).

You can find examples on how to implement OpenID Connect for ORCID: https://github.com/ORCID/orcid-openid-examples

Once you have implemented it, you will be able to use it in all your other applications; they will look much more modern, and it will be safer and easier for your users! Everyone wins!
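For illustration, here is a minimal sketch of what the configuration could look like on the Spring side, assuming the `spring-boot-starter-oauth2-client` dependency is added; the `orcid` registration name and the `ORCID_CLIENT_ID`/`ORCID_CLIENT_SECRET` environment variables are placeholders I made up, not part of FDP today:

```yaml
spring:
  security:
    oauth2:
      client:
        registration:
          orcid:
            # credentials obtained from https://orcid.org/developer-tools
            client-id: ${ORCID_CLIENT_ID}
            client-secret: ${ORCID_CLIENT_SECRET}
            scope: openid
        provider:
          orcid:
            # ORCID publishes OIDC discovery metadata at
            # https://orcid.org/.well-known/openid-configuration
            issuer-uri: https://orcid.org
```

With issuer-based discovery like this, Spring resolves the authorization and token endpoints automatically, so the deployer only has to register the redirect URL on the ORCID side.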

Incorrect use of the dct:conformsTo property? (+Possible solution)

Hello,

I think I have found a problem in the FAIR Data Point specification, and I have a possible solution. This is the problem (it's quite a long story, but that's because the problem is also quite complex):

An HTTP request to a FAIR Data Point for a Catalog, Dataset, Distribution, etc. returns RDF data containing triples that describe the resource. Since the data returned for the URL is not the resource itself but its description, I think a statement such as <https://fdp.example.org/distribution/[GUID]> a dcat:Distribution . is not correct. To solve this, one can distinguish between the resource and its description by using a hash URI or a 303 URI (see: link). For example, <https://fdp.example.org/distribution/[GUID]#this> would identify the Distribution resource and <https://fdp.example.org/distribution/[GUID]> its description, turning the statement into <https://fdp.example.org/distribution/[GUID]#this> a dcat:Distribution . Since the profile returned when accessing https://fdp.example.org/profile/[GUID] describes the RDF graph returned when accessing https://fdp.example.org/distribution/[GUID], the statement <https://fdp.example.org/distribution/[GUID]> dcterms:conformsTo <https://fdp.example.org/profile/[GUID]> . does seem correct to me.

I ran into this problem when I was trying to specify the structure of a Distribution more precisely (for example, by specifying a JSON schema for a JSON resource). The dct:conformsTo property seems to be the right property for this purpose, but I noticed it was already used for the profiles. After reading the DCAT standard, the use of dct:conformsTo in DCAT remained rather unclear to me. I found a discussion on this topic in the DCAT GitHub repository of the W3C Dataset Exchange Working Group (w3c/dxwg#1130) that clarified it for me: the dct:conformsTo property of a dcat:CatalogRecord instance seems to be the way to specify the profile. One issue I see is that the value of foaf:primaryTopic for a dcat:CatalogRecord instance is by definition a dcat:Resource, which seems to be a problem because, for example, a dcat:Distribution does not need to be a dcat:Resource. Also, I am not sure whether a dcat:CatalogRecord is a Web document (so can the URI <https://fdp.example.org/distribution/[GUID]> point to an instance of dcat:CatalogRecord?). I think neither issue needs to be a real problem, and it makes sense to write:

@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<https://fdp.example.org/distribution/[GUID]#this>
  a dcat:Distribution ;
  dct:title "sample-distribution" ;
  dcat:mediaType "application/json" ;
  dct:conformsTo [
    rdfs:label "a JSON schema"
  ] .

<https://fdp.example.org/distribution/[GUID]>
  dct:conformsTo <https://fdp.example.org/profile/[GUID]> ;
  foaf:primaryTopic <https://fdp.example.org/distribution/[GUID]#this> .
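The hash-URI approach works because HTTP clients strip the fragment before making the request, so the resource URI and the description URI dereference to the same document while remaining distinct identifiers in RDF. A minimal Python illustration of that fragment stripping (the URI is a simplified, hypothetical version of the ones above):

```python
from urllib.parse import urldefrag

# The Distribution resource uses a hash URI; its description document does not.
resource_uri = "https://fdp.example.org/distribution/123#this"

# Clients drop the fragment before dereferencing, so an HTTP GET on the
# resource URI actually fetches the description document.
document_uri, fragment = urldefrag(resource_uri)

print(document_uri)  # https://fdp.example.org/distribution/123
print(fragment)      # this
```

This is why no extra server-side machinery (unlike the 303 variant) is needed: the server only ever sees the description URI.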

I'm curious about your thoughts on this.

Regards,

Bas Harmsen

PS A while ago I created an issue for this in the FAIRDataPoint-Spec repository (FAIRDataTeam/FAIRDataPoint-Spec#20) but later I noticed that https://specs.fairdatapoint.org/ points to this repository instead. In addition, I have described the issue in more detail.

Fail to Create distribution

Describe the bug
I have geo dataset
https://fairsfair.fair-dtls.surf-hosted.nl/geo-dataset/7d995d4a-d429-41ba-8a42-1d954b5a3cae
Pushing the +Create button leads to
https://fairsfair.fair-dtls.surf-hosted.nl/geo-dataset/7d995d4a-d429-41ba-8a42-1d954b5a3cae/create-/geo-distribution
which responds with "Not Found".

To Reproduce
Steps to reproduce the behavior:

  1. Login as [email protected] (I'm Owner of the geo-dataset)
  2. Push +Create
  3. See "Not Found"

Expected behavior
I expected to get a form, but it never appears.

Context
Please fill in the following and optionally add further information (e.g. about the storage used, in case the issue is storage-related):

Unable to connect to mongo while running from docker-compose.

When I run the FDP server from docker-compose, I get the following error:

fdp-production-fdp-1         | 2023-06-01 11:55:26,052 27860 [cluster-ClusterId{value='647887191ed7970008416f13', description='null'}-mongo:27017] INFO  org.mongodb.driver.cluster - Exception in monitor thread while connecting to server mongo:27017
fdp-production-fdp-1         | com.mongodb.MongoSocketOpenException: Exception opening socket
fdp-production-fdp-1         | 	at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:67) ~[mongodb-driver-core-3.8.2.jar!/:?]
fdp-production-fdp-1         | 	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[mongodb-driver-core-3.8.2.jar!/:?]
fdp-production-fdp-1         | 	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [mongodb-driver-core-3.8.2.jar!/:?]
fdp-production-fdp-1         | 	at java.lang.Thread.run(Thread.java:834) [?:?]
fdp-production-fdp-1         | Caused by: java.net.SocketTimeoutException: connect timed out
fdp-production-fdp-1         | 	at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
fdp-production-fdp-1         | 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399) ~[?:?]
fdp-production-fdp-1         | 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242) ~[?:?]
fdp-production-fdp-1         | 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224) ~[?:?]
fdp-production-fdp-1         | 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403) ~[?:?]
fdp-production-fdp-1         | 	at java.net.Socket.connect(Socket.java:609) ~[?:?]
fdp-production-fdp-1         | 	at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.8.2.jar!/:?]
fdp-production-fdp-1         | 	at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.8.2.jar!/:?]
fdp-production-fdp-1         | 	... 3 more
fdp-production-fdp-1         | (the same MongoSocketOpenException and stack trace repeat every few seconds)

docker-compose.yml:

version: '3'
services:
    proxy:
        ports:
            - 80:80
            - 443:443
        image: nginx:1.17.3
        volumes:
            # Mount the nginx folder with the configuration
            - ./nginx:/etc/nginx:ro
            # Mount the letsencrypt certificates
            - /etc/ssl:/etc/ssl:ro

    fdp:
        image: fairdata/fairdatapoint:1.0
        volumes:
            - ./application.yml:/fdp/application.yml:ro

    fdp-client:
        image: fairdata/fairdatapoint-client:1.0
        environment:
            - FDP_HOST=fdp

    mongo:
        image: mongo:4.0.12
        volumes:
            - ./mongo/data:/data/db

    blazegraph:
        image: metaphacts/blazegraph-basic:2.2.0-20160908.003514-6
        ports:
            - 8080:8080
        volumes:
            - ./blazegraph:/blazegraph-data
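
Not a confirmed diagnosis, but one common cause of this timeout is the fdp container starting before mongo is ready to accept connections. A sketch of the relevant services with a startup ordering and a health check (service and image names taken from the compose file above; the healthcheck command assumes the mongo:4.0.12 image ships the `mongo` shell, which it does):

```yaml
    fdp:
        image: fairdata/fairdatapoint:1.0
        volumes:
            - ./application.yml:/fdp/application.yml:ro
        depends_on:
            - mongo
        restart: unless-stopped   # let Docker restart fdp if mongo is still warming up

    mongo:
        image: mongo:4.0.12
        volumes:
            - ./mongo/data:/data/db
        healthcheck:
            test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
            interval: 10s
            timeout: 5s
            retries: 5
```

Note that in the classic version-3 compose format, `depends_on` only controls start order, not readiness, so the `restart` policy is what actually papers over a slow mongo startup.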

application.yml:

instance:
    clientUrl: https://fdp.cmbi.umcn.nl
    persistentUrl: https://fdp.cmbi.umcn.nl
    #persistentUrl: https://www3.cmbi.umcn.nl/fdp

security:
    jwt:
        token:
            secret-key:  <secret>

# repository settings (can be changed to different repository)
repository:
    type: 5
    blazegraph:
        url: https://fdp.cmbi.umcn.nl/blazegraph

All containers seem to run initially, but fdp shuts down after 30 seconds.
