solid-contrib / conformance-test-harness
Harness used to run Solid conformance tests and generate reports
License: MIT License
The tests are currently dependent on inrupt.net or solidcommunity.net as the IdP; it would be better to have a mock IdP as part of the test harness.
Does this assume public access? If not, the harness needs to handle credentials, but why would we allow private test cases?
Depends on RemoteFeature being added to Karate. See karatelabs/karate#1485 for related work.
The test suite has to expose credentials in one form or another and the preferred option of using refresh tokens currently suffers from the fact that they expire after 3 days.
Could we set up a Mock IdP that allows the test suite to log in and get access tokens without user intervention? It should only support the 2 users (alice & bob) needed for testing.
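A mock IdP along those lines could be very small. The sketch below is hypothetical (the harness has no such class): it uses only the JDK's built-in `com.sun.net.httpserver` and exposes a single `/token` endpoint that knows only `alice` and `bob`. The `"user:secret"` body format is an assumption for illustration; a real mock would speak OIDC.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of a mock IdP: one /token endpoint, two fixed test
// users, no user interaction. Not a real OIDC implementation.
public class MockIdp {
    private static final Map<String, String> USERS = Map.of(
            "alice", "alice-secret",
            "bob", "bob-secret");

    // Pure token logic, kept separate from the HTTP wiring so it is testable.
    // Credentials arrive as "user:secret" (an assumption for this sketch).
    static String tokenResponseFor(String body) {
        String[] parts = body.split(":", 2);
        if (parts.length == 2 && parts[1].equals(USERS.get(parts[0]))) {
            return "{\"access_token\":\"" + UUID.randomUUID()
                    + "\",\"token_type\":\"Bearer\"}";
        }
        return null; // caller answers 401 invalid_client
    }

    public static HttpServer start() throws IOException {
        // Port 0 lets the OS pick a free port; query it via getAddress().
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/token", exchange -> {
            String body = new String(exchange.getRequestBody().readAllBytes(),
                    StandardCharsets.UTF_8);
            String token = tokenResponseFor(body);
            byte[] response = (token != null ? token : "{\"error\":\"invalid_client\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(token != null ? 200 : 401, response.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(response);
            }
        });
        server.start();
        return server;
    }
}
```

The test suite would then request tokens from `http://localhost:<port>/token` at startup instead of logging in to inrupt.net or solidcommunity.net.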
Using an Inrupt broker as the IdP with Bearer tokens, the tests work against CSS.
However, when the access token is DPoP bound we get a 400 response from CSS with the following error:
`Error verifying WebID via DPoP-bound access token...`
Is this a problem with the way the token is being bound?
A basic Application wrapper exists but has no options, so it just runs the example tests against the default server config.
If we can catch Karate errors then a CI process in the test repo can report errors with tests.
See solid-contrib/specification-tests#8
Use Quarkus to package the harness as an uber-jar so it can be deployed as a single jar with a config file (or two).
Remove assertor description from config as it is part of the harness.
Pull the date and version from the latest build data.
I looked through the report.ttl file and noticed it was hard to read and also took a long time to parse. I noted that the schema:description predicate contains lengthy excerpts from the spec, which I don't think is useful: it retains no document structure and no hypertext, so it just makes the report harder to read. So, just wondering, can we remove those?
Some systems are known to fail in specific ways. Having the conformance-test-harness fail every time because of this hides the good work of everything that is passing, including new things that turn up. Specifically, I would like to be able to specify known failures so that we can detect when any new failures occur. At the moment we simply fail in all cases and require human intervention to review the logs every time. It also means we can't gate releases on the current exit code, because the known failures would always block the release.
Although I would like all systems to pass all tests, I accept that this is not always the case.
Command line options to include:
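The gating logic described above could be sketched as a set difference between the currently failing tests and a maintained known-failures list; the class and method names here are hypothetical, not part of the harness:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of exit-code gating: only failures NOT on the known-failures list
// make the harness exit non-zero, so releases can be gated on new breakage.
public class FailureGate {
    public static int exitCode(Set<String> failedTests, Set<String> knownFailures) {
        Set<String> newFailures = new HashSet<>(failedTests);
        newFailures.removeAll(knownFailures);
        // Known failures that now pass could also be reported here, so the
        // known-failures list can be pruned over time.
        return newFailures.isEmpty() ? 0 : 1;
    }
}
```

With this shape, a CI job passes a known-failures file for the server under test and only fails the build when a new regression appears.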
Following CommunitySolidServer/CommunitySolidServer#798, I think it makes sense to treat 501 as neither pass nor fail, since it is the server explicitly acknowledging that it knows it hasn't implemented a feature.
EARL doesn't have exactly the right predicate for this: the test has been run and it applies, it just doesn't count towards the server's pass or fail counts.
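As a sketch, the three-way outcome could look like the following (a hypothetical mapper, not harness code; which EARL outcome to emit for the third case is exactly the open question above):

```java
// Sketch of mapping an HTTP status to a test outcome where 501 is counted
// separately from pass/fail. NOT_IMPLEMENTED has no exact EARL equivalent;
// that mapping is the unresolved part of this issue.
public class OutcomeMapper {
    public enum Outcome { PASSED, FAILED, NOT_IMPLEMENTED }

    public static Outcome fromStatus(int status, boolean assertionHeld) {
        if (status == 501) {
            return Outcome.NOT_IMPLEMENTED; // excluded from pass/fail counts
        }
        return assertionHeld ? Outcome.PASSED : Outcome.FAILED;
    }
}
```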
Can the test harness have executors to test clients and/or applications?
We recently started having a conformance suite test failure within the CSS CI, which seems to be a harness bug: https://github.com/solid/community-server/runs/4630611541?check_suite_focus=true#step:9:259
>>> failed features:
js failed:
>>>>
01: clients.alice.sendAuthorized('DAHU', resource.url, null, null)
<<<<
org.graalvm.polyglot.PolyglotException: org.solid.testharness.api.TestHarnessException: Failed to send authorized request
Caused by: java.io.IOException: HTTP/1.1 header parser received no bytes
- org.solid.testharness.api.SolidClient.sendAuthorized(SolidClient.java:104)
- <js>.:program(Unnamed:1)
../data/web-access-control/protected-operation/read-resource-access-R.feature:56
<<<
There can be one or more Link headers in a response (e.g. CSS, ESS), and a Link header can contain one or more links (e.g. NSS). We need a parser that will handle both formats (and, technically, a mix of them). The current solution assumes one method or the other is used and, when it encounters a multi-valued Link header, does a simple split on `,`. This was adequate to get started, but since a comma can appear within a link definition, we need a proper parser.
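A minimal sketch of such a parser (not the harness's actual implementation) tracks whether the scanner is inside a `<...>` target or a quoted parameter value and only splits on top-level commas, per RFC 8288 syntax. It ignores escaped quotes inside quoted strings, which a production parser would also need to handle:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a Link-header splitter tolerating both one multi-valued header
// and several single-valued headers: commas inside <...> targets or quoted
// parameter values do not split the value.
public class LinkHeaderParser {
    public static List<String> splitLinks(List<String> headerValues) {
        List<String> links = new ArrayList<>();
        for (String value : headerValues) {
            int start = 0;
            boolean inAngle = false, inQuote = false;
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c == '<' && !inQuote) inAngle = true;
                else if (c == '>' && !inQuote) inAngle = false;
                else if (c == '"' && !inAngle) inQuote = !inQuote;
                else if (c == ',' && !inAngle && !inQuote) {
                    links.add(value.substring(start, i).trim());
                    start = i + 1;
                }
            }
            links.add(value.substring(start).trim());
        }
        return links;
    }
}
```

Fed the raw values of all Link headers in a response, this yields one string per link regardless of which style (or mix) the server used.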
Check accessibility of test cases via URLs
Replace Weld with Quarkus for CDI and use the microprofile configuration it provides
Simple publishing of output reports in HTML+RDF (and Turtle?) to a defined location
Translate the existing config file to Turtle including a section describing the earl:Assertor.
Load this in the TestRunner as the source of config and make it directly available to Karate for use when testing within the IDE.
Gradle is causing issues due to the way it handles resources: it keeps them separate from classes, unlike an exploded jar, and CDI has problems with this.
Karate only outputs JSON and XML so RDF response bodies are missing. Can this be easily fixed, or made optional?
Define the vocab.
Maybe avoid extending EARL and define our own Feature, Scenario, Step.
But how does this map to EARL?
One option is to do this in the ReportGenerator whilst processing FeatureResults. However, we need to consider whether that is too short-term: eventually we will need to generate a single HTML+RDFa report from multiple test runs, so perhaps it would be better to build the report directly from the RDF in the DataRepository.
Does the harness need any additional capabilities to test this?
See https://github.com/solid/webid-provider-tests
I suggest using RFC 3339's date-time with a time-offset value of Z (https://datatracker.ietf.org/doc/html/rfc3339#section-5.6), e.g. 2021-07-30T19:52:00.000Z.
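In Java that format can be produced with `java.time` directly; a small sketch (the class name is illustrative, not harness code):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Sketch: format an Instant as RFC 3339 date-time with millisecond
// precision and a literal Z offset, e.g. 2021-07-30T19:52:00.000Z.
public class Rfc3339 {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSX")
                    .withZone(ZoneOffset.UTC); // X prints "Z" for zero offset

    public static String format(Instant instant) {
        return FORMAT.format(instant);
    }
}
```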
Once a spec is annotated with RDFa we need to get data from it.
The assumption is that it is not served from an endpoint that supports content negotiation to provide Turtle, therefore we need to parse the HTML to get the RDF.
Evaluate semargl - does it plug into RDF4J or can we at least use it standalone?
Starting from the spec, what annotations are needed on a requirement:
To link to test cases we then need:
The test case is a single Karate feature file that contains the test procedure. It will also contain a title for the test case (which may be a duplication of something in the test suite but perhaps that is unavoidable).
New vocabs
Extensions to existing vocabs
Once the assertor data is removed from the config (#27), load the config into a temporary model, extract just the required test subject, and add only that to the DataRepository.
Depends on public release of lit-artifact-generator
I'm sure this is in the plans somewhere, but it would be nice to have a Docker version sooner rather than later.
That way, non-Java devs (like those on the CSS team) can get started fast.
Add additional capabilities for setting up and testing ACPs
I am trying to create a custom docker image to test my services. For this I follow an example presented in the documentation.
As you can see from this screenshot:
The Docker container looks for the given files in /app/data, not in /data as claimed in the documentation.
When changing it to /app/data in the run script, however, it does find the script.
However now I get the following error:
When I run the following script:
I get the following error:
I have the following setup:
harness
├── data
│   └── [specification-tests repo](https://github.com/solid-contrib/specification-tests) (cloned)
├── [CHT](https://github.com/solid-contrib/conformance-test-harness) (cloned)
└── run.sh (script in screenshot above)
And that's it. Is it me doing something wrong here?
Specifically packages: http, utils
Options:
pull files locally