
TUF conformance test suite

This is the repository of the conformance test suite for TUF clients. The goal of this repository is to allow client developers to

  1. Test their clients against the TUF specification
  2. Achieve better practical compatibility with other implementations
  3. Collaborate on tests with other client developers

Usage

The conformance test suite provides a GitHub action that can be used to test a TUF client. There are two required steps:

  1. Include an executable in the client project that implements the client-under-test CLI protocol.
  2. Use the theupdateframework/tuf-conformance action in your test workflow:
    jobs:
      tuf-conformance:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
    
          # insert possible client compilation/installation steps here
    
          - uses: theupdateframework/tuf-conformance@v1
            with:
              entrypoint: path/to/my/test/executable

Development

This repository contains two client-under-test CLI protocol implementations to enable easy development and testing. The test suite depends on various Python modules, which the make commands install into a virtual environment. The suite also depends on the faketime tool, which must be available on the system.

# run test suite against both included clients or just one of them:
make test-all
make test-python-tuf
make test-go-tuf

It's also possible to locally run the test suite with a client-under-test CLI that is locally installed elsewhere:

make dev
./env/bin/pytest tuf_conformance --entrypoint path/to/my/client-under-test/cli

Linters can also be invoked with make:

# Run linters
make lint
# Fix issues that can be automatically fixed
make fix

Creating (and debugging) tests

Let's say we want to test that clients will not accept a decreasing targets version. We start with a simple skeleton test that doesn't assert anything but sets up a repository and runs client refresh against it:

def test_targets_version(
    client: ClientRunner, server: SimulatorServer
) -> None:
    # Initialize our repository: modify targets version and make sure the version is included in snapshot
    init_data, repo = server.new_test(client.test_name)
    repo.targets.version = 7
    repo.update_snapshot()  # snapshot v2

    # initialize client-under-test, run refresh
    client.init_client(init_data)
    client.refresh(init_data)

We can now run the test:

./env/bin/pytest tuf_conformance \
    -k test_targets_version                 # run a specific test only
    --entrypoint "./clients/go-tuf/go-tuf"  # use the included go-tuf client as client-under-test
    --repository-dump-dir /tmp/test-repos   # dump repository contents

# take a look at the repository targets.json that got debug dumped
cat /tmp/test-repos/test_targets_version/refresh-1/targets.json
cat /tmp/test-repos/test_targets_version/refresh-1/snapshot.json

The metadata looks as expected (targets version is 7) so we can add a modification to the end of the test:

    # Make a non-compliant change in repository
    repo.targets.version = 6
    repo.update_snapshot() # snapshot v3

    # refresh client again
    client.refresh(init_data)

Running the test again results in a second repository version being dumped (each client refresh leads to a dump):

./env/bin/pytest tuf_conformance \
    -k test_targets_version                 # run a specific test only
    --entrypoint "./clients/go-tuf/go-tuf"  # use the included go-tuf client as client-under-test
    --repository-dump-dir /tmp/test-repos   # dump repository contents

# take a look at targets versions in both snapshot versions that got debug dumped
cat /tmp/test-repos/test_targets_version/refresh-1/snapshot.json
cat /tmp/test-repos/test_targets_version/refresh-2/snapshot.json

The repository metadata looks as expected (but not spec-compliant) so we can add some asserts for client behaviour now. The final test looks like this:

def test_targets_version(
    client: ClientRunner, server: SimulatorServer
) -> None:
    # Initialize our repository: modify targets version and make sure it's included in snapshot
    init_data, repo = server.new_test(client.test_name)
    repo.targets.version = 7
    repo.update_snapshot()  # snapshot v2

    # initialize client-under-test
    client.init_client(init_data)

    # Run refresh on client-under-test, expect success
    assert client.refresh(init_data) == 0

    # Make a non-compliant change in repository
    repo.targets.version = 6
    repo.update_snapshot() # snapshot v3

    # refresh client again, expect failure and refusal to accept snapshot and targets
    assert client.refresh(init_data) == 1
    assert client.version(Snapshot.type) == 2
    assert client.version(Targets.type) == 7

Releasing

Checklist for making a new tuf-conformance release

  • Review changes since last release, decide on version number
    • If the client-under-test CLI has changed or client workflows are otherwise likely to break, bump major version
    • otherwise if new tests were added, bump minor version
    • otherwise bump patch version
  • Create and merge PR with README changes if needed (the workflow example contains the major version number)
  • Tag the new version from a commit on the main branch. Example when releasing v1.1.1:
        git tag --sign v1.1.1 -m "v1.1.1"
        git push origin v1.1.1
        # now rewrite the major tag: yes rewriting tags is awful, it's also what GitHub recommends...
        git tag --delete v1
        git push --delete origin v1
        git tag --sign v1 -m "v1.1.1"
        git push origin v1
    
  • Add release notes to GitHub release in the web UI: this will be shown to action users in the dependabot update. Release notes must mention all breaking changes.

Some design notes

  • pytest is used as the test infrastructure; the client-under-test executable is given with --entrypoint
  • A single web server is started. Individual tests can create a simulated TUF repository that will be served in subdirectories of the web server
  • Each test sets up a simulated repository, attaches it to the server, runs the client-under-test against that repository. It can then modify the repository state and run the client-under-test again
  • the idea is that a test can run the client multiple times while modifying the repository state. After each client execution the test can verify (see the sketch after this list)
    1. client success/failure
    2. the client's internal metadata state (what it considers currently valid metadata) and
    3. that the requests the client made were the expected ones
  • There should be helpers to make these verifications simple in the tests
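
A hedged sketch of what those three checks could look like after one client execution, inside a test like the example earlier in this README. The request-tracking attribute name, repo.metadata_statistics, is an assumption; the other helpers are the ones the example test uses:

# a minimal sketch of the three checks after one client execution; the
# repo.metadata_statistics name is an assumption, the other helpers appear
# in the example test earlier in this README
assert client.refresh(init_data) == 0                # 1. client success/failure
assert client.version(Snapshot.type) == 2            # 2. client's metadata state
assert ("snapshot", 2) in repo.metadata_statistics   # 3. requests made by the client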


tuf-conformance's Issues

Test succinct delegations

This is not on any critical path (as I don't know of repositories using succinct delegations) but:

We should add a basic test for succinct delegations at some point:

  • I don't remember how the spec defines this, but I would consider it an optional feature of the spec, so we should make it easy to mark this as "expected to fail"
  • there is some functionality in RepositorySimulator to create these things already

Consider github action

sigstore-conformance provides a GitHub action that client projects can use. A similar setup might work for us: this would also mean we don't necessarily need to set up PyPI releases etc...

This is not needed right now but will be at some point.

support go-tuf v1?

  • There is a client wrapper for go-tuf (v2) already
  • One of the most widely used tuf clients is the one in cosign: it uses go-tuf v1
  • The goal is for cosign to use go-tuf v2 in future

The question is: would supporting both go-tuf versions in the conformance suite be useful in making that transition safer and easier?

If yes, then we should consider making a client wrapper for go-tuf v1

improve and/or document development and debugging of tests

To make it easier for a client developer to act on test failures (and for test developers to implement new tests), it should be reasonably easy to see what kind of repository structure a specific test offers to the client:

Possible improvements:

  • more optional logging?
  • Ability to store a copy of repository metadata
    • python-tuf tests have this with --dump argument to some tests (like python3 tests/test_updater_top_level_update.py --dump)
    • RepositorySimulator already has write() method for this
    • I wonder if we should just automatically dump the metadata into a directory in /tmp on test failure?

tweak clients/README.md

Part of #61

The document should be written not as normal CLI documentation (for the caller of the CLI) but for the implementer:

  • e.g. we should not say if an option is "required", we should document if the option is always included by the caller or not
  • the order that args and commands will be used needs to be clear (maybe include example)
  • we need to document the expected "side-effects":
    • metadata files the client has accepted must be stored in metadata-dir. The filenames should be non-versioned
    • artifacts the client downloads should be stored in target-dir. TODO document expected file/path name encoding
    • the client should return 0 if the command succeeded, 1 otherwise.

Write more test cases

Currently we only have a very simple refresh test. Once the infrastructure is a bit better (#4, #7, #10), we should write many more.

The python-tuf tests might be a good source of "inspiration" -- all tests using RepositorySimulator should be fairly easy to port (the component is not 100% the same as ours but it's very close)

review target download tests

The tests in test_file_download.py seem to re-build repository functionality that I think is already provided by RepositorySimulator (the tests create data in a temporary directory created in ClientRunner).

The tests really should not be this complicated even when testing the negative cases (repository serving the wrong artifact):

  • add_target("role", b"data", "target/path") adds an artifact and makes the corresponding changes to the targets role's metadata
  • to pretend to be a malicious repository, modifying self.artifacts["target/path"] (that was created by add_target()) should be enough
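
Roughly, a negative artifact test could then stay as short as this sketch. add_target and the artifacts dict are the helpers named above; the artifact object's data attribute and a download_target helper on ClientRunner are assumptions:

init_data, repo = server.new_test(client.test_name)
repo.add_target("targets", b"data", "target/path")  # artifact plus targets metadata change
repo.update_snapshot()

client.init_client(init_data)
assert client.refresh(init_data) == 0

# pretend to be a malicious repository: swap the served artifact content
repo.artifacts["target/path"].data = b"attacker data"

# client-under-test should now refuse the artifact (hashes no longer match)
assert client.download_target(init_data, "target/path") == 1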

Add documentation on writing tests

We should add documentation on writing new tests. IMO the most important part of this right now is to describe how to use the test utilities, for example:

  • Modifying repository metadata
  • Verifying expected client behavior
  • Downloading target files

redesign RepositorySimulator

This is likely a meta-issue that will not be fixed with a single PR: We've discussed this already in chats but I'm writing this down so I still remember when I get back from next week's holiday.

  1. I believe the current RepositorySimulator design is not great: it promotes low-level access to JSON and forces tests to do manual loading and saving of metadata: this has made tests very long, difficult to read and maintain
  • The number one goal should be that tests are easy to write and to understand: otherwise this test suite will die after the initial development is done: we should assume contributors who are not tuf-conformance experts
  • access to underlying json can be enabled if it does not conflict with the above: that should not be the main way of interacting with the metadata for either modifications or verifying things in a test
  2. The RepositorySimulator implementation is a big mess currently: there are multiple ways of doing the same thing (some of them broken), there is unused code, basic TUF features (like delegated targets) are not supported even though they were previously, misleading comments and examples are everywhere, quality is variable: e.g. the manual JSON canonicalization seems very fishy

So let's redesign and refactor:

  • storing the underlying data as (json) bytes is fine by me: I originally used Metadata to avoid the constant de/serialization but I see the argument of allowing access to json in the few cases where it is necessary
  • I think it makes sense to store the underlying data in a hashtable with rolename as key: this way delegated targets are automatically supported, and the top-level roles can still have easy accessor properties if necessary
  • direct json editing should only be done when it's necessary -- it's error prone and a maintenance nightmare
  • the idea of allowing raw json access to "special" tests practically also requires redesigning the signing (to avoid the weird duplication of fetch_metadata and fetch_metadata_any): originally the signing was done at HTTP request time as this made the editing super easy (there was no "save" step required as "save" essentially happened at request time). IF we want to allow this json mangling, then this no longer makes sense: the signing should happen when json is "saved", not when it's requested by client
  • it should be easy for a test to modify a metadata object using the python-tuf API. Explicit loading and saving seems unnecessary in 99% of cases. Maybe a context manager could handle the loading and saving (an implementation sketch follows this list) so a test could do:
        # the context manager yields signed here -- could yield metadata if that's more useful.
        # context manager handles saving the modified metadata as whatever the underlying storage format is
        with repo.edit("timestamp") as timestamp:
            timestamp.version = 9000
  • functionality used in a single test should stay in that test -- there is no reason to make RepositorySimulator as complex as it is. We can always move things to repositorysimulator when a feature is mature and is really needed in multiple places
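
A rough implementation sketch of that edit() idea, assuming the metadata ends up stored as signed bytes keyed by rolename. All names here (signed_metadata, sign_and_serialize) are mine, not the current RepositorySimulator API:

from contextlib import contextmanager
from typing import Iterator

from tuf.api.metadata import Metadata, Signed


@contextmanager
def edit(self, role: str) -> Iterator[Signed]:
    """Load metadata for 'role', yield it for editing, then sign and save it."""
    # assumed storage: signed metadata bytes keyed by rolename
    md = Metadata.from_bytes(self.signed_metadata[role])
    yield md.signed
    # signing happens at "save" time, as proposed above
    self.signed_metadata[role] = self.sign_and_serialize(md)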

Related items:

  • enable linting: this code would be better if we linted it
  • enable repository dumping for easier test debugging: RepositorySimulator.write() sort of does this but it's likely not directly compatible with the current implementation. I have a rough design for this, I just didn't want to complicate RepositorySimulator more in its current state.

should metadata be valid if it contains keytypes/schemes that client is unfamiliar with?

This relates to tests added in 0a81fc1: the test assumes that root should be considered valid by a client even if it contains keytypes/schemes that the client does not recognise (this assumes the signing threshold of root is still reached with the keys that it does understand)

The spec does not seem to really say anything about this. The arguments against considering metadata like this valid are that

  • it's hard to imagine a realistic scenario where this would happen in the real world (meaning a situation where accepting metadata with unknown keys would lead to a functioning TUF client update: typically if keys are added to metadata, they are also required for verifying signatures...)
  • A client silently doing nothing with keys that it does not understand sounds like a potential for bugs later on

I'm filing this issue because I plan to remove the tests for now: let's figure out what the correct behaviour is first and re-add them (or some simpler tests) afterwards if needed.

write a wrapper for go-tuf

While the end goal is to move the wrappers to the TUF implementations (#9), including go-tuf in our early wrappers in this repo might make sense:

  • go-tuf is widely used
  • the two wrappers we have now are based on the same design so not likely to show new issues with test suite design
  • having coverage of both go-tuf and go-tuf-metadata would show how compatible they are (which is beneficial for go-tuf-metadata if it wants to replace current go-tuf at some point)

remove "--max-root-rotations" flag

Part of #61: remove the "--max-root-rotations" flag: it's not clear what we can actually require from clients, as the maximum seems like it could be a library decision and not a configurable option.

tests: check for support of custom fields

  • the spec requires support for custom fields anywhere in the JSON:

    Implementers who encounter undefined attribute-value pairs in the format must include the data when calculating hashes or verifying signatures and must preserve the data when re-serializing

  • we should test this. python-tuf has an unrecognized_fields attribute in (all?) its objects that can be used for this
  • Mostly the test is interesting if the custom fields are within "signed" (to ensure the hash calculation contains these fields)
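
A minimal sketch of such a test, assuming repo.targets is a python-tuf Targets object as the README example suggests:

init_data, repo = server.new_test(client.test_name)

# add an undefined attribute-value pair inside targets' "signed"
repo.targets.unrecognized_fields = {"custom-field": "custom value"}
repo.update_snapshot()

client.init_client(init_data)
# the client must include the unknown field when calculating hashes and
# verifying signatures, so the refresh is expected to succeed
assert client.refresh(init_data) == 0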

tests: consider adding tests with static data for specific repository implementations

We should consider a new test type: static metadata committed to git that works as an example of a specific repository implementation's output

  • this would not be as much about spec conformance as it would be about practical compatibility
  • I don't think these tests are a priority for the conformance project but we should accept PRs from repository implementation maintainers (as an example I might do a PR for tuf-on-ci)
  • the static metadata should form a complete repository version that stays valid for a long time (say, 10 years)

FYI @lukpueh & @kairoaraujo for RSTUF and @dennisvang for tufup -- please leave comments if you have ideas.

todo list

Here's a rough draft of tasks ahead -- individual issues should be filed as needed:

  • improve the infrastructure of "verifying client-under-test state" -- e.g. make it easy to verify client metadata cache contents in a test #10
  • Refactor, polish:
    • current code is a quick hack, e.g. we should probably use unittest or something as a runner: #7
    • setup linting, mypy, CI: #6, #2
  • define the initial "client-under-test" CLI: see ClientRunner and the initial test client clients/python-tuf.py
  • write the client-under-test scripts for at least two implementations (python-tuf, go-tuf-metadata?)
  • Document the CI story
    • in the end the client-under-test implementations should live in the client library repos themselves (like python-tuf) and their CI should run the tests -- no point in doing this quite yet though
    • test suite could offer a github action for the implementations to use #8
    • before all of that: the test suite needs tests as well. Makes sense to build something within this repo first: #2
  • enable debugging: it has to be possible to dig into possible failures
    • More optional logging
    • Ability to preserve the client temp directory
    • Ability to export repository state (see RepositorySimulator.write()), and using it in testrunner or the tests
  • start designing and implementing individual tests -- python-tuf test suite is a good starting point I guess

check all client return values

We still have client commands in tests that do not check the return value (e.g. client.refresh(init_data)) for some reason.

  • This should only happen in unusual cases where we think either outcome may be valid.
  • Most return values should be checked: this is not only better testing but also makes the tests easier to read -- it's clearer what we expect

document the "client wrapper" interface

We should document what we expect from the client wrapper:

  • command line options etc
  • file storage locations
  • faketime handling
  • ?

This doc does not need to be 100% complete (and its existence does not have to mean the "API" is frozen) but we should have something.

decide file name encoding / path handling safety requirements

We expect client-under-test to write both metadata and artifacts onto disk for verification purposes (and we also expect most clients to do that as part of their normal operation). Unfortunately we don't specify the expectations in detail...

There are several decisions to make that all relate to each other:

  • what do we expect download --target-name "path/file" to do -- should it create the subdirectory "path"?
    • python-tuf as an example will not create the directory by default (instead it uses the URL-encoded filename path%2Ffile)
  • do we want to test url/filename encoding issues? For the record, I believe clients must URL encode rolenames and target paths since they are part of URL path fragments
    • what happens if a delegation uses rolenames like "/" "?" or "😀" ?
    • same question for artifacts (target paths)
  • do we want to verify that client does not have path traversal security issues?
    • delegation name "../delegate" should not lead to writing in the higher level directory
    • target path "../filename" should not lead to writing in the higher level directory

As far as I know, the spec does not specify any of this. Testing these things feels important but I'm not sure how to do it without forcing clients into specific design choice.

Possible design

I think at least for the role name cases we can avoid enforcing a specific filename encoding but

  • we can test that client never writes into the higher level directory even if we have a role named "../delegation"
  • we can test some weird role names in a single test: this way clients can skip that test if they decide to not support those

for artifacts I wonder if we should just change the CLI so that instead of --target-dir we have --target-filename: so the test suite explicitly names the file it wants saved on disk?

  • we can still test that strange target paths work (correct URL gets requested)
  • but we avoid making a decision on how the file should be stored on disk by default (and as a result can't really test for path traversal issues in this case)

Make it easier to use the test suite

Two options

  1. Provide a GitHub Action that TUF implementations can use
  2. Make releases to pypi, let TUF implementations handle the CI integration

Doing both might make sense, but the GH action might be most important?

GitHub Action

If this repo includes the github action, then that action does not actually need the pypi releases (as it can just install from the sources). The action should take at least the client-wrapper path as argument, and should then

  • install test suite requirements
  • install test suite
  • run test suite with given client wrapper

PyPI releases

Releases on PyPI would make using the test suite in CI easy for the TUF implementations.

python-tuf release job might be useful: https://github.com/theupdateframework/python-tuf/blob/develop/.github/workflows/cd.yml

tests: test valid metadata with invalid signature

This is a specific case where folks have interpreted the spec in two ways before:

When the metadata signatures contain an invalid signature, the metadata can still be valid (as long as a threshold of valid signatures is reached)

Figure out an ergonomic way to verify the client's current metadata state

We want to ensure that what the client considers the current valid metadata matches our opinion of the same.
It should be easy for each test to verify this, even multiple times during the test run.

  • is it enough to verify that the version of each metadata is what we expect? or should we actually compare the file to the file on the repository? I think maybe the latter...
  • there's likely several designs that would work: one idea is a method in our UnitTest based class (that does not exist yet. Assume the class knows about the RepositorySimulator and ClientRunner). That method would get a list of roles that we expect to be in client cache and would then compare for each role the metadata in local cache to the metadata in the RepositorySimulator. Then the check in the test would be e.g. self.assert_client_cache(["root", "timestamp"]) for a test where client refresh ended after timestamp
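
A rough sketch of that helper under the assumptions above (self.client.metadata_dir and self.repo.fetch_metadata() are assumed names):

import os


def assert_client_cache(self, roles: list[str]) -> None:
    """Compare the client's cached metadata byte-for-byte to the repository's current metadata."""
    for role in roles:
        path = os.path.join(self.client.metadata_dir, f"{role}.json")
        with open(path, "rb") as f:
            client_bytes = f.read()
        assert client_bytes == self.repo.fetch_metadata(role), f"client cache mismatch for {role}"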

separate self tests from actual test suite?

Currently the "self tests" (tests that verify the test suite components work correctly) and actual conformance tests are in the same directory and they all get executed at once

  • this is probably incorrect: we should have separate runs for the conformance tests and the "self tests" (currently test_repository_simulator.py)
  • it's also confusing for someone trying to find the actual conformance tests

opinions @AdamKorcz ?

tests: Test optional features

The specification has a few optional features. This list is from memory so it is possibly not complete:

hashes & length in METAFILEs

The single-item dictionary in timestamp and the dictionary in snapshot may or may not contain hashes and/or length. We should check that at least both reasonable options are supported (neither hashes nor length included; and both hashes and length included)
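
A sketch of the "both included" half of that check; the flag name is borrowed from the _compute_hashes_and_length attribute mentioned in another issue here, and treating it as a simple public switch is an assumption:

init_data, repo = server.new_test(client.test_name)

# serve timestamp/snapshot METAFILE entries with hashes and length included
repo.compute_hashes_and_length = True
repo.update_snapshot()

client.init_client(init_data)
assert client.refresh(init_data) == 0
assert client.version(Snapshot.type) == 2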

Consistent snapshot

According to the spec, the "consistent snapshot" feature is SHOULD only. I don't think we're very interested in the "non-consistent" version: I think we would only test it with security in mind -- but I have no practical examples at the moment

tests for rolename/targetpath encoding

We want to test that

  • clients use correctly URL-encoded rolenames and targetpaths in the request URLs (see the sketch after this list). My read on the spec is:
    • rolenames should be URL encoded so that also "/" is encoded: rolename should not include "sub-directories"
    • targetpath should be URL encoded so that "/" is not encoded: targetpath may include sub-directories
  • clients do not use unsafe encoding when handling filepaths:
    • we don't want to define how to encode filepaths because the spec does not
    • we can test that when we download delegated metadata or artifacts
      • the metadata download or artifact download succeeds when we expect it to
      • a file appears in the client's TARGET_DIR or METADATA_DIR (and was not e.g. written to ../)
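
For concreteness, the encoding rules in the first bullet expressed with urllib.parse.quote (this reflects the spec reading stated above, not existing suite code):

from urllib.parse import quote

rolename = "role/with?chars"
targetpath = "dir/target file?"

# rolenames: encode everything, including "/" (no sub-directories in the URL)
print(quote(rolename, safe=""))      # role%2Fwith%3Fchars

# targetpaths: leave "/" unencoded so sub-directories are preserved in the URL
print(quote(targetpath, safe="/"))   # dir/target%20file%3F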

I have a crude initial test, will upload after #115

README: write an actual README

  • describe the project: purpose, how does it work
  • what is currently offered, what is coming in future (whatever the results of #9 are)
  • point further to client wrapper docs so client developers can see what they need to do: #26

pin dependencies

This is not immediately required but will be at some point:

  • pin dependencies (like python-tuf)
  • setup dependabot to manage them

use a proper testrunner

Currently there is no test architecture to speak of in run_tests.py.

I think Python's unittest should work fine: we can use the unittest test runner and define a unittest class of our own (that can provide helper methods to e.g. verify client metadata state)

pytest is totally reasonable as well

Using assert client._version_equals makes failures hard to debug

E.g.

        # Assert that root version was increased with no more
        # than 'max_root_rotations'
>       assert client._version_equals(
            Root.type, initial_root_version+3
        )
E       AssertionError: assert False
E        +  where False = _version_equals('root', (1 + 3))
E        +    where _version_equals = <tuf_conformance.client_runner.ClientRunner object at 0x106685fd0>._version_equals
E        +    and   'root' = Root.type

tuf_conformance/test_basic.py:91: AssertionError

What would be helpful is an assertion error that says what the actual version is, e.g. assert client._version(Root.type) == initial_root_version+3, if that's at all possible
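
A sketch of a helper that would enable that kind of assertion (the metadata_dir attribute on ClientRunner is an assumption; Metadata is the python-tuf class):

import os

from tuf.api.metadata import Metadata


def version(self, role: str) -> int | None:
    """Return the version of the client's locally cached metadata for 'role', or None."""
    path = os.path.join(self.metadata_dir, f"{role}.json")
    if not os.path.exists(path):
        return None
    return Metadata.from_file(path).signed.version

With a plain integer return value, pytest's assertion rewriting shows both the expected and actual versions on failure.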

RepositorySimulator.add_key() only works for top level roles

def add_key(self, role: str)

This assumes the delegating role is root. That's a fine default but I want to add keys for non-top-level roles too

This should work:
def add_key(self, role: str, delegator: str | None)

(and then use Root.type as the real default value)
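
Putting that together, a sketch of the proposed signature with Root.type as the effective default:

from tuf.api.metadata import Root


def add_key(self, role: str, delegator: str | None = None) -> None:
    """Add a new signing key for 'role', registering it in the delegating role's metadata."""
    if delegator is None:
        delegator = Root.type
    ...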

"make install" runs bare naked pip install

Installing things into (potentially) user-local Python modules is not great...

I think it would be better to not have a make install (to make this the user's problem) or to make it use a hard-coded venv (sigstore-python uses this solution).

Not having a "make install" is IMO fine if it's documented: the real way of using this should be either

  • a CI workflow in another project that can just run pip install tuf-conformance (once there are releases)
  • a workflow that uses a github action defined in this project

Use log grouping in GitHub Action

The log from running the GitHub Action currently has a couple of pages of dependency installation lines in the beginning. This is usually not interesting.

I think log grouping will help:

        echo "::group::Install the conformance test suite and dependencies"
        pip install -e "${{ github.action_path }}"
        echo "::endgroup::"

tests: add key rotation/update tests

We should have tests for various key rotation cases: root is especially interesting and complicated but delegated roles should be tested too.

python-tuf tests are pretty good so we can copy from there -- admittedly they are a little complicated because of the way they are parametrized: I wouldn't object if someone comes up with a simpler approach

  • test_updater_key_rotations.py contains basic rotation tests
  • there are special case tests for the fast forward recovery that happens during key rotation: search for fast_forward_recovery in test_updater_top_level_update.py

investigate performance

The test suite feels a little slow: there should be no need for an individual test to take a perceptible amount of time

My guess is that the pytest integration has led to infrastructure setup happening for each test: We should really only need one web server (SimulatorServer) as it gives each RepositorySimulator a subdirectory and should be able to serve all of them.

revisit http server design

I'll document a bit about the test "design" WRT http servers here:

  • currently an http server is seemingly started per test (I'm not 100% sure about this since the pytest fixture setup is so weird):
    • this is not necessarily needed as the setup also puts content for each test in a url-subdirectory
    • this could become a performance and reliability issue when there are more tests
  • the http server runs in the main test thread: ClientRunner._run() manually cranks the server event loop until the test is done
    • this is a bit weird but starting a server as a subprocess has its own issues (no visibility into when the server is actually ready etc)

Both of these decisions can be revisited if there are issues

cc @segiddins

README: Document the action

  • we should focus on the GitHub action (+ the required client CLI) as the preferred way of interacting with the suite
  • manual installation (pip install etc) should be documented as a developer/debug thing
  • other integrations (like GitLab) are accepted as PRs

I'll take this.

review the client-under-test CLI protocol

We should review the CLI protocol documented in clients/README.md

  • do the commands and options make sense? are they complete? could they be better?
  • are the required "side effects" (like local metadata storage, artifact storage) all reasonable? Are they documented clearly?

remove usage of securesystemslib.keys

Instead of using securesystemslib.keys methods to generate new keys in repository_simulator.py we should use something like

signer = CryptoSigner.generate_ecdsa()

(this returns a signer that has a signer.public_key attribute so should have the same data as the old return values)
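
For context, a minimal sketch of that approach (CryptoSigner lives in securesystemslib.signer; wiring it into repository_simulator.py is left out):

from securesystemslib.signer import CryptoSigner

signer = CryptoSigner.generate_ecdsa()
public_key = signer.public_key  # the same key data the old securesystemslib.keys calls provided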

Fix TODOs in tests or remove incomplete tests

example: it's not obvious what test_TestDelegateConsistentSnapshotDisabled() gives us: it seems to be lacking both a description of what it is testing and the HTTP request assertions that the upstream go-tuf test has.

Let's go through the tests with TODOs and FIXMEs and either complete them or remove them as incomplete.

Add delegation and targetpath search tests

This is a bit tricky to test but we should ensure that multiple levels of targets delegations work correctly and that target path search never finds incorrect artifacts.

python-tuf has some parametrized tests in https://github.com/theupdateframework/python-tuf/blob/develop/tests/test_updater_delegation_graphs.py: TestDelegationsGraphs for the delegation tests and then TestTargetFileSearch for targetpath searching. They are a bit complicated but I don't really see better ways.

Add configuration for "expected failures"

A recent PR was merged with failing tests (#42).

  • it's understandable that a conformance test run may have failures
  • it's not useful if CI just fails by default

Let's add a way for clients to set expected failures (sigstore-conformance has a way to do that although it is a little clumsy)

If this is going to take a while, let's disable the failing tests for now.

action expects to be running in the tuf-conformance source dir

The action I added works when tested as a local action (uses: ./) but fails when it's used externally:

This is likely because, in the command pytest tuf_conformance --entrypoint "$ENTRYPOINT", tuf_conformance is a directory that is part of the tuf-conformance suite -- possibly it just needs to be prefixed with ${{ github.action_path }}

RepositorySimulator._compute_hashes_and_length is broken

EDIT: Every test that uses RepositorySimulator._compute_hashes_and_length = True is currently broken because:

  • the simulator signs at request time
  • ecdsa signatures are not deterministic
  • hashes/length calculation is done over the whole file (so it includes these changing signatures)

This should get fixed if any option of #130 is implemented

Original report was:


test_snapshot_rollback_with_local_snapshot_hash_mismatch() does not seem right:

  • the metadata changes in the test seem valid, yet client fails refresh
  • test claims in comments to do a target version rollback but does not?
  • client actually fails because length and hash are incorrect?

I can't quite see what the issue is.

We should

  • find the root cause here
  • fix the test
  • add a simple test that just uses hashes and length in metafiles without any fancy attacks
