
theupdateframework / go-tuf

Go implementation of The Update Framework (TUF)

Home Page: https://theupdateframework.com

License: Apache License 2.0


go-tuf's Introduction


TUF go-tuf/v2 - Framework for Securing Software Update Systems


The Update Framework (TUF) is a framework for secure content delivery and updates. It protects against various types of supply chain attacks and provides resilience to compromise.

About The Update Framework


The Update Framework (TUF) design helps developers maintain the security of a software update system, even against attackers that compromise the repository or signing keys. TUF provides a flexible specification defining functionality that developers can use in any software update system or re-implement to fit their needs.

TUF is hosted by the Linux Foundation as part of the Cloud Native Computing Foundation (CNCF) and its design is used in production by various tech companies and open-source organizations.

Please see TUF's website for more information about TUF!

Overview


The go-tuf v2 project provides a lightweight library with the following functionality:

  • creation, reading, and writing of TUF metadata
  • an easy object-oriented approach for interacting with TUF metadata
  • consistent snapshots
  • signing and verifying TUF metadata
  • ED25519, RSA, and ECDSA key types referenced by the latest TUF specification
  • top-level role delegation
  • target delegation via standard and hash bin delegations
  • support of succinct hash bin delegations which significantly reduce the size of the TUF metadata
  • support for unrecognized fields within the metadata (they are preserved and accessible through root.Signed.UnrecognizedFields["some-unknown-field"], and are also used when verifying/signing if included in the Signed portion of the metadata)
  • TUF client API
  • TUF multi-repository client API (implements TAP 4 - Multiple repository consensus on entrusted targets)
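To make the hash-bin idea above concrete, here is a minimal sketch of assigning a target path to one of 2^bitLength bins. The prefix-of-SHA-256 convention mirrors succinct hash bin delegations (TAP 15), but the bin naming and exact layout here are illustrative, not go-tuf's implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// binIndex returns the hash-bin index for a target path, assuming
// 2^bitLength bins and the convention that the index is the first
// bitLength bits of SHA-256(targetPath).
func binIndex(targetPath string, bitLength uint) uint32 {
	sum := sha256.Sum256([]byte(targetPath))
	// Take the first 4 bytes as a big-endian integer and keep only
	// the top bitLength bits.
	prefix := binary.BigEndian.Uint32(sum[:4])
	return prefix >> (32 - bitLength)
}

func main() {
	// With bitLength=8 there are 256 bins, e.g. "bin-00" .. "bin-ff".
	idx := binIndex("file1.txt", 8)
	fmt.Printf("file1.txt -> bin-%02x\n", idx)
}
```

Because the bin is a pure function of the path, clients and repository tooling agree on which delegated role is responsible for any given target.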

Examples


There are several examples that can act as a guideline on how to use the library and its features, among them:

  • basic_repository.go example, which demonstrates how to manually create and maintain repository metadata using the low-level Metadata API.

To try it - run make example-repository (the artifacts will be located at examples/repository/).

  • a client example.

To try it - run make example-client (the artifacts will be located at examples/client/).

  • tuf-client CLI - a CLI tool that implements the client workflow specified by The Update Framework (TUF) specification.

To try it - run make example-tuf-client-cli.

  • a multi-repository client example.

To try it - run make example-multirepo.

Package details


The metadata package

  • The metadata package provides access to a Metadata file abstraction that closely follows the TUF specification’s document formats. This API handles de/serialization to and from files and bytes. It also covers the process of creating and verifying metadata signatures and makes it easier to access and modify metadata content. It is purely focused on individual pieces of Metadata and provides no concepts like “repository” or “update workflow”.

The trustedmetadata package

  • A TrustedMetadata instance ensures that the collection of metadata in it is valid and trusted through the whole client update workflow. It provides easy ways to update the metadata with the caller making decisions on what is updated.

The config package

  • The config package stores configuration for an Updater instance.

The fetcher package

  • The fetcher package defines an interface for abstract network download.

The updater package

  • The updater package provides an implementation of the TUF client workflow. It provides ways to query and download target files securely while handling the TUF update workflow behind the scenes. It is implemented on top of the Metadata API and can be used to implement various TUF clients with relatively little effort.

The multirepo package

  • The multirepo package provides an implementation of TAP 4 - Multiple repository consensus on entrusted targets. It provides a secure search for particular targets across multiple repositories. It provides the functionality for how multiple repositories with separate roots of trust can be required to sign off on the same targets, effectively creating an AND relation and ensuring any files obtained can be trusted. It offers a way to initialize multiple repositories using a map.json file and also mechanisms to query and download target files securely. It is implemented on top of the Updater API and can be used to implement various multi-repository TUF clients with relatively little effort.
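For reference, the map.json file mentioned above follows the map-file layout described in TAP 4. A minimal sketch (the repository names and URLs are placeholders; consult TAP 4 and the multirepo package documentation for the authoritative schema):

```json
{
  "repositories": {
    "repo-a": ["https://example.com/tuf-a"],
    "repo-b": ["https://example.com/tuf-b"]
  },
  "mapping": [
    {
      "paths": ["critical/*"],
      "repositories": ["repo-a", "repo-b"],
      "threshold": 2,
      "terminating": true
    },
    {
      "paths": ["*"],
      "repositories": ["repo-a"],
      "threshold": 1,
      "terminating": true
    }
  ]
}
```

Here the first mapping entry creates the AND relation described above: targets matching critical/* must be signed off by both repositories before they are trusted.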

Documentation


History - legacy go-tuf vs go-tuf/v2

The legacy go-tuf (v0.7.0) codebase was difficult to maintain and prone to errors due to its initial design decisions. It is now considered deprecated in favour of go-tuf v2 (originally from rdimitrov/go-tuf-metadata), which started from the idea of providing a Go implementation of TUF that is heavily influenced by the design decisions made in python-tuf.

Contact


Questions, feedback, and suggestions are welcome on the #tuf and/or #go-tuf channels on CNCF Slack.

We strive to make the specification easy to implement, so if you come across any inconsistencies or experience any difficulty, do let us know by sending an email, or by reporting an issue in the GitHub specification repo.


go-tuf's Issues

Support adding and committing files in one command

If I have all the necessary keys locally, I should be able to do all of the following with one command:

  • add staged files to the targets manifest
  • sign the targets manifest
  • update and sign snapshot manifest
  • update and sign timestamp manifest
  • commit

This could be supported with a commit flag, e.g. tuf commit --add

Tests fail for client while using repo created with python implementation

Result of go test -v

go-tuf/client [master●] » go test -v
=== RUN   Test

----------------------------------------------------------------------
FAIL: <autogenerated>:1: InteropSuite.TestGoClientPythonGenerated

interop_test.go:54:
    c.Assert(client.Init([]*data.Key{key}, 1), IsNil)
... value client.ErrDecodeFailed = client.ErrDecodeFailed{File:"root.json", Err:(*errors.errorString)(0xc4200964d0)} ("tuf: failed to decode root.json: tuf: valid signatures did not meet threshold")


OOPS: 31 passed, 1 FAILED
--- FAIL: Test (6.58s)
FAIL
exit status 1
FAIL	github.com/flynn/go-tuf/client	6.589s

Environment

» go version
go version go1.10 linux/amd64
» uname -a
Linux primary.aagat.com 4.15.3-2-ARCH #1 SMP PREEMPT Thu Feb 15 00:13:49 UTC 2018 x86_64 GNU/Linux

I had to make a few changes in order to generate the repo (breaking changes upstream?).

Modified files:

client/testdata/generate/Dockerfile

FROM ubuntu:trusty

RUN apt-get update
RUN apt-get install -y python python-dev python-pip libffi-dev tree libssl-dev

# Use the develop branch of tuf for the following fix:
# https://github.com/theupdateframework/tuf/commit/38005fe
RUN apt-get install -y git
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools
RUN pip install --no-use-wheel git+https://github.com/theupdateframework/tuf.git@develop && pip install tuf[tools]

ADD generate.py generate.sh /
CMD /generate.sh

Modified file: client/testdata/generate/generate.py

#
# A script to generate TUF repository files.
#
# A modification of generate.py from the Python implementation:
# https://github.com/theupdateframework/tuf/blob/v0.9.9/tests/repository_data/generate.py

import shutil
import datetime
import optparse
import stat

from tuf.repository_tool import *
import os

parser = optparse.OptionParser()
parser.add_option("-c","--consistent-snapshot", action='store_true',  dest="consistent_snapshot",
    help="Generate consistent snapshot", default=False)
(options, args) = parser.parse_args()

repository = create_new_repository('repository')

root_key_file = 'keystore/root_key'
targets_key_file = 'keystore/targets_key'
snapshot_key_file = 'keystore/snapshot_key'
timestamp_key_file = 'keystore/timestamp_key'

generate_and_write_ed25519_keypair(root_key_file, password='password')
generate_and_write_ed25519_keypair(targets_key_file, password='password')
generate_and_write_ed25519_keypair(snapshot_key_file, password='password')
generate_and_write_ed25519_keypair(timestamp_key_file, password='password')

root_public = import_ed25519_publickey_from_file(root_key_file+'.pub')
targets_public = import_ed25519_publickey_from_file(targets_key_file+'.pub')
snapshot_public = import_ed25519_publickey_from_file(snapshot_key_file+'.pub')
timestamp_public = import_ed25519_publickey_from_file(timestamp_key_file+'.pub')

root_private = import_ed25519_privatekey_from_file(root_key_file, 'password')
targets_private = import_ed25519_privatekey_from_file(targets_key_file, 'password')
snapshot_private = import_ed25519_privatekey_from_file(snapshot_key_file, 'password')
timestamp_private = import_ed25519_privatekey_from_file(timestamp_key_file, 'password')

repository.root.add_verification_key(root_public)
repository.targets.add_verification_key(targets_public)
repository.snapshot.add_verification_key(snapshot_public)
repository.timestamp.add_verification_key(timestamp_public)

repository.root.load_signing_key(root_private)
repository.targets.load_signing_key(targets_private)
repository.snapshot.load_signing_key(snapshot_private)
repository.timestamp.load_signing_key(timestamp_private)

target1_filepath = 'repository/targets/file1.txt'
if not os.path.exists('repository/targets/'):
    os.makedirs('repository/targets/')
target2_filepath = 'repository/targets/dir/file2.txt'
if not os.path.exists('repository/targets/dir/'):
    os.makedirs('repository/targets/dir/')

with open(target1_filepath, 'wt') as file_object:
  file_object.write('file1.txt')

with open(target2_filepath, 'wt') as file_object:
  file_object.write('file2.txt')

octal_file_permissions = oct(os.stat(target1_filepath).st_mode)[4:]
file_permissions = {'file_permissions': octal_file_permissions}
repository.targets.add_target(target1_filepath, file_permissions)
repository.targets.add_target(target2_filepath)

repository.root.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.targets.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.snapshot.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.timestamp.expiration = datetime.datetime(2030, 1, 1, 0, 0)

repository.targets.compressions = ['gz']


if options.consistent_snapshot:
  repository.writeall(consistent_snapshot=True)

else:
  repository.writeall()

shutil.move('repository/metadata.staged', 'repository/metadata')


Result of running make to generate the repo:

client/testdata [master●] » make
docker build -t tuf-gen ./generate
Sending build context to Docker daemon   7.68kB
Step 1/9 : FROM ubuntu:trusty
 ---> dc4491992653
Step 2/9 : RUN apt-get update
 ---> Using cache
 ---> 4448229afdc9
Step 3/9 : RUN apt-get install -y python python-dev python-pip libffi-dev tree libssl-dev
 ---> Using cache
 ---> e76d647ae1d1
Step 4/9 : RUN apt-get install -y git
 ---> Using cache
 ---> 388e3c4d12f6
Step 5/9 : RUN pip install --upgrade pip
 ---> Using cache
 ---> bbc9ef4a7f4e
Step 6/9 : RUN pip install --upgrade setuptools
 ---> Using cache
 ---> 9b60f68e0734
Step 7/9 : RUN pip install --no-use-wheel git+https://github.com/theupdateframework/tuf.git@develop && pip install tuf[tools]
 ---> Using cache
 ---> 9ab38c82fee8
Step 8/9 : ADD generate.py generate.sh /
 ---> Using cache
 ---> 037b9501c3fd
Step 9/9 : CMD /generate.sh
 ---> Using cache
 ---> 0341e646ab74
Successfully built 0341e646ab74
Successfully tagged tuf-gen:latest
docker run tuf-gen | tar x
Creating '/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository'
Creating u'/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository/metadata.staged'
Creating u'/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository/targets'
Creating '/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository'
Creating u'/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository/metadata.staged'
Creating u'/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository/targets'
Files generated:
.
|-- with-consistent-snapshot
|   |-- keystore
|   |   |-- root_key
|   |   |-- root_key.pub
|   |   |-- snapshot_key
|   |   |-- snapshot_key.pub
|   |   |-- targets_key
|   |   |-- targets_key.pub
|   |   |-- timestamp_key
|   |   `-- timestamp_key.pub
|   |-- repository
|   |   |-- metadata
|   |   |   |-- 1.root.json
|   |   |   |-- 1.snapshot.json
|   |   |   |-- 1.targets.json
|   |   |   |-- 1.timestamp.json
|   |   |   |-- root.json
|   |   |   |-- snapshot.json
|   |   |   |-- targets.json
|   |   |   `-- timestamp.json
|   |   `-- targets
|   |       |-- 055dc805570eecebad4270774054ee4375ef9a7248d981cfa8155dc884817df31e8497684dd26addd018a30565c3ccf87eeb70445f2e76587af84ed6ce1e0302.file1.txt
|   |       |-- 55ae75d991c770d8f3ef07cbfde124ffce9c420da5db6203afab700b27e10cf9.file1.txt
|   |       |-- dir
|   |       |   |-- 04e2f59431a9d219321baf7d21b8cc797d7615dc3e9515c782c49d2075658701.file2.txt
|   |       |   |-- 2b85daf030ebc94d302822da4fd50216dc56f90c9bb60a95b272aa5b11fe81cd9b192b1a860896d6a8241d1a42cc97b6015d42100c9b46432a32db4b13a11c58.file2.txt
|   |       |   `-- file2.txt
|   |       `-- file1.txt
|   `-- tuf.log
`-- without-consistent-snapshot
    |-- keystore
    |   |-- root_key
    |   |-- root_key.pub
    |   |-- snapshot_key
    |   |-- snapshot_key.pub
    |   |-- targets_key
    |   |-- targets_key.pub
    |   |-- timestamp_key
    |   `-- timestamp_key.pub
    |-- repository
    |   |-- metadata
    |   |   |-- 1.root.json
    |   |   |-- root.json
    |   |   |-- snapshot.json
    |   |   |-- targets.json
    |   |   `-- timestamp.json
    |   `-- targets
    |       |-- dir
    |       |   `-- file2.txt
    |       `-- file1.txt
    `-- tuf.log

12 directories, 39 files

Document ability to revoke / remove keys

root.json metadata is currently populated with the keys available in the keys/<role>.json files. For example, if one wishes to add a root key to root.json, the tuf gen-key root command is issued. The public key of the newly generated key is specified in root.json by gen-key. However, there isn't a command to remove a key from a specific role. I suppose one can generate a new root.json key file with only the keys desired, however, this likely requires manually editing files.

In addition, the tools should also support the ability to revoke keys for specific roles (i.e., not list their public key(s) in metadata), yet still sign metadata with the revoked keys to allow clients to successfully update. The specification goes into more detail about this aspect of key revocation and management:

"To replace a compromised root key or any other top-level role key, the root role signs a new root.json file that lists the updated trusted keys for the role. When replacing root keys, an application will sign the new root.json file with both the new and old root keys until all clients are known to have obtained the new root.json file (a safe assumption is that this will be a very long time or never). There is no risk posed by continuing to sign the root.json file with revoked keys as once clients have updated they no longer trust the revoked key. This is only to ensure outdated clients remain able to update."

Implement `tuf regenerate`

I should be able to run tuf regenerate to regenerate targets.json based on the target files in the committed targets directory.

OS X update

Hey guys,

I got this working on Linux.
Does this support updating OS X apps?

Is ecdsa verifier implemented according to the specification?

The ecdsa-sha2-nistp256 verifier,
which is implemented here: https://github.com/theupdateframework/go-tuf/blob/master/verify/verifiers.go#L49
assumes the uncompressed form of the public key specified in section 4.3.6 of ANSI X9.62 (because it uses the elliptic.Unmarshal function).

The specification, however, says that this scheme should use a format where "PUBLIC is in PEM format and a string."
theupdateframework/python-tuf#498

So it should use x509.MarshalPKIXPublicKey + pem.EncodeToMemory.
Would it be fine to fix it now? Is such a fix constrained in any way?

List signing keys

I should be able to list the IDs of the current signing keys and whether they are present locally, for example:

$ tuf list-keys
ROLE       KEYID                                                             LOCAL
root       8fb16df0010dfeeb5737245e527e598fdf38cf92aea18a205e19fbf97c5766fd  false
snapshot   5a83cb8e2db2dceada45b14c6ab36c44618b26518b7fc9800f737cc54de9ffe2  true
targets    6ee7468ea4027e8c813ed774f2959336e6fb0c7015a0df73fbe3b24aa76a34a2  false
targets    a418bc9ec8699df5903779fe2c80058ac0ba2843b956e535aa377a6c99c56499  true
timestamp  6a14c18be9ea8e3b53509e0644e79097f933303186bd649ef199f854c5f0a86d  true

Expired metadata

Downloaded metadata should be rejected if it is expired. Metadata from local storage, however, should not be rejected due to expiry (only its signatures are checked for consistency).

Interop testing

We should have test fixtures generated by the Python implementation to ensure compatibility with our client, as well as a test harness that uses the Python client to test interoperability with repos generated by our code.

Improve CLI experience

The CLI should output informative messages, for example:

$ tuf gen-key root
Created root key with ID 8f782b389ce30f9296d5b850cfd21a2eb55f134ba6af8397dbed7b0fba259228

$ tuf sign root.json
Added 1 signature to root.json

panic: runtime error: index out of range

Sorry for the redacted stack trace.

I got a panic in what is essentially this line: https://github.com/flynn/go-tuf/blob/890a6cb82044de20e094222d137721d287f46b71/local_store.go#L265
I get that this is probably a case of some wrong input of sorts (e.g. name being empty or nil), but the program should still not crash here.

panic: runtime error: index out of range

goroutine 1 [running]:
github.com/flynn/go-tuf.(*fileSystemStore).Commit.func3(0xc0003763e3, 0xc, 0xc000376301)
        /<redacted>/src/github.com/flynn/go-tuf/local_store.go:274 +0x1a2
github.com/flynn/go-tuf.(*fileSystemStore).Commit.func4(0xc000376380, 0x6f, 0x86d0e0, 0xc00037fd40, 0x0, 0x0, 0x4b732a, 0xc00037fd40)

Support adding files to specific target manifests

I would like to be able to make historic versions of the same file available for download to people who want it.

As a concrete example, users can download the latest version (say v3) of my-binary by running client.Download("my-binary"), but I would also like them to be able to download v2, perhaps like client.DownloadVersion("my-binary", "v2").

I could handle this by adding version strings to files (e.g. have my-binary (the latest), my-binary.v2, my-binary.v3 etc.) but this will lead to an ever growing targets manifest.

I could also point the client at a historic snapshot which contains a targets manifest with the correct versions in it, but this would be considered a downgrade attack and be rejected.

Instead I would like to maintain a collection of target manifests (one per version of my software), and requesting a specific version of a file would look in the relevant targets manifest (identified using custom metadata).

This isn't explicitly discussed in the tuf spec, but what is discussed is the ability for targets to delegate trust for a subset of target files to other targets. I plan to use this feature to achieve my goal, but the delegated roles would likely have the same keys as the top-level target, and would also have overlapping target responsibility (the end user would be responsible for determining which delegated manifest to download a given file from based on custom metadata).

On the repo side, the following changes would be needed:

  • Add a tuf role command which will manage delegated roles (i.e. their keys and custom metadata)
  • Add a --role xxx flag to tuf add to support adding files to specific delegated roles
  • Update tuf sign to support signing delegated roles

On the client side:

  • Expose enough role information to allow the client to choose a role based on custom metadata (the roles can be nested so this would likely be something like role := client.FindRole(func(Role) bool) to find a single role, or roles := client.FindRoles(func(Role) bool, n int) to find multiple roles)
  • Add role.Download(name string) to download a file from a given delegated role

We need to decide whether our unintended use of delegated roles is a good idea, and whether it will be compatible with adding things like hashed bins in the future.

/cc @titanous

Add `tuf status`

I should be able to see the status of the repository by running tuf status, e.g.:

$ tuf status
MANIFEST        STATUS
root.json       committed
targets.json    staged
snapshot.json   missing
timestamp.json  missing

client: Update FileLocalStore to support multiple repositories

Clients should be able to use the same database file with multiple repositories (e.g. Flynn users pulling images from arbitrary repositories).

This can be achieved by updating FileLocalStore to take a namespace argument which namespaces the boltdb lookups, and the remote repository URL can be used as the namespace.

tuf-client get increases the size of the tuf.db (exponentially?)

Hi,

this is how I check for new versions of Flynn:

Initially:

tuf-client init https://dl.flynn.io/tuf <<< '[{"keytype":"ed25519","keyval":{"public":"6cfda23aa48f530aebd5b9c01030d06d02f25876b5508d681675270027af4731"}}]'

and then for the checks for nightly and stable:

tuf-client get https://dl.flynn.io/tuf /channels/nightly
tuf-client get https://dl.flynn.io/tuf /channels/stable

Every time I execute the checks, the local tuf.db grows several times larger.

Size after each call:

  1. 31MB
  2. 122MB
  3. 489MB
  4. 1.9GB

etc.

Question on keyid

Hi guys,

I am wondering how you generate a keyID for a given key (out of curiosity)?

I found this but I did not get it: code

role.KeyIDs = append(role.KeyIDs, pk.ID())

Thanks :)
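For context: the TUF specification derives a key's ID from a hash of the public key object, the hex SHA-256 digest of the key's canonical JSON form. A rough stand-alone sketch of the idea (the helper is hypothetical, and ordinary json.Marshal, which sorts map keys, is used here as an approximation of canonical JSON):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// keyID sketches key-ID derivation: serialize the public key object,
// hash it with SHA-256, and hex-encode the digest.
func keyID(keytype, scheme, public string) (string, error) {
	canonical, err := json.Marshal(map[string]any{
		"keytype": keytype,
		"scheme":  scheme,
		"keyval":  map[string]string{"public": public},
	})
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(canonical)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	// Public key value taken from the tuf-client init example above.
	id, _ := keyID("ed25519", "ed25519",
		"6cfda23aa48f530aebd5b9c01030d06d02f25876b5508d681675270027af4731")
	fmt.Println(id)
}
```

This is why pk.ID() in the quoted line can be computed purely from the key itself: any party holding the same public key derives the same ID.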

Local metadata which does not verify should not block updates

If a new root.json is downloaded during an update (e.g. because the local one is expired), and the new root.json has completely different keys, then other local metadata can potentially no longer be verified in Client.getLocalMeta, leaving the client unable to update.

Instead of returning an error when local data does not verify, it should be invalidated and re-downloaded, possibly logging the failed verification somewhere.

Support revoking top level keys

From section 6.1 of the TUF spec:

To replace a compromised root key or any other top-level role key, the root
role signs a new root.json file that lists the updated trusted keys for the
role. When replacing root keys, an application will sign the new root.json
file with both the new and old root keys until all clients are known to have
obtained the new root.json file (a safe assumption is that this will be a
very long time or never). There is no risk posed by continuing to sign the
root.json file with revoked keys as once clients have updated they no longer
trust the revoked key. This is only to ensure outdated clients remain able
to update.

Verify that Remote Store Interface returns the correct error code for Not Found for s3

Some parts of the codebase are based on the assumption that the Remote Store Interface implementation for S3 returns the correct error code (i.e., ErrMissingRemoteMetadata). This is important because S3 uses 403 as a response code instead of the commonly used 404.

This issue is to check whether that is the case, and to address it if it is not.

Prevent slow retrieval attacks

Slow retrieval attacks should be prevented by requiring that remote data is fetched in a timely manner.

The Python implementation guards against this attack by reading remote data in small chunks, and signalling an error if reading any given chunk takes longer than a specified time (see here).

Compress manifests

Staging the snapshot and timestamp manifests should optionally compress their dependent manifests, for example:

$ tree staged
staged
└── targets.json

$ tuf snapshot --compression=gzip

$ tree staged
staged
├── snapshot.json
├── targets.json
└── targets.json.gz

$ tuf timestamp --compression=gzip

$ tree staged
staged
├── snapshot.json
├── snapshot.json.gz
├── targets.json
├── targets.json.gz
└── timestamp.json

Comparison with Notary

How does this project compare with https://github.com/theupdateframework/notary? This could be useful to add to the readme.

It seems to me that this project essentially implements the "signer" portions of Notary, without any HTTP services. Is that correct?

When might a user decide to run go-tuf instead of Notary?

Improve client errors

For example ErrNotFound should contain information on what exactly was not found.

Delegation

We should have support for target role delegation.

Support changing keys file passphrase

I should be able to change the passphrase of a keys file, for example:

$ tuf change-passphrase root
Enter current root keys passphrase:
Enter new root keys passphrase:
Repeat new root keys passphrase:

Implementing a custom LocalStore

Hi, I wanted to implement a custom LocalStore for a store on an OCI registry, but I ran into a problem because the parameter type is private

go-tuf/repo.go, line 39 (at aee6270):

type targetsWalkFunc func(path string, target io.Reader) error

I potentially could use the in-memory store and do some before/after conversion, but just wanted to see if it would be acceptable to make this public.

Revoking a key does not remove its signatures

I came across this confusion when trying to issue an update on the targets key, and I'm not sure if it's intended. The following test fails in repo_test.go. What it does is

  • commit a new repository with initial keys.
  • revoke the targets key and add a new one, thereby changing the root.json. indeed, signatures in root.json are updated.
  • re-sign targets.json (with the new key), and snapshot and timestamp
  • commit successfully

However, what we find after committing is that the old signature was NOT removed from targets.json. The old signature still exists, because it didn't get updated (because it's not in the signing key database).

My question is:

  • Should RevokeKey make sure to clear signatures associated with the key in the role file? OR
  • Should Sign only leave signatures for the valid signing keys (instead of leaving old ones)? Note: Commit was successful because verifying signatures skips invalid keys

I am leaning to (1) after initial thought. Dealing with this as early as possible makes sense to me.

func (rs *RepoSuite) TestRevokeTargets(c *C) {
	files := map[string][]byte{"foo.txt": []byte("foo")}
	local := MemoryStore(make(map[string]json.RawMessage), files)
	r, err := NewRepo(local)
	c.Assert(err, IsNil)

	// don't use consistent snapshots to make the checks simpler
	c.Assert(r.Init(false), IsNil)

	genKey(c, r, "root")
	targetIds := genKey(c, r, "targets")
	genKey(c, r, "snapshot")
	genKey(c, r, "timestamp")

	c.Assert(r.AddTarget("foo.txt", nil), IsNil)
	c.Assert(r.Snapshot(CompressionTypeNone), IsNil)
	c.Assert(r.Timestamp(), IsNil)
	c.Assert(r.Commit(), IsNil)

	// Update the targets key
	c.Assert(r.RevokeKey("targets", targetIds[0]), IsNil)
	newTargetIds := genKey(c, r, "targets")

	// Re-sign, snapshot and timestamp
	c.Assert(r.Sign("targets.json"), IsNil)
	c.Assert(r.Snapshot(CompressionTypeNone), IsNil)
	c.Assert(r.Timestamp(), IsNil)
	c.Assert(r.Commit(), IsNil)

	// Signatures in targets.json should only be from the new key.
	checkSigIDs := func(role string, keyIDs ...string) {
		s, err := r.SignedMeta(role)
		c.Assert(err, IsNil)
		c.Assert(s.Signatures, HasLen, len(keyIDs))
		for i, id := range keyIDs {
			c.Assert(s.Signatures[i].KeyID, Equals, id)
		}
	}
	// THIS LINE FAILS
	checkSigIDs("targets.json", newTargetIds...)
}
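For option (2), Sign could drop signatures whose keys are no longer valid for the role before re-signing. This is a minimal sketch of that filtering step, not go-tuf's actual Sign implementation; the Signature struct here is a simplified stand-in for the real metadata type.

```go
package main

import "fmt"

// Signature is a simplified stand-in for go-tuf's signature type.
type Signature struct {
	KeyID string
	Sig   []byte
}

// pruneSignatures keeps only signatures whose key IDs are still valid
// for the role, dropping any left behind by revoked keys.
func pruneSignatures(sigs []Signature, validKeyIDs map[string]bool) []Signature {
	kept := sigs[:0] // reuse the backing array
	for _, s := range sigs {
		if validKeyIDs[s.KeyID] {
			kept = append(kept, s)
		}
	}
	return kept
}

func main() {
	sigs := []Signature{{KeyID: "old-targets-key"}, {KeyID: "new-targets-key"}}
	valid := map[string]bool{"new-targets-key": true}
	for _, s := range pruneSignatures(sigs, valid) {
		fmt.Println(s.KeyID) // prints: new-targets-key
	}
}
```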

Client accepts expires values in non-UTC timezones

Note: description edited to clarify that this is not a go-tuf metadata generation issue.

While looking at the metadata generated for the sigstore root of trust, I noticed that the expires entries in the metadata encode a non-UTC timezone:

"expires": "2021-12-18T13:28:12.99008-06:00"

(from 1.root.json)

Whereas the specification suggests time should always be in UTC:

Metadata date-time follows the ISO 8601 standard. The expected format of the combined date and time string is "YYYY-MM-DDTHH:MM:SSZ". Time is always in UTC, and the "Z" time zone designator is attached to indicate a zero UTC offset. An example date-time string is "1985-10-21T01:21:00Z".

@asraa pointed out below that the expires field is not set by go-tuf, so the issue here (if any) is that the client does not verify that the date-time in the expires field is in UTC.
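A client-side check for this could parse the expires string and reject non-zero UTC offsets. This is a sketch of the idea using the standard library, not code from the go-tuf client; note that Go's time.Parse accepts fractional seconds even though the RFC 3339 layout string does not show them.

```go
package main

import (
	"fmt"
	"time"
)

// expiresIsUTC reports whether an RFC 3339 expires string carries a
// zero UTC offset, as the specification's "Z" designator requires.
func expiresIsUTC(expires string) (bool, error) {
	t, err := time.Parse(time.RFC3339, expires)
	if err != nil {
		return false, err
	}
	_, offset := t.Zone() // offset in seconds east of UTC
	return offset == 0, nil
}

func main() {
	for _, s := range []string{
		"2021-12-18T13:28:12.99008-06:00", // the sigstore value above: not UTC
		"1985-10-21T01:21:00Z",            // the spec's example: UTC
	} {
		ok, err := expiresIsUTC(s)
		fmt.Println(s, ok, err)
	}
}
```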

Flag to specify the hash when adding a file

When adding a file that came from another system, I should be able to provide a flag with the expected hash so that it is verified against the hash tuf calculates. This saves me from performing an additional manual checksum verification before adding the file. It might also make sense to allow moving files from elsewhere on the filesystem as part of the same command?

Update the fast forward recovery implementation

At the time #143 (which updates root) was submitted, there was another open PR describing the fast-forward recovery approach. The current implementation of the root update is based on the state of that PR at the time (https://github.com/mnm678/specification/tree/e50151d9df632299ddea364c4f44fe8ca9c10184).

Specifically, the part of updateRoots that removes some metadata files based on top-level key rotation may need to be updated whenever the fast-forward recovery PR changes.

Flag for expires

There should be a flag, when generating manifests, that changes the expires value from the default.

Passphrase-protect keys

The tuf command should default to saving keys in encrypted form using a passphrase. The passphrase should be passed interactively by default with an option to pass it via an environment variable (as a very weak attempt to prevent it from leaking to observers of ps). An option to disable encryption (like --insecure-plaintext-key or similar) should be provided as well.

The encryption steps are as follows:

  • Retrieve a 32 byte random salt using crypto/rand
  • Pass the salt and passphrase to scrypt with the following parameters:
    • N=32768
    • r=8
    • p=1
    • length=32
  • Retrieve a 24 byte random nonce using crypto/rand
  • Encrypt the entire key JSON (in the same format that the plaintext key is stored) using nacl/secretbox
  • Persist the salt, parameters, nonce, and ciphertext to the JSON file

The N parameter was chosen to be ~100ms of work using the default implementation on the 2.3GHz Core i7 Haswell processor in my late-2013 Apple Retina Macbook Pro (it takes 113ms).

The JSON should probably look something like this:

{
  "_type": "encrypted-key",
  "kdf": {
    "name": "scrypt",
    "params": {
      "N": 32768,
      "r": 8,
      "p": 1
    },
    "salt": "<base64 encoded bytes>"
  },
  "algorithm": {
    "name": "nacl/secretbox",
    "nonce": "<base64 encoded bytes>"
  },
  "ciphertext": "<base64 encoded bytes>"
}

Hash is appended to file name.

I was running through the setup tutorial, and after running "tuf commit" each file's hash was prepended to its file name, which doesn't match the tutorial. If that is the correct behavior, it would be helpful to update the tutorial so that people don't dive into the code thinking there might be a bug to fix.

`tuf clean` can corrupt repo

If I run tuf clean before the first tuf commit, then root.json will be deleted and there is no way to re-create it.

Retain remote version update on error

If an update determines a newer version of a file exists, it should retain that information locally even if there is a subsequent error in the update process.
