enterprise-contract / ec-policies

Rego policies related to RHTAP Enterprise Contract

Home Page: https://enterprisecontract.dev/docs/ec-policies/

Makefile 1.40% Open Policy Agent 95.91% Shell 1.30% Go 1.12% Gherkin 0.28%

ec-policies's People

Contributors

arewm, caugello, cuipinghuo, danbev, dependabot[bot], github-actions[bot], gtrivedi88, joejstuart, lcarva, mbestavros, pipeline-service-staging-ci[bot], ralphbean, rh-tap-build-team[bot], robnester-rh, simonbaird, step-security-bot, yashvardhannanavati, zregvart


ec-policies's Issues

Design and planning for introducing the "critical portion" concept

Background

Currently the "acceptable bundles" mechanism for EC is strict in that every task in the build pipeline must be included in the acceptable bundles list. For teams who want to add their own custom scanners or additional CI into their customized build pipeline, this strict lock-down is annoying. This work is intended to provide a way for custom tasks to be added without diminishing the tight security provided by EC and the acceptable bundles mechanism.

Requirements

The proposal here is to split the pipeline into a "critical" section and (for want of a better term) a "non-critical" section. In the "critical portion" of the build pipeline, only known and acceptable tasks can exist. Any unknown task (i.e. any task not found in the acceptable bundles list) will produce a violation and EC will fail.

Conversely, in the "non-critical portion", a custom task not in the acceptable bundles list is allowed by EC, and will not produce a violation.

Determining how to define the distinction between the critical and non-critical sections of the pipeline is part of the design work required here. Conceptually, the idea is that a task that can't impact the artifact produced by the build is considered non-critical. So a minimal viable definition might be: any task that happens after the image is built and pushed. However, we should also consider related artifacts such as SBOMs. It's likely we want pushing an SBOM to the registry to also be part of the critical section.

Once we have figured out a way to define and identify whether a task is in or out of the critical portion, some changes to the rego rules will also be needed to relax the acceptable bundle requirements for tasks outside the critical portion.
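
The idea above can be sketched in Python (the real rules would be rego). This is a minimal sketch under stated assumptions: the task records, field names (`start_time`, `finish_time`), and the "started before the build finished" heuristic are all illustrative, not the actual Tekton or EC schema.

```python
# Hypothetical sketch: classify pipeline tasks as critical or non-critical.
# Field names and the timing heuristic are assumptions for illustration.

def is_critical(task, build_task):
    """A task is treated as critical if it could influence the built
    artifact, approximated here as: it started before the build finished."""
    return task["start_time"] <= build_task["finish_time"]

tasks = [
    {"name": "clone", "start_time": 1, "finish_time": 2},
    {"name": "build", "start_time": 2, "finish_time": 5},
    {"name": "custom-scan", "start_time": 6, "finish_time": 7},
]
build = next(t for t in tasks if t["name"] == "build")

# Only tasks in this list would need to come from the acceptable bundles.
critical = [t["name"] for t in tasks if is_critical(t, build)]
print(critical)
```

Here the post-build `custom-scan` task lands in the non-critical portion, so an unknown bundle for it would produce no violation.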

This spike task is meant to cover the design and planning for how to do it, and the creation of some stories in the parent Epic to track building the implementation.

Acceptance criteria

  • Invent a way to define what we mean by "critical portion" of the build pipeline
  • Invent a way to decide which tasks are in the critical portion and which are not (which should be implementable in rego somehow)
  • Share the design in a google doc and invite comments/feedback from EC team members and other interested stakeholders
  • Draft a list of stories required to track the implementation
  • Create those stories as Jira issues in the parent Epic

Use .statement.predicate in all places

In #597, we adapted to a change in ec-cli where each item in input.attestations contains an attribute, statement, which contains the actual values of the attestation, e.g. predicate, predicateType, etc.

We introduced a new policy rule, deprecated_policy_attestation_format, to warn users about these changes and encourage them to update their version of ec-cli. This turned into a violation on September 1st.

It has been long enough. Let's cut over to using .statement.predicate instead of just .predicate everywhere.

Let's leave the deprecated_policy_attestation_format policy rule in place as that can be helpful in debugging issues caused by using a very old version of ec-cli.
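
For illustration, the shape change looks roughly like this in Python (the field values are made up; the rules themselves are rego):

```python
# Illustrative sketch of the attestation shape change: the attestation
# body moved under a "statement" key. Values here are placeholders.
old_style = {"predicateType": "https://slsa.dev/provenance/v0.2", "predicate": {}}
new_style = {"statement": old_style}

def predicate(attestation):
    # After the cutover, only the .statement.predicate path is used;
    # the bare .predicate path is no longer consulted.
    return attestation["statement"]["predicate"]

print(predicate(new_style) == old_style["predicate"])
```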

Create policy rules to catch disallowed packages

HACBS-2617 calls for policy rules to disallow the usage of certain packages.

The requirement is that this works with a CycloneDX SBOM attestation.

The proposal below is verbose because it offers reasons for the design decisions. The implementation should be straightforward though.

IMPORTANT: This is likely to use the custom built-in functions provided in enterprise-contract/ec-cli#1053. Beware that if a policy rule uses that built-in function, users must be using a version of ec-cli that provides that function. Otherwise, ec-cli will crash spectacularly.

Proposal

Regarding version comparison, semver is not sufficient. A lot of packages use invalid semver values. For example:

  • v1.1.0 - this one is pretty common and it's actually what the hashicorp packages use. (The leading "v" makes it an invalid semver.)
  • 9.2-0.13.el9 - popular for RPMs, which may not be a concern here. Note that RPM versions are pretty much all over the place. Some do use something that could be considered a valid semver: 1.7.1-12.el9. However, interpreting it as a semver is problematic. In the RPM world, the full version identifier is <version>-<release>. In semver, a dash indicates a pre-release, so 1.1.1 is considered higher than 1.1.1-5, for example.
  • 1.1.4.Final - many Maven packages use this. Not sure if this is documented anywhere.

Trying to guess the version format is not trivial since a certain version could be valid in multiple formats.

With that in mind, let's allow different version formats. To start with, let's support semver, semverv (a variation of semver that allows prefixing it with "v"), and regex (more on that later). Different formats can be added later as needed.

The disallow data is represented like this:

disallowed_packages:
  - purl: pkg:golang/github.com%2Fhashicorp%2Fvault
    format: semverv
    min: v1.13.0
    max: v1.13.9999

Constraints

  • Either min or max is required. If both are specified, max must be greater than min. Both min and max are inclusive.
  • format specifies the version format, e.g. "semver", "semverv".

In the example above, the package at version v1.13.4 would be disallowed, but versions v1.12.9 and v1.14.0 would be allowed.
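
A minimal Python sketch of that min/max check (the real implementation would be rego; the helper names here are made up, and "semverv" handling is reduced to stripping a leading "v"):

```python
# Hypothetical sketch of the disallowed-packages range check.
# parse_semverv and disallowed are illustrative names, not real APIs.

def parse_semverv(version):
    # "semverv" = semver that tolerates a leading "v"; compare numerically.
    return tuple(int(p) for p in version.lstrip("v").split("."))

def disallowed(version, rule):
    v = parse_semverv(version)
    lo = parse_semverv(rule["min"]) if "min" in rule else None
    hi = parse_semverv(rule["max"]) if "max" in rule else None
    # min and max are both inclusive, matching the constraints above.
    return (lo is None or v >= lo) and (hi is None or v <= hi)

rule = {"purl": "pkg:golang/github.com%2Fhashicorp%2Fvault",
        "format": "semverv", "min": "v1.13.0", "max": "v1.13.9999"}
print(disallowed("v1.13.4", rule))  # inside the range: disallowed
print(disallowed("v1.12.9", rule))  # below min: allowed
```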

regexes

A regex can either match something or not. You can't say, for example, that X is greater than a given regex. This makes regex support special since the concepts behind min and max don't apply.

Also, negative lookarounds are not supported in rego which means we must provide a mechanism so the user can specify whether or not matching the regex means the package is disallowed.

Example usage:

disallowed_packages:
  - purl: pkg:golang/github.com%2Fhashicorp%2Fvault
    regex:
      expression: '9\.[0-2]\-.*'
      match: true

The above means that if the version matches the regular expression, the package is not allowed.

Constraints

  • match is an optional attribute. If omitted, it defaults to true.
  • Optionally, the user can add format: regex to the entry but that is not required since we can infer it from the use of the regex attribute. It's supported for completeness.
  • The attribute regex cannot be used together with min or max. Similarly, it cannot be used when format is set to a value other than regex.
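
The match semantics can be sketched in Python like so (assumed names again; note that rego's regex matching is unanchored, whereas this sketch anchors with fullmatch for clarity):

```python
import re

# Hypothetical sketch of the regex-based disallow check described above.
def regex_disallowed(version, entry):
    matched = re.fullmatch(entry["regex"]["expression"], version) is not None
    # match defaults to true: matching the expression means "disallowed";
    # with match: false, *not* matching means "disallowed".
    return matched == entry["regex"].get("match", True)

entry = {"purl": "pkg:golang/github.com%2Fhashicorp%2Fvault",
         "regex": {"expression": r"9\.[0-2]\-.*", "match": True}}
print(regex_disallowed("9.2-0.13.el9", entry))  # matches: disallowed
print(regex_disallowed("9.3-1", entry))         # no match: allowed
```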

Use json schema validation when processing SBOMs

When processing an SBOM, let's leverage json.match_schema to ensure the provided SBOM adheres to the expected schema.

We should do this when processing either CycloneDX or SPDX SBOMs.

Care must be taken regarding which version of the schema is used. Consider creating a list of allowed versions and validating accordingly.

NOTE: In some cases, the schema is more lax than we'd like. For example, we have policy rules that ensure a CycloneDX SBOM provides a non-empty list of components. However, the schema does not enforce this.
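
The point of that note can be shown with a tiny Python sketch. The `schema_valid` function below is a hand-rolled stand-in for rego's `json.match_schema` against the official CycloneDX schema, checking only a couple of illustrative fields; it is not the real schema:

```python
# Illustration of why schema validation alone is not enough: a CycloneDX-
# style document with an empty components list can be schema-valid, yet
# policy rules still want components to be non-empty.

sbom = {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": []}

def schema_valid(doc):
    # Stand-in for json.match_schema: checks a few fields and types only.
    return (doc.get("bomFormat") == "CycloneDX"
            and isinstance(doc.get("specVersion"), str)
            and isinstance(doc.get("components"), list))

def policy_valid(doc):
    # The extra policy-level requirement on top of the schema.
    return schema_valid(doc) and len(doc["components"]) > 0

print(schema_valid(sbom), policy_valid(sbom))
```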

Acceptance Criteria

  • SBOMs are verified against the expected JSON schema.

Move beta.packages to release.sbom_cyclonedx

In #732, new policy rules were added that use the newly added ec.purl.parse custom rego function. To avoid issues with older versions of the ec-cli that do not provide the function, they were added to a new namespace, policy/beta, instead of the usual policy/release.

This issue is about moving those policy rules to policy/release, likely under the sbom_cyclonedx package.

Let's aim for doing this right after January 1st as that should be sufficient time to allow clients to update.

Acceptance Criteria

  • The policy rules from beta.packages are moved under the release namespace.

Allow pushing bundles with custom rego function

We tried pushing an OCI bundle that contains policy rules that use a custom rego function (ec.purl.parse). It was not successful:

2023/11/29 12:01:33 pushing bundle to: quay.io/zregvart_redhat/ec-beta-policy:git-23b1223
Error: push bundle: pushing layers: load: loading policies: get compiler: 2 errors occurred:
policy/beta/packages.rego:39: rego_type_error: undefined function ec.purl.parse
policy/beta/packages.rego:42: rego_type_error: undefined function ec.purl.parse
exit status 1

That is because the bundle is being created by executing conftest push. The conftest CLI doesn't know anything about the ec custom rego functions.

We do have ec opa which wraps the opa cli to avoid this. However, we don't have a wrapper for conftest itself. We could add one. Or we could try switching from conftest bundles to opa bundles.

Acceptance Criteria

  • It is possible to push OCI bundles containing policy rules that use ec's custom rego functions.

New Task definition policy: enforce trusted artifacts convention

We should enforce the trusted artifacts result/parameter naming convention. That is:

Any step utilizing the quay.io/redhat-appstudio/build-trusted-artifacts image should have its positional arguments in the form of $([params|results].*_ARTIFACT)=<any value>.

This rule should be present in the redhat collection.
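
A sketch of what the convention check could look like in Python (the actual rule would be rego, and the exact regular expression below is an assumption derived from the pattern above):

```python
import re

# Hypothetical check for the trusted-artifacts naming convention:
# each positional argument must look like
#   $(params.FOO_ARTIFACT)=<value>  or  $(results.FOO_ARTIFACT)=<value>
ARG_RE = re.compile(r"^\$\((params|results)\.[A-Za-z0-9_]*_ARTIFACT\)=.*$")

def args_conform(positional_args):
    return all(ARG_RE.fullmatch(a) for a in positional_args)

print(args_conform(["$(params.SOURCE_ARTIFACT)=$(workspaces.source.path)"]))
print(args_conform(["--output=/tmp/out"]))
```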

Create policy rules to process an SBOM

Description

Once enterprise-contract/ec-cli#1070 and enterprise-contract/ec-cli#1071 are completed, let's introduce some policy rules that use the newly added rego function to assist in processing a CycloneDX SBOM. See example policy rule.

The policy rules should be relatively basic for this story. Consider something similar to what was done for SPDX SBOMs.

As a stretch goal, consider writing the policy rules in such a way that the SBOM could be fetched from the SLSA Provenance, or from the list of attestations.

Acceptance Criteria

  • Policy rules exist that can process the CycloneDX SBOM referenced in the SLSA Provenance (or loaded as a file from the image).

Support test tasks (producing TEST_OUTPUT results) that should not produce EC violations on test failures

Background

The motivation for this is for RHTAP's sast-snyk-check task.

The policy for this task, as I understand it, is that:

  • It must be present in the pipeline
  • It must not be skipped
  • It must produce a TEST_OUTPUT task result with the usual format
  • However, unlike the current behavior, and the expected behavior for other tasks producing a TEST_OUTPUT task result, a non-zero failure count should not produce an EC violation.
  • (This requirement was less clear, but) I think producing an EC warning would be the reasonable thing to do in the case where sast-snyk-check reported some failures in its TEST_OUTPUT result.

Requirement

So the requirement is for EC to be able to enforce the policy as stated above.

Implementation:

How to do this is open, but perhaps something like this would work:

  • Introduce a new piece of rule data with a suitable name like warn_only_for_test_failures that contains a list of task names.
  • The default for this can be an empty list (but perhaps in the RH specific RHTAP data config it will include a single value, sast-snyk-check).
  • Modify the existing rego rule test.required_tests_passed so it doesn't produce a violation for tasks in that list.
  • Introduce a new rego rule that examines only tasks in that list. It should produce a warning if there are failures present in TEST_OUTPUT.
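
The proposed split can be sketched in Python (the rule-data name warn_only_for_test_failures is the proposal above, not a final name; task records here are simplified):

```python
# Sketch of the violation/warning split for TEST_OUTPUT failures.
# Tasks in this list produce warnings instead of violations on failures.
warn_only_for_test_failures = ["sast-snyk-check"]

results = [
    {"task": "sast-snyk-check", "failures": 2},
    {"task": "unit-tests", "failures": 1},
]

violations = [r["task"] for r in results
              if r["failures"] > 0
              and r["task"] not in warn_only_for_test_failures]
warnings = [r["task"] for r in results
            if r["failures"] > 0
            and r["task"] in warn_only_for_test_failures]
print(violations, warnings)
```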

Support different output artifacts from Chains

Some of the policies in this repo look at the IMAGE_URL and IMAGE_DIGEST results included in the SLSA Provenance attestation generated by Tekton Chains. This is often used to answer the question "which Task produced the image being validated?"

However, those are not the only results that Chains understands.

Acceptance Criteria

Create generic SLSA policies

The policy rules in the slsa3 collection assume the SLSA Provenance was created for images built from a Tekton Pipeline.

This makes them not suitable for verifying images built on other build systems like GitLab and GitHub.

Acceptance Criteria

  • It is possible to perform basic SLSA checks with EC on images built outside of Tekton.

Add acceptance tests

Similarly to the acceptance tests in enterprise-contract/ec-cli, let's add some tests here that execute against the output of services/apps, instead of mocked data.

Acceptance Criteria

  • Create at least one test that executes against the provenance attestation output provided by Tekton Chains.
  • As part of the test execution,
    • Spin up a new Kubernetes-like cluster, e.g. kind
    • Install Tekton Pipelines and Tekton Chains.
    • Spin up an OCI registry.
    • Execute a Pipeline that generates an image which is then signed and attested by Chains.
  • An EC contributor should be able to execute acceptance tests on their laptop without requiring access to a managed service, e.g. AWS account to create a cluster.

Do consider copying the "framework" from ec-cli.

There's quite a bit of work in implementing these acceptance tests, so let's keep the first test very very simple. Things that are out of scope:

  • Identity based signatures (aka keyless)
  • Any sort of Rekor integration
  • Anything that is not strictly necessary to achieve the acceptance criteria

How to safely rename policy rules

In #793, the policy rule test.required_tests_passed is renamed to test.no_failed_tests.

This can have implications for users who explicitly include or exclude that rule. Let's come up with a mechanism that allows us to do this sort of operation in a more robust manner.

Add support for collections in all namespaces

Today, collections are widely used in the policies under the release namespace. The docs know how to display them. This is nice.

This issue is about adding the same level of support for all namespaces (or, at a bare minimum, to the task namespace). If we try to do that today, the website fails to build with an obscure error:

[15:46:50.902] FATAL (antora): Cannot read properties of undefined (reading 'title')

This seems to originate from here.

Acceptance Criteria

  • When using collections in the policy rules from the task namespace, the website build does not break and they are properly described in the website.
  • Revert #873
  • As a stretch goal, make this true for all namespaces.
