hull

Hull is a Go testing framework for writing comprehensive tests on Helm charts.

Once you have defined your suite of tests targeting a specific chart (or multiple charts) using Hull, you can simply run your suite(s) of tests by running go test.

Who needs Hull?

Anyone who maintains a Helm repository or a set of Helm charts and would like to add an automated testing suite that allows them to lint charts and set up unit tests.

For more information on why you might want to use Hull, see the About guide.

Prerequisites

You will need to install the following dependencies locally on your machine to run Hull successfully:

  • Go (the minimal requirement, needed to run go test)
  • Yamllint (only required if you use Hull to run YAML linting on manifests produced by helm template commands)

Getting Started

Please see examples/example_test.go for an example of a Go test written in this fashion against the chart located in testdata/charts/example-chart. To run the example test, simply run:

go test examples/example_test.go

Under the hood, Hull leverages github.com/stretchr/testify/assert for test assertions; it is recommended, but not required, for users to also use this framework when designing Hull tests.
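
For illustration, here is a minimal testify-based test of the kind a Hull suite would build on; the value being checked is a placeholder rather than output from a real chart:

package example_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestReplicaCount(t *testing.T) {
	// In a real Hull test, this value would be read from a rendered manifest.
	replicas := 3
	assert.Equal(t, 3, replicas, "expected the deployment to request 3 replicas")
}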

Developing

Which branch do I make changes on?

Hull is built and released off the contents of the main branch. To make a contribution, open up a PR to the main branch.

For more information, see the Developing guide.

Building

make

Running

./bin/hull

License

Copyright (c) 2022 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


hull's Issues

Add examples and fixes around working with subcharts

Need to add a more complex use case of a Helm chart that uses a local subchart (one that exists in the charts/ directory) or a remote subchart (one that is called out in the dependencies of the Chart.yaml).

In general, this issue should be an Epic around figuring out the right experience for working with subcharts in Hull.

For example, perhaps it should be possible to define a test.Suite for the subchart that can be embedded into the test.Suite for the main chart?
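
A rough sketch of that embedding idea, using stand-in types since Hull's actual test.Suite API may differ:

package subchart

// Suite is a stand-in for Hull's test.Suite; it only illustrates the shape
// the issue describes, where a subchart's suite is declared once and
// embedded into the suite for the main chart.
type Suite struct {
	ChartPath string  // hypothetical field naming
	Subsuites []Suite // suites for local or remote subcharts
}

// Flatten returns the suite together with all of its embedded subchart
// suites, so they can be run as a single set of tests.
func Flatten(s Suite) []Suite {
	out := []Suite{s}
	for _, sub := range s.Subsuites {
		out = append(out, Flatten(sub)...)
	}
	return out
}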

Generating a JSON schema should not be done automatically

Currently, Hull automatically overwrites the JSON schema when it sees a test error; however, this behavior can result in pitfalls, since users do not expect running a test to modify the content of what is being tested.

Instead, the automatic overwrite should be guarded by an environment variable that must be set on the go test run (OVERWRITE_SCHEMA=1) for this action to be performed.
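
A minimal sketch of such a guard, assuming a hypothetical helper that the schema-overwriting code path would consult before doing anything destructive:

package schema

import "os"

// overwriteSchemaAllowed reports whether the destructive schema overwrite
// may run; per this proposal, it is only permitted when the user opts in
// explicitly on the go test invocation.
func overwriteSchemaAllowed() bool {
	return os.Getenv("OVERWRITE_SCHEMA") == "1"
}

A run that opts in would then look like: OVERWRITE_SCHEMA=1 go test ./...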

Support the ability to generate templates on tests based on a live Kubernetes cluster

Currently, Hull utilizes e.Render instead of e.RenderWithClient in the following code used to generate a template:

https://github.com/aiyengar2/hull/blob/33cf243dca452b3fa1d5d471d6bcd6806e2f4205/pkg/chart/chart.go#L75-L78

As a result, the output of the template that is generated mimics what would be created in a helm template or helm install --dry-run instead of a true helm install (which would support additional functionality like verifying the output from lookup calls).

To let tests mimic parsing the template generated during a Helm install instead, we should allow users to set USE_CLUSTER_FROM_ENV=1, which should populate the client used in this call with one generated by finding the current KUBECONFIG from the environment (which can be done by leveraging the code here).

This KUBECONFIG should then be used to grab the latest Helm release secret for the given chart, which contains the information necessary to construct a chart.Template.
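
A minimal sketch of how that client configuration could be built, assuming the USE_CLUSTER_FROM_ENV gate described above; clientcmd is the standard client-go helper for loading a kubeconfig, while the function itself is hypothetical:

package cluster

import (
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// restConfigFromEnv returns a client configuration for the cluster named by
// the KUBECONFIG environment variable, or nil when the user has not opted
// in, in which case Hull would fall back to the plain e.Render behavior.
func restConfigFromEnv() (*rest.Config, error) {
	if os.Getenv("USE_CLUSTER_FROM_ENV") != "1" {
		return nil, nil
	}
	return clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
}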

Add `lookup` and `.Capabilities` to coverage

When templates have special Helm actions like lookup or .Capabilities calls, these should be calculated as part of the unit testing coverage calculations to ensure that the chart is fully tested.

This would require introspecting the Helm templates that have been provided to understand what is specifically being queried against to include those as part of the coverage calculations.

Note: This is a fairly difficult problem that I do not currently have a solution for, so if you have any suggestions, I'd appreciate them in the comments of this issue!
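
As a purely naive starting point (text matching on the raw template source, not the template-tree introspection a complete solution would need), the special actions could be located like this:

package coverage

import "regexp"

// specialActions matches the Helm actions called out in this issue; a real
// implementation would walk the parsed template tree instead of the text.
var specialActions = regexp.MustCompile(`\blookup\b|\.Capabilities\b`)

// findSpecialActions returns every lookup or .Capabilities usage found in
// the raw text of a template file.
func findSpecialActions(templateText string) []string {
	return specialActions.FindAllString(templateText, -1)
}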

Support the ability to generate templates on tests based on a mocked Kubernetes cluster

As a followup to https://github.com/aiyengar2/hull/issues/2: while the solution requested in that issue allows users to mimic a helm install instead of a helm template / helm install --dry-run, it does not allow for environment-agnostic tests, since the underlying Kubernetes cluster may differ from user to user and can therefore produce different test results.

Instead of assuming that a consistent environment is used in all test runs, Hull should offer the ability to start an HTTP server on a port that mocks one or more "read-only" API servers preconfigured to contain certain manifests of resources.

Ideally, from a user standpoint, a user would simply need to provide a manifest of resources (either a manifest string that can be go:embed-ed from a file or a []runtime.Object), and Hull would generate a mock API server along with a rest.Config that points to it for use in the e.RenderWithClient call. This mocked API server would then serve as the basis for calling Helm, resulting in tests that are fully environment-agnostic, since they no longer need a live Kubernetes cluster to be set up in advance.
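
A minimal sketch of the server half of this idea, using the standard library's httptest; the handler is left to the caller, since a real mock would need to answer API discovery and resource reads from the preconfigured manifests:

package mock

import (
	"net/http"
	"net/http/httptest"

	"k8s.io/client-go/rest"
)

// newMockCluster starts a local HTTP server backed by the given handler and
// returns it along with a rest.Config pointing at it, which could then be
// passed through to the e.RenderWithClient call.
func newMockCluster(handler http.Handler) (*httptest.Server, *rest.Config) {
	srv := httptest.NewServer(handler)
	return srv, &rest.Config{Host: srv.URL}
}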

[Bug] error from parser when it tries to parse NOTES.txt file of chart

Describe the bug

While rendering the templates of a chart, the parser tries to parse NOTES.txt and extract Kubernetes objects from it. During parsing, only the following case is handled:

if strings.HasPrefix(err.Error(), "error unmarshaling JSON: while decoding JSON: Object 'Kind' is missing in ") {
	// not a valid kubernetes object, but some valid JSON
	continue
}

If any other error occurs, the test case fails.

To Reproduce

  1. Replace the content of the example chart's NOTES.txt file (testdata/charts/example-chart/templates/NOTES.txt) with the following:
Welcome to Kiali! For more details on Kiali, see: https://kiali.io

The Kiali Server [{{ .Chart.AppVersion }}] has been installed in namespace [{{ .Release.Namespace }}]. It will be ready soon.

(Helm: Chart=[{{ .Chart.Name }}], Release=[{{ .Release.Name }}], Version=[{{ .Chart.Version }}])

  2. Run the test added for the example chart (examples/tests/example/example_test.go) to get the following error in the output:
* error converting YAML to JSON: yaml: line 6: could not find expected ':'

Result
Test failed.

Expected Result

Test should pass.
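
One possible direction for a fix, sketched as an illustrative helper (the real parser's file handling may differ): skip files that Helm itself never treats as manifests, such as NOTES.txt, before attempting to decode Kubernetes objects at all.

package parser

import "path/filepath"

// isManifestTemplate reports whether a rendered file should be parsed for
// Kubernetes objects; NOTES.txt and other non-YAML files never contain
// manifests, so decode errors from them should not fail the test.
func isManifestTemplate(path string) bool {
	if filepath.Base(path) == "NOTES.txt" {
		return false
	}
	ext := filepath.Ext(path)
	return ext == ".yaml" || ext == ".yml"
}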


Add some default NamedChecks that can be easily imported into Rancher charts

For common fields like global.cattle.psp.enabled or global.cattle.systemDefaultRegistry that have well-defined expected behaviors, Hull should contain some default NamedChecks that can be imported into charts.

There are also general "best practice" NamedChecks that could be defined, such as verifying that every workload has nodeSelectors and tolerations attached for Windows or Linux (depending on the type of image being used) and that the image in use has no known security vulnerabilities.
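
A rough sketch of what one such reusable check might look like, using stand-in names since Hull's actual NamedCheck type is not shown here:

package checks

import (
	"testing"

	"github.com/stretchr/testify/assert"
	appsv1 "k8s.io/api/apps/v1"
)

// assertHasSchedulingRules is a stand-in for a "best practice" check: every
// workload should pin itself to its target OS via nodeSelectors and
// tolerations.
func assertHasSchedulingRules(t *testing.T, d *appsv1.Deployment) {
	podSpec := d.Spec.Template.Spec
	assert.NotEmpty(t, podSpec.NodeSelector, "workload %s should set a nodeSelector for its target OS", d.Name)
	assert.NotEmpty(t, podSpec.Tolerations, "workload %s should tolerate its target OS's taints", d.Name)
}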

Write basic library functions to make it easier to write tests

Hull requires basic functions to make it easier for users to get started with running checks on Helm charts without having to define their own advanced CheckFunctions.

Additional utilities around chaining check functions would be useful too; this needs to be investigated against existing charts.
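
A rough sketch of what such a chaining utility could look like, with CheckFunc as a stand-in for Hull's real check function type:

package checks

import "testing"

// CheckFunc is a stand-in for Hull's check function type, which may differ.
type CheckFunc func(t *testing.T, obj interface{})

// Chain composes several checks into a single one that runs them in order,
// so common checks can be reused and combined without boilerplate.
func Chain(checks ...CheckFunc) CheckFunc {
	return func(t *testing.T, obj interface{}) {
		for _, check := range checks {
			check(t, obj)
		}
	}
}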
