keikoproj / kubedog
Kubedog is a Godog (cucumber) wrapper with pre-defined step implementations for Kubernetes/AWS.
License: Apache License 2.0
It looks like gobdd might join cucumber; it might replace godog or be combined with it.
Joined forces discussion: go-bdd/gobdd#137
Add docs around development:
Write basic unit tests for the existing step implementations
When executing bdd tests, it may be the case that a user wants to execute a script or CLI command and make sure the command executes successfully. This would be particularly helpful if you are developing a CLI tool around Kubernetes and are using bdd testing for the tool.
In order to support this, kubedog should consider adding a step for executing a command, such as:
ctx.Step(`^I run the "(\S+)" command with the "([^"]*)" args`, runCommand)
The function could look something like:
import (
	"bytes"
	"os/exec"
	"strings"

	log "github.com/sirupsen/logrus"
)

func runCommand(command string, args string) error {
	// Split to support args being passed from the .feature file;
	// godog does not support slice parameter types.
	splitArgs := strings.Split(args, " ")
	toRun := exec.Command(command, splitArgs...)
	var stderr bytes.Buffer
	toRun.Stderr = &stderr
	log.Infof("Running command: %s", toRun.String())
	if err := toRun.Run(); err != nil {
		log.Error(stderr.String())
		return err
	}
	return nil
}
This would allow users to run a command with arguments and make sure the command exits without error.
It may also be the case that users want to assert the command fails. So the step above could be written as follows to support this case:
ctx.Step(`^I run the "(\S+)" command with the "([^"]*)" args and the command "(fails|succeeds)"`, runCommand)
Here is the test feature file:
Feature: install my resource
  Scenario: Install my resource
    Given valid AWS Credentials
    And a Kubernetes cluster
    And I create the resources in test.yaml
    Then the resource test.yaml should be created
Here is test.yaml:
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: mayally0328-cpu-utilization
  namespace: app-albcanary-base
spec:
  args:
  - name: namespace
  - name: stable-hash
  - name: canary-hash
  - name: prometheus-port
  - name: cpu-utilization-limit-perc
  - name: initial-delay
    value: 1m
  - name: count
    value: "10"
  - name: interval
    value: 60s
  - name: failure-limit
    value: "1"
  - name: inconclusive-limit
    value: "1"
  metrics:
  - count: '{{args.count}}'
    failureLimit: '{{args.failure-limit}}'
    inconclusiveLimit: '{{args.inconclusive-limit}}'
    initialDelay: '{{args.initial-delay}}'
    interval: '{{args.interval}}'
    name: cpu-utilization
    provider:
      prometheus:
        address: http://prometheus.addon-metricset-ns.svc.cluster.local:{{args.prometheus-port}}
        query: (quantile(0.5, quantile_over_time(0.5, namespace_pod_cpu_utilization{namespace="{{args.namespace}}", pod=~".*-{{args.canary-hash}}-.*"}[11m])))
    successCondition: result[0] <= {{args.cpu-utilization-limit-perc}}
When running godog features, the following error is thrown:
ERRO[0002] Failed deleting old test resources: 'template: Resource:93: function "args" not defined'
We use the UM BDD as a usage example; we need to update it.
Use the non-capturing group (?:<optional-text>)? to make the step syntax more flexible without having to duplicate steps.
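A minimal sketch of the idea; the step text here is illustrative, not one of kubedog's actual patterns:

```go
package main

import (
	"fmt"
	"regexp"
)

// One pattern matches both "pods in namespace ..." and "the pods in
// namespace ..." because "(?:the )?" is optional and captures nothing,
// so the handler's arguments are unchanged.
var stepPattern = regexp.MustCompile(`^(?:the )?pods in namespace (\S+) are ready$`)

// namespaceFromStep returns the captured namespace, if the step matches.
func namespaceFromStep(s string) (string, bool) {
	m := stepPattern.FindStringSubmatch(s)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	for _, s := range []string{
		"pods in namespace kube-system are ready",
		"the pods in namespace kube-system are ready",
	} {
		ns, ok := namespaceFromStep(s)
		fmt.Println(ns, ok) // kube-system true
	}
}
```

Because the group is non-capturing, adding the optional article does not shift the positional arguments godog passes to the step function.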
Add a GenerateTemplatedYaml like function to generate templated yamls generically. That can be used by users for any yaml templating needs other than k8s resources. Like configs, etc.
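Such a function could be a thin wrapper over text/template. The sketch below is a guess at the shape; the name and signature are illustrative, not kubedog API:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// generateTemplatedYaml renders any YAML (or other text) template with the
// given values, independent of Kubernetes resources. Hypothetical signature.
func generateTemplatedYaml(tmpl string, values map[string]string) (string, error) {
	t, err := template.New("yaml").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := t.Execute(&out, values); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, _ := generateTemplatedYaml("app: {{.name}}\nreplicas: {{.replicas}}\n",
		map[string]string{"name": "demo", "replicas": "3"})
	fmt.Print(out)
}
```

Since nothing here is Kubernetes-specific, the same helper would cover configs and other templating needs.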
Most of the Unit Test:
Write an E2E functional test that exercises each and every step end to end in a real Kubernetes cluster. For this we would need access to a running cluster; we could use the IM or UM test clusters. We could implement it as a functional test that calls all the steps from a *.feature file and uses the minimal required resource files in a template directory.
There is a general state assumed to be used for the whole test suite. We need to restructure that so that each scenario has its own state, or safe access to the suite's state, so that scenarios can be executed in parallel. Godog already supports this.
Look into how to clean up and restructure the kubedog code base now that the Arktika test code is there. Maybe break packages into files by specific topic.
Docs for #40
When using the update operation step, the following scenario can happen:
Instead of failing in this scenario, the step should create the new resource if it doesn't exist and also update all of the existing resources defined in the manifest.
This is important because you may have a tool that generates manifests and in later versions adds additional resources and want to validate they can be created successfully.
There are several update operation steps, so ideally we will fix all update operations in kubedog as part of this issue.
As noted in #58, kubedog's use of the text/template library for parsing resources from yaml files can lead to the following error when working with Argo Rollouts resources:
template: Resource:31: function "args" not defined
Argo Rollouts resources like AnalysisTemplates feature an {{args.property}} parameter syntax that conflicts with text/template's interpretation of {{}}. text/template interprets this syntax as a function call to be executed while rendering the template; since no such function exists here, the error above occurs.
The workaround in #58 works if manifests are defined in advance for tests, but if you need to generate manifests as part of a test, it would be very difficult to apply the workaround for each use of {{args.property}} defined in an analysis template.
kubedog should amend its use of templating to support the Argo Rollouts syntax. One possible solution would be to skip templating entirely when no templating values are passed in.
This can be reproduced using the same steps mentioned in #58.
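The "skip templating when no values are passed" idea could be sketched like this; renderResource is an illustrative name, not the kubedog implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderResource leaves the raw manifest untouched when no template values
// are supplied, so Argo-style {{args.x}} placeholders survive unparsed and
// never hit text/template's "function not defined" error.
func renderResource(raw string, values map[string]string) (string, error) {
	if len(values) == 0 {
		return raw, nil
	}
	t, err := template.New("resource").Parse(raw)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := t.Execute(&out, values); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// With no values, the Argo Rollouts syntax passes through verbatim.
	out, err := renderResource(`successCondition: result[0] <= {{args.limit}}`, nil)
	fmt.Println(out, err)
}
```

Tests that do need templating would still pass values and get the normal text/template behavior; only the no-values path changes.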
We need to integrate this into the CI such that it fails if running go generate alters syntax.md; that would mean the contributor did not update the docs as part of the PR.
Continue work of #76
Make the DefaultWaiter customizable so that it can be modified by users. Depending on the use case, a larger DefaultWaiter might be required.
Implement the same or better improvements of:
kdt.scenarioContext.Step(`^(some|all) pods in namespace (\S+) with selector (\S+) have "([^"]*)" in logs since ([^"]*) time$`, kdt.KubeContext.SomeOrAllPodsInNamespaceWithSelectorHaveStringInLogsSinceTime)
In:
kdt.scenarioContext.Step(`^some pods in namespace (\S+) with selector (\S+) don't have "([^"]*)" in logs since ([^"]*) time$`, kdt.KubeContext.SomePodsInNamespaceWithSelectorDontHaveStringInLogsSinceTime)
kdt.scenarioContext.Step(`^(?:the )?pods in namespace (\S+) with selector (\S+) have no errors in logs since ([^"]*) time$`, kdt.KubeContext.ThePodsInNamespaceWithSelectorHaveNoErrorsInLogsSinceTime)
kdt.scenarioContext.Step(`^(?:the )?pods in namespace (\S+) with selector (\S+) have some errors in logs since ([^"]*) time$`, kdt.KubeContext.ThePodsInNamespaceWithSelectorHaveSomeErrorsInLogsSinceTime)
Key improvement: a single step handles both cases via the (some|all) alternation instead of duplicating the step.
We should have a basic CI pipeline using Github Actions that does the following:
Look into how to make the Arktika code base now in kubedog more generic and parameterized. Define a target state for each new step and create individual tickets for each one.
// TODO: define default suite hooks if any, check that the suite context was set
// TODO: define default scenario hooks if any
// TODO: define default step hooks if any
Implementation of initial/basic Kubernetes & AWS steps
When an object has no namespace, kubedog fails with the following error: "the server does not allow this method on the requested resource". That is not a clear error message.
❯ godog run features/no-namespace.feature
Feature: install my resource
INFO[0000] [KUBEDOG] Credentials: arn:aws:sts::663374536332:assumed-role/PowerUser/IntuitOlympus-agaro-50002266530-1657559762083
Scenario: Install my resource # features/no-namespace.feature:3
Given valid AWS Credentials # <autogenerated>:1 -> *Client
And a Kubernetes cluster # <autogenerated>:1 -> *Client
And I create the resources in ingress.yaml # <autogenerated>:1 -> *Client
the server does not allow this method on the requested resource
Then the resource ingress.yaml should be created # <autogenerated>:1 -> *Client
--- Failed steps:
Scenario: Install my resource # features/no-namespace.feature:3
And I create the resources in ingress.yaml # features/no-namespace.feature:7
Error: the server does not allow this method on the requested resource
1 scenarios (1 failed)
4 steps (2 passed, 1 failed, 1 skipped)
2.783943435s
We should improve that. To do so, we need to parse the objects, look for a missing namespace, and log that properly.
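The detection itself is simple once the manifest is decoded into a generic map. This sketch uses JSON and plain maps to stay dependency-free; kubedog would be working with YAML decoded into unstructured objects, and the function name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// namespaceOf digs metadata.namespace out of a decoded manifest, returning
// "" when it is absent so the caller can emit a clear error instead of the
// opaque API-server message.
func namespaceOf(manifest map[string]interface{}) string {
	meta, ok := manifest["metadata"].(map[string]interface{})
	if !ok {
		return ""
	}
	ns, _ := meta["namespace"].(string)
	return ns
}

func main() {
	var obj map[string]interface{}
	doc := `{"kind":"Ingress","metadata":{"name":"test"}}`
	if err := json.Unmarshal([]byte(doc), &obj); err != nil {
		panic(err)
	}
	if namespaceOf(obj) == "" {
		fmt.Println(`manifest "test" has no metadata.namespace; namespaced resources require one`)
	}
}
```

With a check like this before submission, the step can fail fast with a message naming the offending manifest.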
When using the steps referenced below, I would like the ability to update
resources defined in manifests.
Lines 51 to 54 in 6b5a6f4
Currently, my workaround is to use kubectl apply as part of testing via a custom step, but I think it's a common enough use case for kubedog itself to support using the dynamic client.
This should include an option to specify a namespace like the other steps currently feature.
It should also be considered whether to make create idempotent (i.e. update resources if they already exist instead of skipping the operation as is currently done).
kubedog/pkg/kubernetes/kube.go
Line 168 in d84d160
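The idempotent create-or-update behavior discussed above reduces to a small branching pattern. In this sketch an in-memory map stands in for the cluster; kubedog would issue the corresponding dynamic-client Get/Create/Update calls instead:

```go
package main

import "fmt"

// store stands in for the cluster: resource name -> spec.
type store map[string]string

// upsert creates the resource when absent and updates it when present,
// instead of skipping existing resources as create currently does.
func (s store) upsert(name, spec string) (action string) {
	if _, exists := s[name]; exists {
		s[name] = spec
		return "updated"
	}
	s[name] = spec
	return "created"
}

func main() {
	cluster := store{}
	fmt.Println(cluster.upsert("my-deployment", "v1")) // created
	fmt.Println(cluster.upsert("my-deployment", "v2")) // updated
}
```

This is what makes the "tool adds resources in later versions" case work: re-running the step creates the new resources and refreshes the existing ones.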
I have been wanting to add automatic documentation generation for the docs/syntax.md file. As next steps we need to integrate this into the CI such that it fails if running go generate alters syntax.md. That would mean the contributor did not update the docs as part of the PR.
// TODO: support multiple ASG
Use all k8s-related steps within a time span, e.g. <GK> the resource <filename> should be <state> **within <time-frame>**
These k8s Clients/APIs are in v0.x.x and can break contracts/signatures at any time. They do not follow the Go semantic import versioning standard for their module paths. This causes dependency issues when projects that use them import Kubedog for their BDDs.
Lines 14 to 16 in 018e64a
The discussion about this issue starts here. It contains some of the reasons why some of the k8s.io maintainers don't want to move to the standard.
Until the above issue is resolved, each new release of Kubedog needs several versions/tags that use different versions of these dependencies. Which ones, and how, remains to be defined; maybe a CD pipeline would be needed for this, and better testing as well.
Run the examples as part of the Makefile under test or build to make sure the examples work and are not broken. If a change breaks them, this will bring that to the surface.
The steps below can be added to feature files to update/upsert resources during tests.
Lines 48 to 53 in bb3a5e1
In some circumstances, there may be resources that are acted on by other services within a testing environment (e.g. Horizontal Pod Autoscaler). If that resource is updated by another actor, you can get errors like below:
Error: Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "iks-express-test-asset-rollout-hpa": the object has been modified; please apply your changes to the latest version and try again
To get around this, we can always refresh the resource version of the resource that is about to be updated in the test. However, we may need to retry in some cases if the resource is being continuously updated in a way that conflicts with the test's updates. Implementing a retry option for these steps may help avoid these conflicts during tests.
Support the use of resource definition yaml files with custom templates in the kube package.
We need basic documentation on the usage of Kubedog, which should include:
NodesWithSelectorShouldBe
runtime error: invalid memory address or nil pointer dereference
runtime.gopanic
/usr/local/go/src/runtime/panic.go:965
runtime.panicmem
/usr/local/go/src/runtime/panic.go:212
runtime.sigpanic
/usr/local/go/src/runtime/signal_unix.go:734
github.com/keikoproj/kubedog/pkg/kubernetes.(*Client).NodesWithSelectorShouldBe
github.com/keikoproj/kubedog/pkg/kubernetes/kube.go:342
In this case kubernetes.Interface was not set, since AKubernetesCluster was not called before NodesWithSelectorShouldBe. This should be handled better in this method and probably in all the other methods.
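A guard at the top of each method would turn the panic into a clear step failure. The sketch below mirrors the shape of the problem with illustrative names; kubedog's actual field is a kubernetes.Interface:

```go
package main

import (
	"errors"
	"fmt"
)

// Client mimics a step-context struct whose interface a prior step
// ("a Kubernetes cluster") was supposed to populate.
type Client struct {
	KubeInterface interface{} // would be kubernetes.Interface in kubedog
}

// validate fails with an actionable message instead of letting a later
// method dereference a nil interface.
func (c *Client) validate() error {
	if c.KubeInterface == nil {
		return errors.New(`kubernetes client is not set; did the scenario run "a Kubernetes cluster" first?`)
	}
	return nil
}

func (c *Client) NodesWithSelectorShouldBe(selector, state string) error {
	if err := c.validate(); err != nil {
		return err // fail the step cleanly instead of panicking
	}
	// ... real node listing and state checks would go here ...
	return nil
}

func main() {
	var c Client
	fmt.Println(c.NodesWithSelectorShouldBe("node-role=worker", "ready"))
}
```

The same validate call can be dropped into every method that assumes the cluster step already ran.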
DeleteAllTestResources should only try to delete existing resources, instead of submitting deletions for everything without confirming existence first.
We need to add more specific step syntax and implementation around core/v1 resources like pods, nodes, etc.
The following generic resource syntaxes are good examples:
<GK> I <operation> the resource <filename>.yaml
<GK> the resource <filename> should be <state>
<GK> the resource <filename> [should] converge to selector <complete key>=<value>
<GK> the resource <filename> condition <condition type> should be (true|false)
<GK> I update a resource <filename> with <complete key> set to <value>
More details about those here.
Automatically detect if a step is not being called in E2E test and make it part of the CI.
Test ticket: #91
AnASGNamed panics with "index out of range [0] with length 0" if the ASG doesn't exist. This case should be handled better.
Error: runtime error: index out of range [0] with length 0
runtime.gopanic
/usr/local/go/src/runtime/panic.go:965
runtime.goPanicIndex
/usr/local/go/src/runtime/panic.go:88
github.com/keikoproj/kubedog/pkg/aws.(*Client).AnASGNamed
/Users/agaro/go/pkg/mod/github.com/keikoproj/[email protected]/pkg/aws/aws.go:53
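The fix is to guard the indexing before dereferencing the first element. This sketch uses a plain string slice standing in for the DescribeAutoScalingGroups result; the function name is illustrative:

```go
package main

import "fmt"

// firstASG replaces the bare groups[0] that currently panics: an empty
// result becomes a descriptive error naming the missing ASG.
func firstASG(name string, groups []string) (string, error) {
	if len(groups) == 0 {
		return "", fmt.Errorf("no AutoScalingGroup found with name %q", name)
	}
	return groups[0], nil
}

func main() {
	if _, err := firstASG("missing-asg", nil); err != nil {
		fmt.Println(err)
	}
}
```

Returning an error lets godog report a failed step with the ASG name instead of a stack trace.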
// TODO: support multiple resources
Support multiple resources of the same type, like pods for example. Handle those by a generic name, like my-pod-*.
Also something along the lines of associating manifests with aliases; those aliases could then be used in the step syntax instead of the manifest file name.
Look into improving the doc generation with subtitles. The subtitles could be based on the topics we break the packages into.
Once #43 is done, add docs for dependency management around k8s.io api/apimachinery/client-go. There are two main points: