kubernetes-sigs / controller-runtime
Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
License: Apache License 2.0
This should also respect the deadline semantics for the context argument.
There's currently no way to pass additional flags to the API server. For instance, if I want to use an alpha feature (like PriorityClass, prior to 1.11.0), I need to set a flag on the API server.
Redo #106 in a backward compatible way.
We can probably follow the approach suggested in #146 (comment).
Currently, I do not see a way to use subresources from the client.
I suggest that we add a variadic argument for subresources; for example: https://github.com/kubernetes/client-go/blob/master/dynamic/interface.go#L32
Or we may need to add the subresources to the Options for functions that already take a variadic argument?
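A rough sketch of what the variadic form could look like (hypothetical names and signature, modeled on the dynamic client linked above; not the current API):

package client // hypothetical sketch only, not the real client package

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
)

// SubresourceWriter sketches the proposed shape: the trailing variadic
// strings name subresources, e.g. "status" or "scale".
type SubresourceWriter interface {
	Update(ctx context.Context, obj runtime.Object, subresources ...string) error
}

A call against the status subresource of a Deployment would then read c.Update(ctx, deploy, "status").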
I tried to write unit tests using the fake client, and List() panics reliably when accessing opts.Raw:
controller-runtime/pkg/client/fake/client.go
Lines 80 to 82 in ddf0390
Why must opts.Raw be filled (it's not necessary when using the real client), and why should its TypeMeta have any value?
If I understand the code (and I probably don't), it expects opts.Raw.TypeMeta.GroupVersionKind() to be the kind of the items in the returned list, not the list itself (i.e. Pod instead of PodList). Such a requirement looks very odd.
When adding a Channel to a controller by calling Watch(), InjectStopChannel gets called with a value of nil instead of a real channel.
This appears to be because nothing sets the value of stop before Watch() gets called. The value of stop only gets set once the manager's Start() method is called.
Here's the order of operations:
1. The Watch() method gets called with the new Channel as an argument.
2. The manager's Start() method gets called, with a stop channel being passed in. This is the opportunity for a stop channel to be provided by a controller author.
3. The manager's Start() method calls each controller's Start() method, also passing through the stop channel.
When Watch() is called, the manager's stop channel gets injected into the Channel. But the manager's stop channel doesn't get set until the manager's Start() is called, so at injection time it has a nil value.
The controller author can call InjectStopChannel directly and pass it the same channel they'll later pass to the manager's Start() method. But the InjectStopChannel method is clearly documented as not being intended for this purpose, nor is this approach helpful to the controller author.
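For illustration, a minimal sketch of that workaround against the API of this era (assuming c is the controller, mgr the manager, and pkg/runtime/signals provides the stop channel):

events := make(chan event.GenericEvent)
src := &source.Channel{Source: events}

// Create the stop channel up front and hand the same channel to the source
// now and to the manager's Start later.
stop := signals.SetupSignalHandler()
if err := src.InjectStopChannel(stop); err != nil {
	return err
}
if err := c.Watch(src, &handler.EnqueueRequestForObject{}); err != nil {
	return err
}

// ... and eventually:
if err := mgr.Start(stop); err != nil {
	return err
}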
Currently the manager won't do any kind of leader election. I think that the manager should implement it so that you can run highly available controllers.
Almost all Controllers have boilerplate at the top to check if the object has been deleted. Figure out a way to reduce this.
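For reference, the boilerplate in question is typically the first thing in Reconcile and looks roughly like this (appv1.MyKind is a placeholder for the controller's own type):

instance := &appv1.MyKind{}
err := r.Get(context.TODO(), request.NamespacedName, instance)
if err != nil {
	if errors.IsNotFound(err) {
		// The object was deleted after the request was enqueued; nothing to do.
		return reconcile.Result{}, nil
	}
	// Error reading the object; requeue the request.
	return reconcile.Result{}, err
}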
I would like to add prometheus metrics to the controllers I'm building using kubebuilder.
It would be good to have information both from the controller internals themselves as well as from the reconciler loops I'm implementing.
My initial idea is to add a Prometheus metric registry to the controller manager that it can use itself and that reconcile.Reconcilers can use to register their own metrics, and then have the controller manager start the metrics server when Start is called.
Does this seem reasonable or should people just handle metrics on their own? PR welcome?
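To make the idea concrete, a rough sketch using the prometheus and promhttp packages from github.com/prometheus/client_golang (the registry living on the manager is the hypothetical part; metric names are placeholders):

// Hypothetically owned by the manager and handed to Reconcilers on request.
registry := prometheus.NewRegistry()

reconcileTotal := prometheus.NewCounter(prometheus.CounterOpts{
	Name: "my_controller_reconcile_total",
	Help: "Number of Reconcile invocations.",
})
registry.MustRegister(reconcileTotal)

// The manager would start serving this when Start is called.
http.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
go func() { _ = http.ListenAndServe(":8080", nil) }()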
Coverage results went down as a result of PR #6
Fix the coverage gaps that were introduced:
go test ./pkg/... -coverprofile cover.out -parallel 4 | grep -v "coverage: 100.0% of statements" | grep -v "pkg/admission/certprovisioner\|pkg/internal/admission\|pkg/cache\|pkg/client\|pkg/event\|pkg/client/config\|pkg/controller/controllertest\|pkg/reconcile/reconciletest\|\|pkg/runtime/inject\|pkg/runtime/log\|\|pkg/runtime/signals\|pkg/test\|pkg/runtime/inject\|pkg/runtime/signals"
ok github.com/kubernetes-sigs/controller-runtime/pkg/manager 7.544s coverage: 98.6% of statements
ok github.com/kubernetes-sigs/controller-runtime/pkg/source 7.257s coverage: 94.1% of statements
Reactors are a useful feature of the official client's fake implementation. They allow injecting errors, simulating the actions of other controllers, and tracking the actions of the controller under test instead of just the output. IIUC, none of those are possible with the current fake client.
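For comparison, this is roughly what a reactor looks like with the official fake clientset from k8s.io/client-go (sketch of a test; the controller-runtime fake client has no equivalent hook):

import (
	"errors"
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
	ktesting "k8s.io/client-go/testing"
)

func TestCreateFailure(t *testing.T) {
	cs := fake.NewSimpleClientset()
	// Fail every Pod creation so the controller's error path can be exercised.
	cs.PrependReactor("create", "pods", func(action ktesting.Action) (bool, runtime.Object, error) {
		return true, nil, errors.New("injected failure")
	})
	// ... run the code under test against cs and assert on how it handles the error ...
}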
Create e2e tests for webhook lib
Opening to track the remaining issue from #89. dep ensure works now, but Mercurial is still an undocumented dependency.
When I run go test ./pkg/..., I get a bunch of errors that seem to be complaining about a missing /usr/local/bin/kubebuilder/bin/etcd executable:
fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory
So I try to run ./test.sh, but it exits immediately:
$ ./test.sh
using tools
Are there any docs on how to run tests?
https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#OwnerReference
OwnerReference contains enough information to let you identify an owning object. Currently, an owning object must be in the same namespace, so there is no namespace field
The admission controller secret writer sets the webhook as owner for the generated secret even if they are in different namespaces. I don't know if it's by design but it looks like a bug.
controller-runtime/pkg/admission/cert/writer/secret.go
Lines 136 to 151 in 5a961ac
I found it via #99, which enforces a check between owner and object.
Add linters to the .travis.yaml gometalinter.v2 script line using --enable. Fix issues as needed to make the linters pass.
Provide a pure server mode to run the webhook without bootstrapping.
If users choose this mode, they first do a dry run with the webhook server to get a set of YAML files.
Applying those YAML files then installs the webhookConfiguration, secret, service, etc.
Broken out from #106 (comment).
The fake client can only use scheme.Scheme as the scheme. It should allow using an arbitrary Scheme.
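A sketch of the API being asked for (NewFakeClientWithScheme is a hypothetical name here; today only NewFakeClient exists and it is hard-wired to scheme.Scheme, and myv1alpha1 stands in for a project's own API group):

s := runtime.NewScheme()
_ = appsv1.AddToScheme(s)     // built-in types the test needs
_ = myv1alpha1.AddToScheme(s) // the project's own types

// Hypothetical constructor taking a caller-supplied scheme.
c := fake.NewFakeClientWithScheme(s, initialObjects...)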
A test in sigs.k8s.io/controller-runtime/pkg/internal/controller sometimes fails.
I checked out the master branch, currently 53fc44b. Then I ran:
TRACE=1 ./hack/check-everything.sh
I got a failed test. I immediately ran it again, and the test passed. The full results for the failed run:
$ TRACE=1 ./hack/check-everything.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ k8s_version=1.10.1
+ goarch=amd64
+ goos=unknown
+ [[ linux-gnu == \l\i\n\u\x\-\g\n\u ]]
+ goos=linux
+ [[ linux == \u\n\k\n\o\w\n ]]
+ tmp_root=/tmp
+ kb_root_dir=/tmp/kubebuilder
+ SKIP_FETCH_TOOLS=
+ header_text 'using tools'
+ echo 'using tools'
using tools
+ which gometalinter.v2
/home/mhrivnak/golang/bin/gometalinter.v2
+ fetch_kb_tools
+ '[' -n '' ']'
+ header_text 'fetching tools'
+ echo 'fetching tools'
fetching tools
+ kb_tools_archive_name=kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ kb_tools_download_url=https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ kb_tools_archive_path=/tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ '[' '!' -f /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz ']'
+ curl -sL https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.10.1-linux-amd64.tar.gz -o /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ tar -zvxf /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz -C /tmp/
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/gen-apidocs
kubebuilder/bin/openapi-gen
kubebuilder/bin/lister-gen
kubebuilder/bin/informer-gen
kubebuilder/bin/client-gen
kubebuilder/bin/conversion-gen
kubebuilder/bin/deepcopy-gen
kubebuilder/bin/defaulter-gen
kubebuilder/bin/kube-controller-manager
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
kubebuilder/bin/etcd
+ setup_envs
+ header_text 'setting up env vars'
+ echo 'setting up env vars'
setting up env vars
+ [[ -z '' ]]
+ export KUBEBUILDER_ASSETS=/tmp/kubebuilder/bin
+ KUBEBUILDER_ASSETS=/tmp/kubebuilder/bin
+ ./hack/verify.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ header_text 'running go vet'
+ echo 'running go vet'
running go vet
+ go vet ./pkg/...
+ header_text 'running gometalinter.v2'
+ echo 'running gometalinter.v2'
running gometalinter.v2
+ gometalinter.v2 --disable-all --deadline 5m --enable=misspell --enable=structcheck --enable=golint --enable=deadcode --enable=goimports --enable=errcheck --enable=varcheck --enable=goconst --enable=unparam --enable=ineffassign --enable=nakedret --enable=interfacer --enable=misspell --enable=gocyclo --line-length=170 --enable=lll --dupl-threshold=400 --enable=dupl --skip=atomic ./pkg/...
+ ./hack/test-all.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ setup_envs
+ header_text 'setting up env vars'
+ echo 'setting up env vars'
setting up env vars
+ [[ -z /tmp/kubebuilder/bin ]]
+ header_text 'running go test'
+ echo 'running go test'
running go test
+ go test ./pkg/... -parallel 4
? sigs.k8s.io/controller-runtime/pkg [no test files]
ok sigs.k8s.io/controller-runtime/pkg/builder 12.764s
ok sigs.k8s.io/controller-runtime/pkg/cache 21.267s
? sigs.k8s.io/controller-runtime/pkg/cache/informertest [no test files]
? sigs.k8s.io/controller-runtime/pkg/cache/internal [no test files]
ok sigs.k8s.io/controller-runtime/pkg/client 46.047s
? sigs.k8s.io/controller-runtime/pkg/client/apiutil [no test files]
? sigs.k8s.io/controller-runtime/pkg/client/config [no test files]
ok sigs.k8s.io/controller-runtime/pkg/client/fake 0.029s
ok sigs.k8s.io/controller-runtime/pkg/controller 12.362s
? sigs.k8s.io/controller-runtime/pkg/controller/controllertest [no test files]
ok sigs.k8s.io/controller-runtime/pkg/controller/controllerutil 10.605s
ok sigs.k8s.io/controller-runtime/pkg/envtest 10.071s
? sigs.k8s.io/controller-runtime/pkg/envtest/printer [no test files]
? sigs.k8s.io/controller-runtime/pkg/event [no test files]
ok sigs.k8s.io/controller-runtime/pkg/handler 0.034s
ok sigs.k8s.io/controller-runtime/pkg/internal/admission 0.007s [no tests to run]
Running Suite: Controller Integration Suite
===========================================
Random Seed: 1538060558
Will run 23 of 23 specs
•••••••••••••••••
------------------------------
• Failure [0.001 seconds]
controller
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:40
Processing queue items from a Controller
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:261
should requeue a Request if the Result sets Requeue:true and continue processing items [It]
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:329
Expected
<int>: 0
to equal
<int>: 1
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:346
------------------------------
2018-09-27T11:02:48.195-0400 INFO kubebuilder.controller Starting Controller {"Controller": ""}
2018-09-27T11:02:48.195-0400 INFO kubebuilder.controller Starting workers {"Controller": "", "WorkerCount": 1}
STEP: Invoking Reconciler which will ask for requeue
2018-09-27T11:02:48.196-0400 INFO kubebuilder.controller Stopping workers {"Controller": ""}
•••••
Summarizing 1 Failure:
[Fail] controller Processing queue items from a Controller [It] should requeue a Request if the Result sets Requeue:true and continue processing items
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:346
Ran 23 of 23 Specs in 9.660 seconds
FAIL! -- 22 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestSource (9.66s)
FAIL
FAIL sigs.k8s.io/controller-runtime/pkg/internal/controller 9.678s
ok sigs.k8s.io/controller-runtime/pkg/internal/recorder 11.082s
? sigs.k8s.io/controller-runtime/pkg/leaderelection [no test files]
? sigs.k8s.io/controller-runtime/pkg/leaderelection/fake [no test files]
ok sigs.k8s.io/controller-runtime/pkg/manager 11.825s
? sigs.k8s.io/controller-runtime/pkg/patch [no test files]
? sigs.k8s.io/controller-runtime/pkg/patterns/application [no test files]
? sigs.k8s.io/controller-runtime/pkg/patterns/operator [no test files]
ok sigs.k8s.io/controller-runtime/pkg/predicate 0.027s
ok sigs.k8s.io/controller-runtime/pkg/reconcile 0.011s
? sigs.k8s.io/controller-runtime/pkg/reconcile/reconciletest [no test files]
? sigs.k8s.io/controller-runtime/pkg/recorder [no test files]
ok sigs.k8s.io/controller-runtime/pkg/runtime/inject 0.013s
ok sigs.k8s.io/controller-runtime/pkg/runtime/log 0.021s
ok sigs.k8s.io/controller-runtime/pkg/runtime/scheme 0.014s
ok sigs.k8s.io/controller-runtime/pkg/runtime/signals 1.016s
ok sigs.k8s.io/controller-runtime/pkg/source 8.748s
ok sigs.k8s.io/controller-runtime/pkg/source/internal 0.023s
? sigs.k8s.io/controller-runtime/pkg/webhook [no test files]
ok sigs.k8s.io/controller-runtime/pkg/webhook/admission 0.064s
? sigs.k8s.io/controller-runtime/pkg/webhook/admission/builder [no test files]
? sigs.k8s.io/controller-runtime/pkg/webhook/admission/types [no test files]
ok sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert 0.040s
ok sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/generator 1.007s
? sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/generator/fake [no test files]
ok sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/writer 1.398s
ok sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/writer/atomic 0.075s
? sigs.k8s.io/controller-runtime/pkg/webhook/types [no test files]
In pkg/client/config/config.go there's a check to see if KUBECONFIG is set, but then it is not used:
if len(os.Getenv("KUBECONFIG")) > 0 {
	return clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
}
When using this from kubebuilder, it worked for me once I changed the above to:
if len(os.Getenv("KUBECONFIG")) > 0 {
	return clientcmd.BuildConfigFromFlags(masterURL, os.Getenv("KUBECONFIG"))
}
go test ./pkg/client/... -coverprofile cover.out
go tool cover -html cover.out
Currently, in order to create a dependent resource (e.g. a Deployment), the code always looks like:
deploy := &appsv1.Deployment{
	ObjectMeta: metav1.ObjectMeta{
		Name:      instance.Name + "-deployment",
		Namespace: instance.Namespace,
	},
	Spec: appsv1.DeploymentSpec{
		// ...
	},
}
if err := controllerutil.SetControllerReference(instance, deploy, r.scheme); err != nil {
	return reconcile.Result{}, err
}
// Check if the Deployment already exists
found := &appsv1.Deployment{}
err = r.Get(context.TODO(), types.NamespacedName{Name: deploy.Name, Namespace: deploy.Namespace}, found)
if err != nil && errors.IsNotFound(err) {
	log.Printf("Creating Deployment %s/%s\n", deploy.Namespace, deploy.Name)
	err = r.Create(context.TODO(), deploy)
	if err != nil {
		return reconcile.Result{}, err
	}
} else if err != nil {
	return reconcile.Result{}, err
}
// Update the found object and write the result back if there are any changes
if !reflect.DeepEqual(deploy.Spec, found.Spec) {
	found.Spec = deploy.Spec
	log.Printf("Updating Deployment %s/%s\n", deploy.Namespace, deploy.Name)
	err = r.Update(context.TODO(), found)
	if err != nil {
		return reconcile.Result{}, err
	}
}
Which is about 50 lines of boilerplate.
What I've found very handy when working on controllers were the CreateOrPatch methods from https://github.com/appscode/kutil.
Do you think something similar could be implemented in the generic client?
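To illustrate, a minimal sketch of such a helper, specialized to Deployments for brevity (names and signature are hypothetical; kutil's CreateOrPatch helpers are patch-based and more general):

// createOrUpdateDeployment creates desired if it does not exist, otherwise
// copies the desired Spec onto the existing object when it differs.
func createOrUpdateDeployment(ctx context.Context, c client.Client, desired *appsv1.Deployment) error {
	found := &appsv1.Deployment{}
	key := types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}
	if err := c.Get(ctx, key, found); err != nil {
		if errors.IsNotFound(err) {
			return c.Create(ctx, desired)
		}
		return err
	}
	if !reflect.DeepEqual(desired.Spec, found.Spec) {
		found.Spec = desired.Spec
		return c.Update(ctx, found)
	}
	return nil
}

The boilerplate above would then shrink to a SetControllerReference call plus a single createOrUpdateDeployment call.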
The controller-runtime client doesn't seem to support passing DeleteOptions in the body of a delete request.
Use case: when a Job is deleted, its pods are normally orphaned by the GC. To delete a Job and GC its pods with the normal client-go, I pass a DeleteOptions with PropagationPolicy set. I can't do that with the controller-runtime client.
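For reference, this is what the same delete looks like with plain client-go of this vintage; the controller-runtime client currently offers no way to pass the equivalent options:

policy := metav1.DeletePropagationForeground
err := clientset.BatchV1().Jobs(namespace).Delete(jobName, &metav1.DeleteOptions{
	PropagationPolicy: &policy,
})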
RawExtension.Object should get either the Go struct or an unstructured.Unstructured object from the client.
Steps to reproduce:
Object populated with a Deployment, but it is nil.
Currently, the ListWatch for the cache's informers is non-namespaced.
https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/cache/internal/informers_map.go#L218-L227
This means the Manager always requires cluster scoped permissions to work. While kubebuilder uses ClusterRole and ClusterRolebinding by default, that assumption isn't always true for an operator/controller (at least not in our context with the operator-sdk).
With just a Role and Rolebinding, the informers fail to list resources at the cluster scope.
E0828 23:41:19.472228 1 reflector.go:205] github.com/operator-framework/operator-sdk-samples/app-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:106: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:haseeb:default" cannot list pods at the cluster scope
E0828 23:41:20.141658 1 reflector.go:205] github.com/operator-framework/operator-sdk-samples/app-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:106: Failed to list *v1alpha1.App: apps.app.example.com is forbidden: User "system:serviceaccount:haseeb:default" cannot list apps.app.example.com at the cluster scope
Unless this is already supported or I've missed an easier way to do this, I've found that I can easily pipe the namespace down as an option from the Manager->Cache->InformersMap->ListWatch.
mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})
Possible fix: hasbro17@55894c2
That fixes the permissions issue as the ListWatch requests are now restricted to the desired namespace.
And in the default case of not specifying a namespace the ListWatch goes back to making cluster-scoped requests.
https://github.com/kubernetes/client-go/blob/master/rest/request.go#L424
go test ./pkg/cache/... -coverprofile cover.out
go tool cover -html cover.out
I'm trying to set the logger verbosity from the command line through a -v flag, but how can I set the logging level? Currently, I'm initializing the logger like:
package main

import (
	logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
)

var log = logf.Log.WithName("sidecar")

func main() {
	logf.SetLogger(logf.ZapLogger(true))
	log.V(2).Info("msg")
}
but for any verbosity level (V(2), V(3), V(4), ...) there is no output. What is the best way to set the verbosity level for this logger?
Thank you!
From the code, Manager.GetClient returns a DelegatingClient, which reads from the cache for structured objects but reads directly from the API server for unstructured objects.
This prevents us from using the client before the controller loops start (since the cache needs to be synced first). (Our use case is loading configuration from a ConfigMap before the controller loops start.)
From my perspective, users should use manager.GetCache if they need caching behavior (and we can do the delegation there to read directly for unstructured objects).
For manager.GetClient, it's better to let it always read directly without the cache. We already provide a caching client for controllers.
As @pwittrock pointed out, we should provide a non-caching client: if the cache becomes out of date, an admission webhook may make wrong decisions based on stale objects. A non-caching client would be helpful in this case.
Adding a watch on channels, as mentioned in the kubebuilder docs, on a freshly generated controller fails when running tests with the following error:
testing_t_support.go:22:
/home/u/go/src/github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc42039ba40, 0x1332420, 0x1b92b68, 0x0, 0x0, 0x0, 0x1342260)
/home/u/go/src/github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43 +0xae
github.com/presslabs/mysql-operator/pkg/controller/mysqlbackup.TestReconcile(0xc42010e1e0)
/home/u/go/src/github.com/presslabs/mysql-operator/pkg/controller/mysqlbackup/mysqlbackup_controller_test.go:53 +0x2f5
testing.tRunner(0xc42010e1e0, 0x12ae268)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
Expected error:
<*errors.errorString | 0xc4200bf5d0>: {
s: "must call InjectStop on Channel before calling Start",
}
must call InjectStop on Channel before calling Start
not to have occurred
The changes that I made to a fresh generated controller are:
diff --git a/pkg/controller/mysqlbackup/mysqlbackup_controller.go b/pkg/controller/mysqlbackup/mysqlbackup_controller.go
index bebdfce8..7935c0fb 100644
--- a/pkg/controller/mysqlbackup/mysqlbackup_controller.go
+++ b/pkg/controller/mysqlbackup/mysqlbackup_controller.go
@@ -35,6 +35,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
+
+ "sigs.k8s.io/controller-runtime/pkg/event"
)
/**
@@ -78,6 +80,15 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error {
return err
}
+ events := make(chan event.GenericEvent)
+ err = c.Watch(
+ &source.Channel{Source: events},
+ &handler.EnqueueRequestForObject{},
+ )
+ if err != nil {
+ return err
+ }
+
return nil
}
What happens: when the Channel source is registered in the Watch method, the stop channel is injected, but a nil channel is injected.
All exported methods on the client interface take a context.Context parameter, but the methods don't implement the deadline semantics.
I'm currently integrating multicluster-controller with controller-runtime's webhook package, which, unfortunately, depends on the manager package.
Multicluster-controller doesn't use controller-runtime's manager package, but a more lightweight version that only orchestrates runnables (caches and controllers), while the cluster dependencies are extracted into multiple Cluster structs.
Controller-runtime's webhook package depends on the manager.Manager interface, even though it only needs a fraction of it, namely Add(Runnable) for the server, and GetScheme() and GetRESTMapper() for the builder.
I could create an ad-hoc implementation of manager.Manager on my side, with lots of panics in the methods webhook doesn't need, forwarding the three that are actually needed to a multicluster-controller manager and cluster, but I believe there is a cleaner solution, which could also benefit others.
Define small interfaces where they're needed. They will be implemented implicitly by controller-runtime's controllerManager struct, and could be implemented by third-party packages.
// in pkg/webhook/server.go
type Manager interface {
	Add(Runnable) error
}

// in pkg/webhook/admission/builder/builder.go
type Manager interface {
	GetScheme() *runtime.Scheme
	GetRESTMapper() meta.RESTMapper
}
It should also make testing easier, and would be arguably more idiomatic: https://blog.chewxy.com/2018/03/18/golang-interfaces/
If you agree to this proposed change, I can submit a PR.
Code under test may expect that setting PropagationPolicy to Foreground will cause child objects to be deleted before returning. Currently the fake client doesn't implement this, nor does it offer a hook to allow this behavior to be simulated (see #72).
The runtime/log.go file has a compilation issue, probably introduced in PR #71:
vendor/sigs.k8s.io/controller-runtime/pkg/runtime/log/log.go:10:2: imported and not used: "cnrm-kube/vendor/github.com/go-logr/zapr"
vendor/sigs.k8s.io/controller-runtime/pkg/runtime/log/log.go:28:9: undefined: zaplogr
We should be able to watch unstructured objects via source.Kind.
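A sketch of what that could look like from the caller's side (assuming the injection machinery learns to handle unstructured types; Deployment is just a stand-in kind):

u := &unstructured.Unstructured{}
u.SetGroupVersionKind(schema.GroupVersionKind{
	Group:   "apps",
	Version: "v1",
	Kind:    "Deployment",
})
err := c.Watch(&source.Kind{Type: u}, &handler.EnqueueRequestForObject{})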
Cache supports a resync period option, but Manager doesn't allow setting it. This makes it impossible to change the default resync interval of 10 hours.
Adding a Resync field to manager.Options and passing it through to newCache would fix this (see the sketch below).
/cc @vaikas-google
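A sketch of the proposed option from the user's side (the Resync field is hypothetical; it does not exist in manager.Options today):

resync := 30 * time.Minute
mgr, err := manager.New(cfg, manager.Options{
	Resync: &resync, // hypothetical field, passed through to newCache
})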
When I try to run dep ensure, I get this warning, plus the command never returns:
Warning: the following project(s) have [[constraint]] stanzas in Gopkg.toml:
✗ k8s.io/kube-aggregator
However, these projects are not direct dependencies of the current project:
they are not imported in any .go files, nor are they in the 'required' list in
Gopkg.toml. Dep only applies [[constraint]] rules to direct dependencies, so
these rules will have no effect.
Either import/require packages from these projects so that they become direct
dependencies, or convert each [[constraint]] to an [[override]] to enforce rules
on these projects, if they happen to be transitive dependencies.
When I Ctrl-C, I get this error:
grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export bitbucket.org/ww/goautoneg:
(1) hg is not installed:
(2) hg is not installed:
(3) hg is not installed:
(4) failed to list versions for https://bitbucket.org/ww/goautoneg: remote: Not Found
fatal: repository 'https://bitbucket.org/ww/goautoneg/' not found
: exit status 128
(5) failed to list versions for ssh://[email protected]/ww/goautoneg: : signal: interrupt
(6) context canceled
(7) context canceled
When I install Mercurial, the warning remains but the command finishes.
Go files should have a license header at the beginning.
At least the following two files don't have one:
https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/builder/build_test.go
https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/builder/builder_suite_test.go
All exported methods on the client interface take a context.Context parameter, but the methods don't implement the deadline semantics.
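Concretely, the expectation is that a call like the following would fail with context.DeadlineExceeded once the deadline passes, instead of the deadline being silently ignored (sketch):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

pod := &corev1.Pod{}
err := c.Get(ctx, types.NamespacedName{Namespace: "default", Name: "example"}, pod)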
Sometimes a Reconciler needs workqueue-related metrics, e.g. numRequeues, to decide the fate of a request, so it would be useful to plumb those into the Request object.
Right now I am getting this log when starting a watch on a source of type Channel:
{"level":"info","ts":1537378068.766453,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"database-controller","SourceError":"json: unsupported type: <-chan event.GenericEvent"}
It's not clear to me why the vendor directory exists in source code in the first place... deps will pull and "install" those as needed...
For what it's worth, I'm trying to consume controller runtime in a library project that uses glide and somehow go runtime hasn't figured out that libraries in controller-runtime/vendor/ are actually the same as the ones in my library's vendor directory and hence get type mismatching problems all over...
The Channel source expects the stop channel to be injected before calling Start, but the stop channel on the controller manager will be nil until ControllerManager#Start is called. Since most people will probably call Controller#Watch before ControllerManager#Start, this breaks the channel source.
I've got a patch to fix it in the works, but wanted to file this so that I don't forget.
I cannot find any open issue about the admissionwebhook branch. What is its status? What prevents it from being merged?
controller-runtime/pkg/source/source.go
Lines 89 to 90 in a8ea205
This seems to not be using logr correctly. I got the following error when I hit this line:
{"level":"dpanic","ts":1540850293.0815783,"logger":"kubebuilder.source","caller":"zapr/zapr.go:129","msg":"odd number of arguments passed as key-value pairs for logging" ...
Is there a plan to allow the inject interfaces to inject arbitrary dependencies? Or maybe there's a better way to do this?
Concrete use case: I want reconcilers to have access to a logger instance, but it's not a logr logger, so I can't use the log promise feature, and I'd like to avoid relying on package variables if possible. Also, I'd like each controller to have the same initialization signature ProvideController(manager.Manager, <something>) (and I acknowledge this might be a silly requirement).
@droot maybe you have some context on plans for inject.