
argoproj / argo-events


Event-driven Automation Framework for Kubernetes

Home Page: https://argoproj.github.io/argo-events/

License: Apache License 2.0

Makefile 0.49% Go 98.26% Shell 0.98% Dockerfile 0.07% Smarty 0.20%
argo automation-framework cloud-native cloudevents event-driven event-source eventing-framework kubernetes pipelines triggers workflow-automation workflows

argo-events's Introduction


Argoproj - Get stuff done with Kubernetes


What is Argoproj?

Argoproj is a collection of tools for getting work done with Kubernetes.

  • Argo Workflows - Container-native Workflow Engine
  • Argo CD - Declarative GitOps Continuous Delivery
  • Argo Events - Event-based Dependency Manager
  • Argo Rollouts - Progressive Delivery with support for Canary and Blue Green deployment strategies

Also, argoproj-labs is a separate GitHub org that we set up for community contributions related to the Argoproj ecosystem. Repos in argoproj-labs are administered by the owners of each project. Please reach out to us on the Argo Slack channel if you have a project that you would like to add to the org, to make it easier for others in the Argo community to find, use, and contribute back.

Community Blogs and Presentations

Project-specific community blogs and presentations are listed in the respective project repositories.

Adopters

Each Argo sub-project maintains its own list of adopters. Those lists are available in the respective project repositories:

Contributing

To learn about how to contribute to Argoproj, see our contributing documentation. Argo contributors must follow the CNCF Code of Conduct.

For help contributing, visit the #argo-contributors channel in CNCF Slack.

To learn about Argoproj governance, see our community governance document.


argo-events's People

Contributors

34fathombelow, alexec, blkperl, chaseterry, daniel-codefresh, dependabot[bot], devstein, dfarr, dpadhiar, dtaniwaki, eduardodbr, github-actions[bot], gokulav137, hobti01, juliev0, magaldima, marxarelli, matt-magaldi, nstott, saradhis, shashwat-appdirect, shrinandj, tczhao, terrytangyuan, tmshn, vaibhavpage, whynowy, workflow, zachaller, zhaque44


argo-events's Issues

classify stream signals as a common type

Is your feature request related to a problem? Please describe.
types.go is growing fairly large and complex with all the different types of signals. I foresee potential problems with maintaining and keeping track of so many configurations for different signal types. In addition, this moves the project in the right direction for making signals a plugin feature.

Describe the solution you'd like
My goal here is to classify or group all of the message stream signals into a single StreamSignal specification. This spec should be of the form:

// Stream describes a queue stream resource
type Stream struct {
	// Type of the stream resource
	Type string `json:"type" protobuf:"bytes,1,opt,name=type"`

	// URL is the exposed endpoint for client connections to this service
	URL string `json:"url" protobuf:"bytes,2,opt,name=url"`

	// Attributes contains additional fields specific to each service implementation
	Attributes map[string]string `json:"attributes,omitempty" protobuf:"bytes,3,rep,name=attributes"`
}
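For illustration, two different message streams could then share this one spec, differing only in Type and Attributes. A minimal sketch (the NATS values come from other examples in this document; the Kafka values are made up):

// Hypothetical: two different message streams expressed through the
// single Stream spec above (the Kafka values are illustrative).
var (
	natsSignal = Stream{
		Type:       "NATS",
		URL:        "nats://example-nats-cluster:4222",
		Attributes: map[string]string{"subject": "bucketevents"},
	}
	kafkaSignal = Stream{
		Type:       "KAFKA",
		URL:        "kafka.default:9092", // hypothetical broker address
		Attributes: map[string]string{"topic": "bucketevents"},
	}
)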

Describe alternatives you've considered
The alternative is to keep what we have and make each type explicitly declared, however this limits the possible implementations.

Add webhook sensor

Is your feature request related to a problem? Please describe.
A new type of sensor where a user can define REST endpoints and the triggers for the corresponding REST methods. This might change the sensor spec a bit.

Describe the solution you'd like

apiVersion: core.events/v1alpha1
kind: Sensor
metadata:
  name: webhook-example
  labels:
    sensors.core.events/controller-instanceid: axis
spec:
  signals:
    - name:
      webhook:
        method: GET
        endpoint: "/"
      triggerName: indexGetTrigger
    - name:
      webhook:
        method: POST
        endpoint: "/"
      triggerName: indexPostTrigger
  triggers:
    - name: indexGetTrigger
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        artifactLocation:
          s3:
            bucket: workflows
            key: hello-world.yaml
            endpoint: artifacts-minio.default:9000
            insecure: true
            accessKey:
              key: accesskey
              name: artifacts-minio
            secretKey:
              key: secretkey
              name: artifacts-minio
    - name: indexPostTrigger
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        artifactLocation:
          s3:
            bucket: workflows
            key: coinflip.yaml
            endpoint: artifacts-minio.default:9000
            insecure: true
            accessKey:
              key: accesskey
              name: artifacts-minio
            secretKey:
              key: secretkey
              name: artifacts-minio

Sensor with resource signal for config-map fails

Describe the bug
I created a sensor that was supposed to run a workflow when a config-map was created in the default namespace. However, when the config-map was created, the sensor controller failed with the following error:

time="2018-08-06T23:37:29Z" level=info msg="sensor message  -> listening for signal events" namespace=default sensor=resource-example-l4kvz
time="2018-08-06T23:37:29Z" level=error msg="Event Stream (resource-example-l4kvz/worklow-1) Msg: (Action:IGNORED) - Failed to filter event: unsupported event content type: "

To Reproduce
Steps to reproduce the behavior:

  1. Create the following sensor
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  generateName: resource-example-
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
    - name: worklow-1
      resource:
        namespace: default
        group: ""
        version: "v1"
        kind: "ConfigMap"
        filter:
          prefix: my-cm
  triggers:
    - name: ns-workflow
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        source:
          inline: |
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: hello-world-
              spec:
                entrypoint: whalesay
                templates:
                  -
                    container:
                      args:
                        - "hello world"
                      command:
                        - cowsay
                      image: "docker/whalesay:latest"
                    name: whalesay
  2. Create the following config-map
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm
data:
  my-life: |
      my-rules

Expected behavior

  1. After the config-map is created, the hello-world workflow should be executed.

event log storage

Feature Gap
Currently, we do not store a log of the signal events in the sensor status as this would cause the sensor object to grow in size. We only store the latest event for a certain signal in the sensor status. These events should be persisted somewhere in order to allow replaying of events + signals and for repeatability purposes.

Design Requirements

  • Persistent storage with backup (guarantee that we don't lose an event)
  • CRUD actions
  • Query by sensor name and time

Proposal
We should implement an event-transaction-log storage interface that users can implement to expose a data sink for storing signal events (a sketch follows the list below).
Implementation ideas:

  • Event CRD (on etcd) - see #35
  • FluentD
  • ElasticSearch
  • Logrus -> route to sink?
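Whatever the sink, the interface itself could stay small. A minimal sketch, assuming an Event type like the ones signals already emit (all names are illustrative, not a committed design):

// EventLog is an illustrative event-transaction-log interface; each
// implementation (Event CRD on etcd, FluentD, ElasticSearch, ...) is a
// data sink behind it.
type EventLog interface {
	// Store persists a signal event; implementations must not lose events.
	Store(sensorName string, event *Event) error
	// Query returns the events for a sensor within the [start, stop) window.
	Query(sensorName string, start, stop time.Time) ([]*Event, error)
	// Delete removes a single stored event by its ID.
	Delete(sensorName, eventID string) error
}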

Sensor-controller throws errors about wrong config map version

Describe the bug
I keep seeing the sensor-controller spewing log messages about old config map version even though there were no changes made to the config-map.

To Reproduce
Steps to reproduce the behavior:

  1. Run the sensor-controller.
  2. The sensor-controller pod logs the following messages:
ERROR: logging before flag.Parse: W0806 16:24:00.173164       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 608242 (608445)
time="2018-08-06T16:24:01Z" level=info msg="detected ConfigMap update. updating the controller config."
ERROR: logging before flag.Parse: W0806 16:36:04.188869       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 609334 (609828)
time="2018-08-06T16:36:05Z" level=info msg="detected ConfigMap update. updating the controller config."
ERROR: logging before flag.Parse: W0806 16:45:17.198521       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 610721 (610887)
time="2018-08-06T16:45:18Z" level=info msg="detected ConfigMap update. updating the controller config."
ERROR: logging before flag.Parse: W0806 16:54:28.208333       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 611778 (611949)
time="2018-08-06T16:54:29Z" level=info msg="detected ConfigMap update. updating the controller config."
ERROR: logging before flag.Parse: W0806 17:06:17.222437       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 612832 (613300)
time="2018-08-06T17:06:18Z" level=info msg="detected ConfigMap update. updating the controller config."
ERROR: logging before flag.Parse: W0806 17:14:14.231619       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:62: watch of *v1.ConfigMap ended with: too old resource version: 614188 (614209)
time="2018-08-06T17:14:15Z" level=info msg="detected ConfigMap update. updating the controller config."

Expected behavior
There should be no such errors in the logs.

webhook signals should support TLS

Is your feature request related to a problem? Please describe.
We should support secure webhooks.

Describe the solution you'd like
@shrinandj outlined the following suggestions in #68 (a sketch of the second option follows the list):

  • Self-signed cert (the webhook signal pod will simply create its own cert and private key)
  • User provided cert and key (maybe volume mounted in the webhook signal pod?)
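A minimal sketch of the second option, assuming the user's cert and key are volume-mounted from a Kubernetes Secret (the paths and port below are illustrative assumptions, not the project's actual layout):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Hypothetical mount paths for a user-provided Secret containing
	// the TLS certificate and private key.
	const certFile = "/etc/webhook/tls/tls.crt"
	const keyFile = "/etc/webhook/tls/tls.key"

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // acknowledge the webhook delivery
	})
	// Serve HTTPS using the provided cert and key.
	log.Fatal(http.ListenAndServeTLS(":9000", certFile, keyFile, mux))
}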

Sensor controller panics if sensor is created after the watched resource

Describe the bug
The sensor-controller panicked when a resource that the sensor depended upon already existed. More below...

To Reproduce

  1. I created a sensor that was supposed to run an Argo workflow in response to a Kubernetes secret.
$ cat examples/resource-sensor-secret.yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  generateName: resource-example-
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
    - name: secret-create
      resource:
        namespace: test
        group: ""
        version: "v1"
        kind: "Secret"
        filter:
          prefix: my-secret
  triggers:
    - name: ns-workflow
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        source:
          inline: |
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: hello-world-
              spec:
                entrypoint: whalesay
                templates:
                  -
                    container:
                      args:
                        - "hello world"
                      command:
                        - cowsay
                      image: "docker/whalesay:latest"
                    name: whalesay
  2. When the above sensor was created, the namespace test (and therefore the secret my-secret in it) did not exist.

  3. Then I created the test namespace and my-secret in it.

  4. The sensor correctly fired the trigger and the Argo workflow executed.

  5. After this, I created a second sensor identical to the above (kubectl create -f examples/resource-sensor-secret.yaml).

  6. This caused the sensor-controller to panic.

time="2018-08-08T22:06:54Z" level=error msg="runtime error: invalid memory address or nil pointer dereference" namespace=default sensor=resource-example-zz7lf
time="2018-08-08T22:06:54Z" level=error msg="recovered from panic: runtime error: invalid memory address or nil pointer dereference\ngoroutine 45 [running]:\nruntime/debug.Stack(0xc420305220, 0xc4206698b8, 
0x1)\n\t/usr/local/Cellar/go/1.10.3/libexec/src/runtime/debug/stack.go:24 +0xa7\ngithub.com/argoproj/argo-events/controller.(*sOperationCtx).operate.func1(0xc4200129a0)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-
events/controller/operator.go:70 +0x95\npanic(0x13a5de0, 0x1fa7950)\n\t/usr/local/Cellar/go/1.10.3/libexec/src/runtime/panic.go:502 +0x229\ngithub.com/argoproj/argo-events/controller.(*sOperationCtx).processSignal(0xc4200129a0, 0xc4204ce0c0, 0xd, 
0x0, 0x0, 0x0, 0x0, 0xc420305180, 0x0, 0x0, ...)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/signal.go:77 +0x2f0\ngithub.com/argoproj/argo-events/controller.(*sOperationCtx).operate(0xc4200129a0, 0x0, 
0x0)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/operator.go:88 +0x1be\ngithub.com/argoproj/argo-events/controller.(*SensorController).processNextItem(0xc42014e640, 
0xc420134600)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:118 +0x21d\ngithub.com/argoproj/argo-events/controller.
(*SensorController).runWorker(0xc42014e640)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:182 +0x2b\ngithub.com/argoproj/argo-events/controller.(*SensorController).(github.com/argoproj/argo-
events/controller.runWorker)-fm()\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:175 +0x2a\ngithub.com/argoproj/argo-
events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4204f9b20)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54\ngithub.com/argoproj/argo-
events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4204f9b20, 0x3b9aca00, 0x0, 0x1, 0x0)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd\ngithub.com/argoproj/argo-
events/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4204f9b20, 0x3b9aca00, 0x0)\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d\ncreated by github.com/argoproj/argo-
events/controller.(*SensorController).Run\n\t/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:175 +0x2f8\n" namespace=default sensor=resource-example-zz7lf

Expected behavior
Not sure what is expected here.

  1. Should the second sensor fire the trigger because the secret my-secret exists?
  2. Maybe the second sensor should simply wait forever, since the secret my-secret already exists (and the actual event of the secret's creation is already past).

make does not build executor

Describe the bug
Up to commit a781888, the make command built the controller as well as the executor. However, at the current HEAD (b20db08), make no longer builds the executor.

To Reproduce
BEFORE

make IMAGE_NAMESPACE=shrinand
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make controller
go build -v -ldflags '-X github.com/argoproj/argo-events.version=0.1.2 -X github.com/argoproj/argo-events.buildDate=2018-06-29T17:36:35Z -X github.com/argoproj/argo-events.gitCommit=a781888cc4388947b53582683a5bfe3ba6fad208 -X github.com/argoproj/argo-events.gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/sensor-controller ./cmd/sensor-controller
docker build -t shrinand/sensor-controller:latest -f ./controller/Dockerfile .
Sending build context to Docker daemon  210.6MB
Step 1/3 : FROM scratch
 --->
Step 2/3 : COPY dist/sensor-controller /
 ---> 6a45407f41cf
Step 3/3 : CMD [ "/sensor-controller" ]
 ---> Running in d1ac0ade72c5
Removing intermediate container d1ac0ade72c5
 ---> bda6d2eb002e
Successfully built bda6d2eb002e
Successfully tagged shrinand/sensor-controller:latest
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make executor-job
go build -v -ldflags '-X github.com/argoproj/argo-events.version=0.1.2 -X github.com/argoproj/argo-events.buildDate=2018-06-29T17:36:41Z -X github.com/argoproj/argo-events.gitCommit=a781888cc4388947b53582683a5bfe3ba6fad208 -X github.com/argoproj/argo-events.gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/sensor-executor ./cmd/sensor-job
docker build -t shrinand/sensor-executor:latest -f ./job/Dockerfile .
Sending build context to Docker daemon  210.6MB
Step 1/3 : FROM scratch
 --->
Step 2/3 : COPY dist/sensor-executor /
 ---> 698fb1abe7d7
Step 3/3 : CMD [ "/sensor-executor" ]
 ---> Running in 1d77647c280f
Removing intermediate container 1d77647c280f
 ---> 2412070864da
Successfully built 2412070864da
Successfully tagged shrinand/sensor-executor:latest

CURRENT HEAD

$ make IMAGE_NAMESPACE=shrinand
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make controller
go build -v -ldflags '-X .version=0.1.2 -X .buildDate=2018-06-29T17:39:21Z -X .gitCommit=b20db08cbf5ba155137f8632409c813129d607d5 -X .gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/sensor-controller ./cmd/sensor-controller
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make stream-plugins
go build -v -ldflags '-X .version=0.1.2 -X .buildDate=2018-06-29T17:39:23Z -X .gitCommit=b20db08cbf5ba155137f8632409c813129d607d5 -X .gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/plugins/nats ./signals/stream/builtin/nats
go build -v -ldflags '-X .version=0.1.2 -X .buildDate=2018-06-29T17:39:23Z -X .gitCommit=b20db08cbf5ba155137f8632409c813129d607d5 -X .gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/plugins/mqtt ./signals/stream/builtin/mqtt
go build -v -ldflags '-X .version=0.1.2 -X .buildDate=2018-06-29T17:39:23Z -X .gitCommit=b20db08cbf5ba155137f8632409c813129d607d5 -X .gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/plugins/kafka ./signals/stream/builtin/kafka
go build -v -ldflags '-X .version=0.1.2 -X .buildDate=2018-06-29T17:39:24Z -X .gitCommit=b20db08cbf5ba155137f8632409c813129d607d5 -X .gitTreeState=dirty' -o /Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/dist/plugins/amqp ./signals/stream/builtin/amqp
docker build -t shrinand/sensor-controller:latest -f ./controller/Dockerfile .
Sending build context to Docker daemon  214.9MB
Step 1/6 : FROM alpine:3.7
 ---> 3fd9065eaf02
Step 2/6 : COPY dist/sensor-controller /
 ---> de9a0f10d4cc
Step 3/6 : COPY dist/plugins/ /plugins/
 ---> 11fa86a474b7
Step 4/6 : ENV STREAM_PLUGIN_DIR=plugins
 ---> Running in 9ade3c38b227
Removing intermediate container 9ade3c38b227
 ---> 6b83a216f45f
Step 5/6 : RUN chmod -R 777 /plugins
 ---> Running in 2597a0ed7aae
Removing intermediate container 2597a0ed7aae
 ---> f2d5e4bcf3a8
Step 6/6 : CMD [ "/sensor-controller" ]
 ---> Running in 9bb65d9f760f
Removing intermediate container 9bb65d9f760f
 ---> 4ea378f39b33
Successfully built 4ea378f39b33
Successfully tagged shrinand/sensor-controller:latest

There is no explicit make target to build the executor either.

Expected behavior
Both the controller and the executor should be built.

Example workflow does not get executed

Describe the bug
I tried running the argo-events example wherein I expected an Argo workflow to be triggered based on a calendar event of time = 10 seconds. However, the workflow was not executed. More details below (this might just be a problem with my setup or the documentation).

To Reproduce
Steps to reproduce the behavior:

  1. I have a Kubernetes v1.10.0 cluster running on AWS.
  2. Built and installed the sensor-controller and related components (config-map).
  3. Installed Minio on the cluster using the helm chart.
    3.1. Verified that the minio pod was running correctly.
    3.2. Port-forwarded the port 9000 and was able to access the minio web interface.
    3.3. Created a bucket called my-test-bucket.
    3.4. Installed the command line client mc and copied the hello_world.yaml argo example inside the bucket.
  4. Installed Argo on the cluster.
    4.1. Verified that the argo controller was running.
    4.2. Ran a sample workflow using argo submit and it completed successfully.
  5. Create a sensor using the following YAML
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: cal-example
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
    - name: time
      calendar:
        interval: 10s
  triggers:
    - name: done-workflow
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        artifactLocation:
          s3:
            bucket: my-test-bucket
            key: hello_world.yaml
            endpoint: laughing-llama-minio.default:9000
            insecure: true
            accessKey:
              key: AKIAIOSFODNN7EXAMPLE
              name: artifacts-minio
            secretKey:
              key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY S3v4
              name: artifacts-minio
5.1. The bucket name, key, endpoint, accessKey and secretKey are correct.

Expected behavior
The following behavior was expected after the above:

  • The sensor-controller will find the new sensor object that got created and run a new sensor-executor job. This worked correctly, and I could see a new Job:
$ k get job
NAME                 DESIRED   SUCCESSFUL   AGE
cal-example-sensor   1         1            13m
  • There was a Kubernetes pod that was run by the job above.
$ kp
NAME                                    READY     STATUS      RESTARTS   AGE
cal-example-sensor-kn72v                0/1       Completed   0          14m
...
  • However, this did not trigger any argo workflows.
$ argo list
NAME   STATUS   AGE   DURATION
  • The logs of the sensor-executor pod do not have much information either.
$ kl cal-example-sensor-kn72v
Starting new JOB!!!
2018-06-27T05:43:52.835Z	INFO	calendar/register.go:30	creating signal	{"sensor": "cal-example", "signal": "time", "raw": "&CalendarSignal{Schedule:,Interval:10s,Recurrence:[],}"}
2018-06-27T05:43:52.835Z	DEBUG	calendar/signal.go:103	expected next calendar event	{"sensor": "cal-example", "signal": "time", "t": "2018-06-27T05:44:02.000Z"}
2018-06-27T05:44:02.000Z	DEBUG	calendar/signal.go:103	expected next calendar event	{"sensor": "cal-example", "signal": "time", "t": "2018-06-27T05:44:12.000Z"}
2018-06-27T05:44:02.000Z	DEBUG	job/signal.go:65	checking	{"sensor": "cal-example", "signal": "time", "timestamp": "2018-06-27T05:44:02.000Z", "start": "1754-08-30T22:43:41.128Z", "stop": "1754-08-30T22:43:41.128Z"}
2018-06-27T05:44:02.000Z	DEBUG	calendar/signal.go:90	sending calendar event	{"sensor": "cal-example", "signal": "time", "nodeID": "cal-example-1564253156"}
2018-06-27T05:44:02.000Z	INFO	job/executor.go:133	received event	{"sensor": "cal-example", "source": "interval: 10s", "nodeID": "cal-example-1564253156"}
2018-06-27T05:44:02.010Z	INFO	job/executor.go:142	stopped signal	{"sensor": "cal-example", "id": "cal-example-1564253156"}
2018-06-27T05:44:02.019Z	INFO	job/executor.go:158	successfully resolved all signals; executor terminating	{"sensor": "cal-example"}

Signal Stream type amqp does not work

Describe the bug
After running amqp-sensor.yaml, the sensor amqp-example is in Error phase with the message "the signal 'amqp' does not exist with the signal universe. please choose one from: [calendar resource nats webhook]"

To Reproduce
Steps to reproduce the behavior:

  1. Follow the Quick Start Guide to set up Argo Events and Argo.
  2. kubectl apply -f examples/amqp-sensor.yaml
  3. kubectl describe sensor.argoproj.io/amqp-example
  4. See "Phase: Error" and "Message: the signal 'amqp' does not exist with the signal universe. please choose one from: [calendar resource nats webhook]"

Expected behavior
The Signal Guide says that the amqp type is supported, and I see amqp under signals/stream/builtin, so why does a sensor with signal type amqp not work?

add check in controller for at least one signal service

Describe the bug
Since #49, signals are deployed separately from the sensor-controller. These were added to the hack/k8s/manifests/services directory, but as pointed out by some users, this fact isn't clearly apparent.

Expected behavior
Signal services are core, critical features of sensors, so ensuring the services are running as part of argo-events should be a feature of the controller.

Proposal
We should do 3 things:

  1. add a check in the controller for at least one signal service and error out if there is none. We can possibly take this a step further and incorporate another signal API that allows us to keep track of the possible signal services and periodically check their health. This way, when someone creates a sensor, the controller will know during validation whether the needed signal services exist for the sensor.
  2. update the docs to say that the signal should be deployed separately.
  3. after 1 and 2, we should perform another pre-release and push up tagged Docker images for all the built-in components, including:
  • argoproj/sensor-controller
  • argoproj/artifact-signal
  • argoproj/calendar-signal
  • argoproj/resource-signal
  • argoproj/webhook-signal
  • argoproj/stream-nats-signal
  • argoproj/stream-kafka-signal
  • argoproj/stream-mqtt-signal
  • argoproj/stream-amqp-signal

implement RetryStrategy for triggers

Is your feature request related to a problem? Please describe.
Currently, the RetryStrategy is unimplemented/unenforced for Triggers.

Describe the solution you'd like
We can leverage the k8s.io/apimachinery/pkg/util/wait package's Backoff spec with wait.ExponentialBackoff for retries.

To start, let's add 3 fields to a RetryStrategy:

type RetryStrategy struct {
	Steps    int     // Exit with error after this many steps
	Duration float64 // The base duration
	Factor   float64 // Duration is multiplied by Factor each iteration
}
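A hedged sketch of how those fields could map onto the wait package (executeTrigger is a hypothetical stand-in for the real trigger execution, not an existing function):

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func retryTrigger(rs RetryStrategy) error {
	backoff := wait.Backoff{
		Steps:    rs.Steps,
		Duration: time.Duration(rs.Duration * float64(time.Second)),
		Factor:   rs.Factor,
	}
	// ExponentialBackoff waits Duration, then Duration*Factor, and so on,
	// returning wait.ErrWaitTimeout once Steps attempts are exhausted.
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		return executeTrigger(), nil // hypothetical: true means success, stop retrying
	})
}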

argo workflow trigger with params

Is your feature request related to a problem? Please describe.
Argo-events was created to become the event framework for Argo. While we support creating an Argo workflow as part of a trigger, workflows are encompassed in the more general ResourceObject, which makes it difficult to couple valuable features like Argo params with the event framework.

Describe the solution you'd like
As discussed in #19 we should have an explicit argo workflow trigger so that we can easily pass arguments/params to the workflow.

Additional context
This may help to solve the problem of #41 depending on how the workflows are included in the sensor spec.

Restrict ResourceObject creation via RBAC roles

Is your feature request related to a problem? Please describe.
Currently, resource object triggers can only create resources which are implemented in the store package. If we want to add support for new resources in the future, we need to change the code. Also different users may want to restrict object creation differently.

Describe the solution you'd like
Resource object creation through triggers should be controlled through RBAC roles. This allows users to define different roles for their specific axis implementation and also allows easily adding/updating/removing resources without requiring code changes.

Additional context
In place of this, we should allow users to pass in a manifest or use an input artifact in object creation.

Make job executors imagePullPolicy configurable

Is your feature request related to a problem? Please describe.
The sensor job executor has an imagePullPolicy of PullIfNotPresent hard-coded. This makes development difficult.

Describe the solution you'd like
It would be good if the imagePullPolicy for the executor were also made configurable, similar to properties like the executor image name, resources, etc.

Describe alternatives you've considered
Continuing the current way requires deleting the executor image from every node in the cluster each time, to ensure that the image is pulled again, which is cumbersome.

signals-tls-configuration

Is your feature request related to a problem? Please describe.
Currently, signals do not rely on the user's authorization; however, most signals should rely on some kind of authorization, whether that's dialing a connection to a queue, accessing an S3 bucket, or watching Kubernetes resources. We need to figure out a way to access these external signal resources with user or system credentials.

Describe the solution you'd like
Create a service/interface to manage auth/creds to certain resources. The simplest source of these should be Kubernetes Secrets. This can be implemented in a similar way to how we manage S3 credentials today.

Describe alternatives you've considered
Considering that the type of credentials is tied directly to what kind of resource a signal pod is trying to access, it makes sense that these credentials are passed in somehow via the sensor spec.
Alternatively, if axis is running in a headless mode, it may also make sense not to send this in the spec every time, and instead provide certain override values through a credential service that could watch secrets with certain labels...


Add argo-ci and pr check

Is your feature request related to a problem? Please describe.
Since this project is now under the Argo Project, we should follow suit with the other projects: add an .argo-ci/ci.yaml file and set up the appropriate build/test checks.

Describe the solution you'd like
Create the CI process for Argo-Events.

Describe alternatives you've considered
I have a working .travis.yml file in the repo and was previously using Travis-CI test/builds to verify PR checks. I would need access to the Travis CI for Argo.

Calendar sensor with Schedule: @every 30s triggers only once

Describe the bug
Calendar sensor with Schedule: @every 30s triggers only once

To Reproduce
create sensor with following config:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: inline-example
  namespace: playground
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
    - name: time
      calendar:
        schedule: "@every 30s"
  triggers:
    - name: inline-workflow-trigger
      resource:
        namespace: playground
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        source:
          inline: |
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: job-hello-world-
              spec:
                entrypoint: whalesay
                templates:
                  -
                    container:
                      args:
                        - "hello world"
                      command:
                        - cowsay
                      image: "docker/whalesay:latest"
                    name: whalesay

Specifying schedule: "@every 30s", I expect it to repeat every 30 seconds; instead, the sensor triggers once and transitions to Phase: Complete.

extensible signal interface

Is your feature request related to a problem? Please describe.
Users of Axis cannot be assumed to use the same signals as implemented in this project. Conversely, in order to maintain usefulness & extensibility, we would need to keep adding new signal types and sources, greatly adding to the complexity of the project.

Describe the solution you'd like
I suggest a containerized interface for signals. Each signal executor is essentially a packaged image that implements a signal interface. Executors could send events, defined as CloudEvent protos, via gRPC to the host controller. These executors would still get deployed within the sensor controller and monitored during the lifecycle of a sensor. The advantages are that we can still leverage K8s jobs + pods for resiliency and scheduling, and we become extensible enough to support any type of signal combination. There are two problems with this approach (a sketch of the interface follows the questions below).

  1. How do we ensure the executor implements the signal interface and doesn't just execute any arbitrary code?
  2. How do we define any extensible signal via a Sensor specification CRD? Will this entail hardcoded fields such as url and key and flexible fields such as maps to store additional information about that signal specification?
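To make the shape concrete, the contract each signal image implements might reduce to something like the following sketch (all names are illustrative, not a committed design):

// Signal is an illustrative plugin contract: a packaged signal image
// implements it and streams CloudEvent-style messages back to the
// sensor controller over gRPC.
type Signal interface {
	// Listen watches the configured source and emits events until the
	// context is cancelled; attributes carries the flexible,
	// source-specific fields mentioned in question 2 above.
	Listen(ctx context.Context, name string, attributes map[string]string) (<-chan *CloudEvent, error)
}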

Describe alternatives you've considered
An alternative would be to define Axis as a library. The downside here is that the library itself would need to be tied very closely with Kubernetes and the Kube API.

repeatable sensors

Is your feature request related to a problem? Please describe.
Initially, Argo Events was envisioned so that a sensor would only live for a single cycle. Once its dependencies were met and it executed its triggers, it completed. However, this is annoying because now you have to keep creating new sensors for every cycle.

Describe the solution you'd like
The ability to create a sensor that lives past a single cycle. This can be a signal that fires every 30min or every business day or every business day after an S3 object arrives between 7am and 9am. We can create this sensor once and forget about it. It should re-instantiate or reset itself so that you don't have to create an entirely new one again that does the same thing.

There are a couple important things to consider for this:

  • We don't want to overwrite sensor history; we MUST keep track of what happened and never delete data.
  • We still want visibility into an entire history chain of multiple cycles of a sensor (how many times it triggered in the past week)

Describe alternatives you've considered

  • First alternative is that we don't support cyclical or repeatable sensors.
  • Second alternative is for some other super process to figure out the total # of completions for a sensor and pass that in the sensor spec.
  • Remove the concept of dependency and opt for a FaaS approach. This means that a sensor would then be restricted to at MAX one signal. With this approach, we can get repeatability easily, but we still need a way to store past events, for which #35 could be an easy solution, or we could explore different options through #4.

Long running sensors

Is your feature request related to a problem? Please describe.
When a signal resolves, we resolve the state of the sensor (assuming it contains only one signal). If the sensor is repeatable, we reinitialize the sensor after the trigger is complete. But in the case of signals like webhook, we want to keep the HTTP server running and stop the signal only if it runs past a certain predefined time or a user/some process stops it.

Describe the solution you'd like
Add a "type" to the signal which specifies whether it is resolved when a single event occurs or lives until a specified timeout. The timeout will be an optional field, as we may want to keep the sensor running.

Example:

signals:
  - name: signal1
    mqtt:
      url: tcp://localhost:1883
      type: singular
      topic: hello
  - name: signal2
    webhook:
      port: 9000
      endpoint: "/app"
      method: "POST"
      type: continuous
      timeout: 10000s

NPE in artifact s3

Describe the bug
The artifact sensor controller throws a NullPointerException when an S3 sensor is applied.

To Reproduce
Steps to reproduce the behavior:

  1. Apply sensor (see example)
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: bucket-trigger
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
  - name: s3Created
    artifact:
      s3:
        bucket: validS3Bucket
        event: s3:ObjectCreated:*
        endpoint: s3-eu-west-1.amazonaws.com
        insecure: false
        accessKey:
          key: accessKey
          name: argo-repo-argo
        secretKey:
          key: secretKey
          name: argo-repo-argo
      target:
        type: NATS
        url: nats://nats-nats-client.argoproj.svc.cluster.local:4222
        attributes:
          subject: s3 object creted
  triggers: 
  ...
  2. See the error in the signal-artifact pod logs:
runtime error: invalid memory address or nil pointer dereferencegoroutine 13 [running]:
runtime/debug.Stack(0xc420257210, 0xde6de0, 0x154bb30)
	/usr/local/Cellar/go/1.10/libexec/src/runtime/debug/stack.go:24 +0xa7
github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).accept.func1.1.1(0xc4202b49e0, 0xc4200c2500)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:159 +0x79
panic(0xde6de0, 0x154bb30)
	/usr/local/Cellar/go/1.10/libexec/src/runtime/panic.go:505 +0x229
github.com/argoproj/argo-events/signals/artifact.(*s3).Listen(0xc4203a97c0, 0xc4200f8000, 0xc420084de0, 0x0, 0xc420084de0, 0xdc6c60)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/signals/artifact/s3.go:99 +0x254
github.com/argoproj/argo-events/sdk.(*microSignalServer).handshake(0xc4203a97d0, 0xfe4320, 0xc4202eec40, 0xc420084de0, 0x410379, 0xc4202eec40, 0x10)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/sdk/micro_server.go:84 +0xe4
github.com/argoproj/argo-events/sdk.(*microSignalServer).Listen(0xc4203a97d0, 0xfe1fe0, 0xc420421290, 0xfe4320, 0xc4202eec40, 0xe41ca0, 0xc4200ec6c0)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/sdk/micro_server.go:34 +0x82
github.com/argoproj/argo-events/sdk.(*signalServiceHandler).Listen(0xc4203a97e0, 0xfe1fe0, 0xc420421290, 0x7fe7b61d8218, 0xc4200ec6c0, 0xc4204395f8, 0x4116ac)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/sdk/signal.micro.go:174 +0x90
reflect.Value.call(0xc4200dc7e0, 0xc4200e2458, 0x13, 0xf0e5ca, 0x4, 0xc4204399d0, 0x3, 0x3, 0xe319e0, 0xe41ca0, ...)
	/usr/local/Cellar/go/1.10/libexec/src/reflect/value.go:447 +0x969
reflect.Value.Call(0xc4200dc7e0, 0xc4200e2458, 0x13, 0xc4203b19d0, 0x3, 0x3, 0x38, 0x38, 0xc4203d6ac0)
	/usr/local/Cellar/go/1.10/libexec/src/reflect/value.go:308 +0xa4
github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).processStream.func1(0xfe1fe0, 0xc420421290, 0xfe44a0, 0xc4200e19f0, 0xe41ca0, 0xc4200ec6c0, 0x14, 0xc420324e68)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:434 +0x189
github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).processStream(0xc4200c2500, 0xfea500, 0xc42039da00, 0xc42029c200, 0xc420457c80, 0xc4200ec240, 0xfe1660, 0x15814d8, 0xc420258600, 0x16, ...)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:446 +0x42b
github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).serveStream(0xc4200c2500, 0xfea500, 0xc42039da00, 0xc42029c200)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:249 +0x708
github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).accept.func1.1(0xc4202b49e0, 0xc4200c2500, 0xfea500, 0xc42039da00, 0xc42029c200)
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:163 +0x7d
created by github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc.(*grpcServer).accept.func1
	/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/micro/go-plugins/server/grpc/grpc.go:153 +0xbe

Additional info
Deployed from:

  • docker.io/argoproj/artifact-signal:latest
  • docker.io/argoproj/sensor-controller:latest

Add top level boolean logic operator for signals

Is your feature request related to a problem? Please describe.
With the current Sensor spec, the user can only use the AND operator for multiple signals; it lacks support for complex signal circuiting.

Describe the solution you'd like
Use "--" for the OR operator and "-" for the AND operator.

Example:

signals:
   -- name: time
      calendar:
        interval: 10s
    - name: minioS3
      artifact:
        s3:
          bucket: hello
          endpoint: artifacts-minio.default:9000
          insecure: true
          accessKey:
            key: accesskey
            name: artifacts-minio
          secretKey:
            key: secretkey
            name: artifacts-minio
          event: s3:ObjectCreated:Put
          arn:
            partition: minio
            service: sqs
            region: us-east-1
            accountID: "1"
            resource: nats
        stream:
          nats:
            url: nats://example-nats-cluster:4222
            subject: bucketevents
   -- name: worklow-1
      resource:
        namespace: default
        group: "argoproj.io"
        version: "v1alpha1"
        kind: "Workflow"
        filter:
          prefix: scripts-bash
          labels:
            workflows.argoproj.io/phase: Succeeded

webhook multiple http registrations

Describe the bug
An error is encountered in the webhook signal when you have a webhook with a certain path, delete it, and then re-create it with the same path. (assuming both signal streams connect to the same webhook signal pod and the pod doesn't die between requests)

To Reproduce
Steps to reproduce the behavior:

  1. k create -f examples/webhook-sensor.yaml
  2. k delete sensor webhook-example
  3. k create -f examples/webhook-sensor.yaml

Expected behavior
One should be able to delete and re-create sensors without having to modify and/or change the spec.

Additional context
This is related to how multiplexers & servers work. We were initially creating a separate server with a new ServeMux on every Listen(); however, we cannot do this, since we cannot start two servers listening on the same port (and now the port is fixed at signal startup).
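For reference, this matches the standard library's behavior: registering the same pattern twice on one http.ServeMux panics, so a long-lived shared mux cannot simply re-register a re-created sensor's path (minimal demonstration below):

mux := http.NewServeMux()
handler := func(w http.ResponseWriter, r *http.Request) {}
mux.HandleFunc("/", handler)
mux.HandleFunc("/", handler) // panics: "http: multiple registrations for /"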

Trigger inputs

Is your feature request related to a problem? Please describe.
Currently, the sensor lacks a way to pass the payload obtained from a signal to triggers.

Describe the solution you'd like

add s3 sensor with resource parameter example

Is your feature request related to a problem? Please describe.
Triggering a workflow off an s3 bucket PUT is a popular use case.

Describe the solution you'd like
We should add an example for this.

Additional context
There's also another important thing here that I hope to address. Currently, the ArtifactListener looks like:

// ArtifactListener is the interface for listening with artifacts
// In addition to including the basic Listener interface, this also
// enables access to read an artifact object to include in the event data payload
type ArtifactListener interface {
	Listener
	// TODO: change to use io.Reader and io.Closer interfaces?
	Read(loc *v1alpha1.ArtifactLocation, key string) ([]byte, error)
}

and in the s3 signal implementation, we are attaching the actual s3 file contents to the data field of the event. I'm of the opinion that we should stop doing this, for a couple of reasons:

  1. we have no idea how big this file may be and it could cause serious memory issues and/or problems with updating the k8s sensor CRD resource
  2. we do not necessarily know the file exists since we support any type of s3 notification (DELETES included)
  3. we do not know the content type or any other information besides the size of the file and some basic metadata from the notification.

What I am proposing is to remove the extra Read method for ArtifactListeners; they should simply be regular Listeners. We should instead attach the data as the JSON-marshalled event notification, so it looks like: https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
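A sketch of the proposed change in the s3 signal, where notification stands in for the parsed bucket-notification record and event for the outgoing event (field names are illustrative):

// Attach the JSON-marshalled bucket notification as the event payload
// instead of reading (and inlining) the object contents.
payload, err := json.Marshal(notification)
if err != nil {
	return err
}
event.Data = payload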

escalation message not sent; code unreachable

During a sensor pod failure, the logic of the operator is such that it returns an error after it evaluates the sensor job, thus preventing an escalation message from being sent on failure. Let's fix this, and add a unit test case to ensure that an escalation message is sent when the sensor reaches an Error state.

Sensor for reacting to Kubernetes objects fails

Describe the bug
I am trying to run a workflow in response to a Kubernetes object being created (specifically, a namespace). I am running the sensor-controller and the resource signals. However, when I created the sensor CR, the sensor-controller crashed with an error:

2018-07-31T21:56:17.374Z	ERROR	controller/controller.go:149	Error syncing sensor 'default/resource-example': the signal 'resource' does not exist with the signal universe. please choose one from: [calendar artifact webhook]

To Reproduce

  1. Run sensor-controller
  2. Run the resource signal.
$ kp
NAME                                 READY     STATUS    RESTARTS   AGE
artifacts-minio-85547b6bd9-vtbfd     1/1       Running   0          14d
sensor-controller-766675b9df-dzm64   1/1       Running   0          23m
signal-calendar-78d7c8f5c-v7r8j      1/1       Running   0          22m
signal-resource-64d57998d9-kj87m     1/1       Running   0          7m
signal-webhook-6c896d9b8f-xwnc2      1/1       Running   0          22m
  3. Verify that the service objects exist in the cluster:
$  k get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
artifacts-minio   ClusterIP   None            <none>        9000/TCP   14d
calendar          ClusterIP   100.70.43.60    <none>        8080/TCP   6d
kubernetes        ClusterIP   100.64.0.1      <none>        443/TCP    15d
resource          ClusterIP   100.71.110.61   <none>        8080/TCP   8m
  4. Create the following sensor object:
$ cat /stash/argo-events-examples/k8s.yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: resource-example
  labels:
    sensors.argoproj.io/controller-instanceid: axis
spec:
  signals:
    - name: worklow-1
      resource:
        namespace: default
        group: ""
        version: "v1"
        kind: "Namespace"
        filter:
          prefix: scripts-bash
  triggers:
    - name: ns-workflow
      resource:
        namespace: default
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        source:
          inline: |
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: hello-world-
              spec:
                entrypoint: whalesay
                templates:
                  -
                    container:
                      args:
                        - "hello world"
                      command:
                        - cowsay
                      image: "docker/whalesay:latest"
                    name: whalesay
  5. When the above sensor is created using kubectl create, the sensor controller throws the following errors:
2018-07-31T21:56:17.364Z	INFO	controller/signal.go:65	WARNING: event stream for signal 'worklow-1' is missing - could have missed events! reconnecting stream...	{"sensor": "resource-example", "namespace": "default"}
2018-07-31T21:56:17.374Z	ERROR	controller/controller.go:149	Error syncing sensor 'default/resource-example': the signal 'resource' does not exist with the signal universe. please choose one from: [calendar artifact webhook]
github.com/argoproj/argo-events/controller.(*SensorController).handleErr
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:149
github.com/argoproj/argo-events/controller.(*SensorController).processNextItem
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:121
github.com/argoproj/argo-events/controller.(*SensorController).runWorker
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:185
github.com/argoproj/argo-events/controller.(*SensorController).(github.com/argoproj/argo-events/controller.runWorker)-fm
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/controller/controller.go:178
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/Users/sjavadekar/ws/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
ERROR: logging before flag.Parse: W0731 22:02:50.478195       1 reflector.go:341] github.com/argoproj/argo-events/controller/config.go:61: watch of *v1.ConfigMap ended with: too old resource version: 2561690 (2562505)

Expected behavior
The sensor-controller should correctly execute the workflow when a namespace is created.

Add Docker events as signal

Is your feature request related to a problem? Please describe.
Docker offers a multitude of events on different types of Docker objects. It would be useful to capture these events and trigger workflows.

Describe the solution you'd like
Proposed sensor:

apiVersion: core.events/v1alpha1
kind: Sensor
metadata:
  name: docker-example
  namespace: cloud-native-scheduler
  labels:
    sensors.core.events/controller-instanceid: axis
spec:
  repeat: true
  signals:
    - name: docker
      docker:
        type: "container"
        action: "create"
        # https://docs.docker.com/engine/reference/commandline/ps/#filtering
        filters:
          name: "hello-world"
          label: "myLabel=myValue"
  triggers:
    - name: done-workflow
      resource:
        namespace: cloud-native-scheduler
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        artifactLocation:
          s3:
            bucket: workflows
            key: hello-world.yaml
            endpoint: minio-service.cloud-native-scheduler:9000
            insecure: true
            accessKey:
              key: accesskey
              name: artifacts-minio
            secretKey:
              key: secretkey
              name: artifacts-minio

codegen makes PR conflicts on every types.go change

This is not a true issue but more of an annoyance.

I'm noticing that the file generated.pb.go has many merge conflicts whenever the types.go file is changed in 2 concurrent PRs. Resolving these conflicts is annoying. I wonder if this is a result of:

[[constraint]]
  name = "k8s.io/code-generator"
  branch = "release-1.10"

as this wasn't an issue before #51.

Publish containers to docker hub

Describe the bug
The k8s deployment manifest references argoproj/sensor-controller:latest; however, this image doesn't exist.

To Reproduce
Steps to reproduce the behavior:

  1. kubectl create -f hack/k8s/manifests/*
  2. kubectl get po && kubectl describe <sensor-controller-pod>

Expected behavior
Pod is running

Screenshots

  Type     Reason                 Age   From                         Message
  ----     ------                 ----  ----                         -------
  Normal   Scheduled              8s    default-scheduler            Successfully assigned sensor-controller-6dc9694978-t2npf to docker-for-desktop
  Normal   SuccessfulMountVolume  8s    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "argo-events-token-2kd6f"
  Normal   Pulling                7s    kubelet, docker-for-desktop  pulling image "argoproj/sensor-controller:latest"
  Warning  Failed                 7s    kubelet, docker-for-desktop  Failed to pull image "argoproj/sensor-controller:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for argoproj/sensor-controller, repository does not exist or may require 'docker login'
  Warning  Failed                 7s    kubelet, docker-for-desktop  Error: ErrImagePull
  Normal   BackOff                6s    kubelet, docker-for-desktop  Back-off pulling image "argoproj/sensor-controller:latest"
  Warning  Failed                 6s    kubelet, docker-for-desktop  Error: ImagePullBackOff

Environment (please complete the following information):

  • OS: macOS

url_test hangs when behind proxy

Describe the bug
Running make test hangs on the store package and specifically on the url_test.go TestURLReader test.

To Reproduce
Steps to reproduce the behavior:

  1. run make test

Expected behavior
The test should pass, and it should not depend on an internet connection or the outside world.

We should use the httptest package to run a localhost server while running this test.
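A sketch of that suggestion, where readURL stands in for whatever function the store package's URL reader actually exposes:

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestURLReader(t *testing.T) {
	// Serve a fixed payload from localhost so the test never needs
	// outbound network access (and cannot hang behind a proxy).
	srv := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, "apiVersion: argoproj.io/v1alpha1")
		}))
	defer srv.Close()

	data, err := readURL(srv.URL) // hypothetical reader under test
	if err != nil {
		t.Fatalf("read failed: %v", err)
	}
	if len(data) == 0 {
		t.Fatal("expected a non-empty payload")
	}
}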

Add validation for sensor

Is your feature request related to a problem? Please describe.
Currently there is no validation for sensors other than checking that each signal type is supported.

Describe the solution you'd like
we need to add validation for the following rules (a sketch of the first rule follows the list):

  • each signal only defines one of: (stream, artifact, calendar, resource, webhook)
  • all resource object trigger parameters reference a signal that's defined
  • all signal time filters specify a start time before stop time and a stop time after current time (allow nil values here to represent +/- infinity?)
  • calendar signals all have either a schedule or interval defined and the recurrence patterns are okay
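As a sketch of the first rule (field names follow the sensor examples in this document; the real spec types may differ):

// Illustrative check: a signal must set exactly one of the five
// supported source types.
func validateSignal(s *Signal) error {
	count := 0
	for _, set := range []bool{
		s.Stream != nil,
		s.Artifact != nil,
		s.Calendar != nil,
		s.Resource != nil,
		s.Webhook != nil,
	} {
		if set {
			count++
		}
	}
	if count != 1 {
		return fmt.Errorf("signal %q must define exactly one of: stream, artifact, calendar, resource, webhook", s.Name)
	}
	return nil
}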

Documentation for quickstart is not accurate

The doc docs/quickstart.md seems wrong.

Here are the steps I had to do in order to generate the images:

  1. go get github.com/argoproj/argo-events
  2. cd ~/go/src/github.com/argoproj/argo-events/
  3. dep ensure -vendor-only

Then I was able to build the image.

Events as CRD

Making events a CRD:

  1. As soon as signal receives the event, we create a CRD with appropriate labels.

  2. Once all signals are resolved, we create and "watch" the trigger. As soon as the trigger resource successfully completes, we mark the associated Event CRDs as complete and delete them (we may provide an option to store them in a datastore like ElasticSearch). The drawback here: if the trigger is not a Kubernetes resource, how do we make sure it completed? Deleting event CRDs as soon as the trigger resource completes prevents us from keeping useless event CRDs around.

  3. In case a sensor dies (for any reason whatsoever) in the middle of execution, we already have the received events as CRDs. So when the user recreates that sensor, we provide an option to either replay all events associated with that sensor or cherry-pick the events.

Also, we can have a vacuum process, just as Brigade has, to clear old event CRDs.

Looking at Brigade.js, they are considering making events a CRD:

Brigade events are currently specified as Kubernetes Secrets with particular labels. We use secrets because at the time of development, Third Party Resources were deprecated and Custom Resource Definitions are not final. This aspect of the system may change between the 0.1.0 release of Brigade and the 1.0.0 release.

Also, I am wondering whether we should have a sensor gateway similar to the Brigade gateway, though I am not sure how that fits with our architecture.

Add Link to community resources

Describe the solution you'd like

  1. We should add a link to the README for the Argo Events slack channel.
  2. Should we also link to the Argo Google Forum?

Any other ideas?

evaluate re-architecting around knative-eventing

Is your feature request related to a problem? Please describe.
At the 2018 Google Cloud Next, Google unveiled the Knative project which contains an eventing repo with the following motivations:

  1. Services are loosely coupled during development and deployed independently
  2. A producer can generate events before a consumer is listening, and a consumer can express an interest in an event or class of events that is not yet being produced.
  3. Services can be connected to create new applications
    without modifying producer or consumer, and
    with the ability to select a specific subset of events from a particular producer.

Describe the solution you'd like
The eventing solution presents solid building blocks: EventSources, Channels, Buses, Subscriptions, and Flows as Kubernetes CRDs to utilize in solving the above.

Argo Events has been focused on solving the problem of triggering arbitrary Kubernetes "actions" through external events. We've been able to solve (1), but our implementation stands to benefit from Knative's implementation of (2) and (3). We should present a path forward for leveraging the Knative eventing platform and focus on creating a solid interface to Argo Workflows.

Additional context
Let's keep a close eye on the development of the eventing platform, especially as it relates to their dependency on Istio and the Knative serving platform.

clean up controller logging

Is your feature request related to a problem? Please describe.
The sensor-controller logs are difficult to read and too verbose. We need to simplify the logging.

Describe the solution you'd like
I recommend removing the go.uber.org/zap log dependency and using an alternative instead, as we don't care too much about performance and we're always using the Sugared() logger anyway. We can follow suit with the argo workflow-controller and use logrus, or use glog. I'm open to alternatives as long as they are simple and easy to use.

We should use the same logger for the signals as well.
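
For illustration, a minimal logrus setup in the style being proposed; the structured fields are just examples:

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// Keep configuration minimal: readable text output, info level.
	log.SetFormatter(&log.TextFormatter{FullTimestamp: true})
	log.SetLevel(log.InfoLevel)

	// Structured fields keep messages short and grep-able.
	log.WithFields(log.Fields{
		"sensor": "example-sensor",
		"phase":  "resolved",
	}).Info("sensor state updated")
}
```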

Argo-events should not require workflows to be in an S3 bucket

Is your feature request related to a problem? Please describe.
Currently, argo-events requires an artifactLocation in the Trigger. This artifactLocation has the details of the workflow to run when the trigger is fired. This is cumbersome to set up (even with minio).

Describe the solution you'd like
It would be better if there were no need for the S3 bucket. Ideally, the workflow to run could be inlined in the Sensor YAML. If not, another option would be to keep the workflow YAML in a ConfigMap. A hypothetical sketch of both options follows.
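
A hypothetical sketch of how the trigger's workflow source could grow inline and ConfigMap options alongside the existing artifactLocation; all type and field names here are assumptions:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
)

// ResourceSource is a hypothetical union of places the trigger's workflow
// definition could come from; exactly one field would be set.
type ResourceSource struct {
	// S3/minio artifact: the current behavior.
	ArtifactLocation *ArtifactLocation `json:"artifactLocation,omitempty"`
	// Raw workflow YAML inlined directly in the Sensor spec.
	Inline string `json:"inline,omitempty"`
	// Reference to a key in a ConfigMap holding the workflow YAML.
	ConfigMap *corev1.ConfigMapKeySelector `json:"configmap,omitempty"`
}

// ArtifactLocation is a pared-down stand-in for the existing S3 location.
type ArtifactLocation struct {
	Bucket string `json:"bucket"`
	Key    string `json:"key"`
}
```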

Describe alternatives you've considered
None

controller panics after 20 sensor sync retries if the user has not configured Spec.Escalation

Describe the bug
The controller panics after retrying a sensor sync 20 times.

To Reproduce
Steps to reproduce the behavior:

  1. stop the signal deployment
  2. wait for 20 retries
  3. see the error

The error is produced here; it should not call sendMessage if the user has not configured Spec.Escalation:

	if err != nil {
		// now let's escalate the sensor
		// the context should have the most up-to-date version
		log.Infof("escalating sensor to level %s via %s message", ctx.s.Spec.Escalation.Level, ctx.s.Spec.Escalation.Message.Stream.Type)
		err := sendMessage(&ctx.s.Spec.Escalation.Message)
		if err != nil {
			log.Panicf("failed escalating sensor '%s'", key)
		}
	}
ERROR: logging before flag.Parse: E0810 07:30:23.167717       1 runtime.go:66] Observed a panic: &logrus.Entry{Logger:(*logrus.Logger)(0xc4203f0140), Data:logrus.Fields{}, Time:time.Time{wall:0xbed36da3c9fcf676, ext:4169274126066, loc:(*time.Location)(0x1fbbf40)}, Level:0x0, Message:"failed escalating sensor 'argo/test2'", Buffer:(*bytes.Buffer)(nil)} (&{0xc4203f0140 map[] 2018-08-10 07:30:23.16757311 +0000 UTC m=+4169.274126066 panic failed escalating sensor 'argo/test2' <nil>})
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.10/libexec/src/runtime/asm_amd64.s:573
/usr/local/Cellar/go/1.10/libexec/src/runtime/panic.go:505
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:126
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:194
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:242
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/logger.go:181
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/exported.go:155
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:125
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:182
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:175
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.10/libexec/src/runtime/asm_amd64.s:2361
panic: (*logrus.Entry) (0x1533b40,0xc4213f0e10) [recovered]
        panic: (*logrus.Entry) (0x1533b40,0xc4213f0e10)

goroutine 88 [running]:
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x1533b40, 0xc4213f0e10)
        /usr/local/Cellar/go/1.10/libexec/src/runtime/panic.go:505 +0x229
github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus.Entry.log(0xc4203f0140, 0xc420959320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:126 +0x2d2
github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus.(*Entry).Panic(0xc4213f0c30, 0xc4210fdce0, 0x1, 0x1)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:194 +0xaa
github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus.(*Entry).Panicf(0xc4213f0c30, 0x157ff71, 0x1d, 0xc4210fdde0, 0x1, 0x1)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/entry.go:242 +0xed
github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus.(*Logger).Panicf(0xc4203f0140, 0x157ff71, 0x1d, 0xc4210fdde0, 0x1, 0x1)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/logger.go:181 +0x85
github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus.Panicf(0x157ff71, 0x1d, 0xc4210fdde0, 0x1, 0x1)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/github.com/sirupsen/logrus/exported.go:155 +0x5f
github.com/argoproj/argo-events/controller.(*SensorController).processNextItem(0xc420122820, 0xc4201c3200)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:125 +0x3a4
github.com/argoproj/argo-events/controller.(*SensorController).runWorker(0xc420122820)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:182 +0x2b
github.com/argoproj/argo-events/controller.(*SensorController).(github.com/argoproj/argo-events/controller.runWorker)-fm()
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:175 +0x2a
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4208dc9b0)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4208dc9b0, 0x3b9aca00, 0x0, 0x1, 0x0)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4208dc9b0, 0x3b9aca00, 0x0)
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/argoproj/argo-events/controller.(*SensorController).Run
        /Users/mmagaldi/go/src/github.com/argoproj/argo-events/controller/controller.go:175 +0x2f8

Expected behavior
It should not call sendMessage if the user has not configured Spec.Escalation.
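
A hedged sketch of the requested guard, shown as a drop-in replacement for the snippet above rather than a standalone program, and assuming Spec.Escalation becomes a pointer (or otherwise exposes whether it was configured):

```go
if err != nil {
	// Only escalate when the user actually configured an escalation policy;
	// otherwise sendMessage fails on the empty policy and the Panicf below
	// crashes the whole controller.
	if ctx.s.Spec.Escalation == nil {
		log.Warnf("sensor '%s' exceeded retries but has no escalation configured", key)
	} else {
		log.Infof("escalating sensor to level %s via %s message",
			ctx.s.Spec.Escalation.Level, ctx.s.Spec.Escalation.Message.Stream.Type)
		if err := sendMessage(&ctx.s.Spec.Escalation.Message); err != nil {
			// Log the failure instead of log.Panicf, so one bad sensor
			// cannot take down the controller.
			log.Errorf("failed escalating sensor '%s': %v", key, err)
		}
	}
}
```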

clean up old sensors

We need the ability to clean up resolved/successful sensors after a certain amount of time. Ideally, we also want to keep a record of these sensors in an external store like S3, etc.

This feature should exist as a deployment separate from the controller.
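
A hypothetical skeleton for such a cleanup deployment; the store interface, phase name, and TTL handling are all assumptions:

```go
package cleaner

import (
	"fmt"
	"time"
)

// Sensor is a pared-down stand-in for the real sensor resource.
type Sensor struct {
	Name        string
	Phase       string
	CompletedAt time.Time
}

// SensorStore abstracts the generated clientset and the external archive.
type SensorStore interface {
	List() ([]Sensor, error)
	Delete(name string) error
	Archive(s Sensor) error // e.g., write the sensor record to S3
}

// Cleanup archives and deletes sensors resolved longer than ttl ago.
func Cleanup(store SensorStore, ttl time.Duration) error {
	sensors, err := store.List()
	if err != nil {
		return err
	}
	for _, s := range sensors {
		if s.Phase != "Resolved" || time.Since(s.CompletedAt) < ttl {
			continue
		}
		// Keep the record before removing the live object.
		if err := store.Archive(s); err != nil {
			return fmt.Errorf("archiving %s: %v", s.Name, err)
		}
		if err := store.Delete(s.Name); err != nil {
			return err
		}
	}
	return nil
}
```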

Remove port from WebhookSignal

Is your feature request related to a problem? Please describe.
WebhookSignals currently allow users to specify a port to listen on; however, this isn't respected because the ClusterIP Service (for the webhook signal deployment) needs to be configured at deployment time and is therefore fixed at runtime.

Describe the solution you'd like
We should not let the user choose a port to listen on. I envision this would present a problem for the webhook signal unit tests, so we'd likely have to modify the webhook signal implementation. I'm leaning toward making the webhook struct contain a reference to the http server, which would be started on calling New(). This would further entail creating a custom http mux that allows us not only to register new endpoints, but also to deregister them. We'll have to look into this further, as http.DefaultServeMux purposely chose not to support deregistration: https://groups.google.com/forum/#!topic/golang-dev/kgN2TiUQf3M
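
A hedged sketch of the custom mux idea: a mutex-guarded handler registry that supports deregistration, which the default mux deliberately does not. All names are illustrative:

```go
package main

import (
	"net/http"
	"sync"
)

// DynamicMux routes by exact path and supports deregistration.
type DynamicMux struct {
	mu       sync.RWMutex
	handlers map[string]http.Handler
}

func NewDynamicMux() *DynamicMux {
	return &DynamicMux{handlers: map[string]http.Handler{}}
}

// Handle registers a handler for an exact path.
func (m *DynamicMux) Handle(path string, h http.Handler) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.handlers[path] = h
}

// Deregister removes a previously registered path.
func (m *DynamicMux) Deregister(path string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.handlers, path)
}

func (m *DynamicMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	m.mu.RLock()
	h, ok := m.handlers[r.URL.Path]
	m.mu.RUnlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	h.ServeHTTP(w, r)
}

func main() {
	mux := NewDynamicMux()
	mux.Handle("/hooks/a", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	mux.Deregister("/hooks/a") // /hooks/a now returns 404 again
	http.ListenAndServe(":8080", mux)
}
```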

Code generation with ./hack/update-codegen.sh fails

Describe the bug

I was trying to add some additional fields to a struct in types.go. I added the code and tried to run ./hack/update-codegen.sh to generate new code. However, that failed with the following error:

$ ./hack/update-codegen.sh
bash: ./vendor/k8s.io/code-generator/generate-groups.sh: No such file or directory

Expected behavior
./hack/update-codegen.sh should execute without any errors.

use event context to decode data

Is your feature request related to a problem? Please describe.
Currently, the renderEventDataAsJSON function expects the event context's Content-Type to be JSON or YAML in order to decode the event's data. This means that the event cannot "describe its own structure". Should it be able to, and if so, is this a solvable problem? If not, should we support other media types like protocol buffers?

Describe the solution you'd like
I'm not sure what this looks like yet, but it may be worth a discussion in the CloudEvents repo.

Describe alternatives you've considered
The alternative is the current implementation: leave it as is and just add support for other encodings.
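
For illustration, a simplified sketch of the content-type dispatch described above, with a protobuf branch stubbed in as the kind of extension under discussion; the function name and media-type constants are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ghodss/yaml"
)

// renderDataAsJSON dispatches on the event context's Content-Type.
func renderDataAsJSON(contentType string, data []byte) ([]byte, error) {
	switch contentType {
	case "application/json":
		if !json.Valid(data) {
			return nil, fmt.Errorf("invalid JSON payload")
		}
		return data, nil
	case "application/x-yaml", "text/yaml":
		// ghodss/yaml converts YAML directly to JSON.
		return yaml.YAMLToJSON(data)
	// case "application/protobuf":
	//	a protobuf branch would need the message type from somewhere,
	//	which is exactly the "event can't describe its own structure" gap.
	default:
		return nil, fmt.Errorf("unsupported content type %q", contentType)
	}
}

func main() {
	out, err := renderDataAsJSON("text/yaml", []byte("a: 1"))
	fmt.Println(string(out), err) // {"a":1} <nil>
}
```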

Webhook unit test panics with invalid memory address

Describe the bug
Unit tests for webhook signals panic with the following stacktrace:

=== RUN   TestSignal/delete
2018/07/23 04:23:25 server successfully shutdown
2018/07/23 04:23:25 received a request from 'localhost:9000'
2018/07/23 04:23:25 server successfully shutdown
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x831f6e]
goroutine 10 [running]:
testing.tRunner.func1(0xc420240000)
        /usr/local/go/src/testing/testing.go:711 +0x5d9
panic(0x17ad5a0, 0x22762b0)
        /usr/local/go/src/runtime/panic.go:491 +0x2a2
net/http.(*Server).Shutdown(0xc420274000, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2506 +0x20e
github.com/argoproj/argo-events/signals/webhook.(*webhook).Stop(0xc4203faa20, 0xc420124400, 0xc4203ece10)
        /go/src/github.com/argoproj/argo-events/signals/webhook/signal.go:105 +0x89
github.com/argoproj/argo-events/signals/webhook.makeAPIRequest(0xc420240000, 0x18f0592, 0x6, 0x18f16cf, 0x7)
        /go/src/github.com/argoproj/argo-events/signals/webhook/signal_test.go:70 +0x72d
github.com/argoproj/argo-events/signals/webhook.testDeleteRequest(0xc420240000)
        /go/src/github.com/argoproj/argo-events/signals/webhook/signal_test.go:85 +0x63
testing.tRunner(0xc420240000, 0x1970078)
        /usr/local/go/src/testing/testing.go:746 +0x16d
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:789 +0x569
FAIL    github.com/argoproj/argo-events/signals/webhook 0.559s

To Reproduce
http://argo.applatix.net/workflows/default/argo-events-ci-cprp2?tab=workflow&nodeId=argo-events-ci-cprp2-613907525&sidePanel=logs%3Aargo-events-ci-cprp2-613907525%3Amain

Expected behavior
Unit tests should succeed.
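
A hedged guess at the fix: guard Stop against a server that was never started or was already shut down. The names mirror the stacktrace, but the struct layout is assumed:

```go
package webhook

import (
	"context"
	"net/http"
)

// webhook is a pared-down stand-in for the signal's struct.
type webhook struct {
	srv *http.Server
}

// Stop shuts the server down once; the nil check avoids the nil pointer
// dereference in (*http.Server).Shutdown seen in the stacktrace above.
func (w *webhook) Stop() error {
	if w.srv == nil {
		return nil // never started, or already stopped
	}
	err := w.srv.Shutdown(context.Background())
	w.srv = nil // make a second Stop a no-op instead of a panic
	return err
}
```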
