
eventing-redis's People

Contributors

aavarghese, ahmedwaleedmalik, amarflybot, arghya88, bvennam, cali0707, creydr, dprotaso, evankanderson, hanfi, harwayne, isenr, juliafriedman8, knative-automation, lionelvillard, markusthoemmes, mattmoor, pierdipi, rhuss, tdeverdiere, vaikas


eventing-redis's Issues

Support for cloud based Redis instance

Problem
Currently, the setup instructions for the RedisStreamSource have us install a cluster-local version of Redis. We should be able to use other Redis instances as well, for example a cloud-provided Redis instance.
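For illustration, a RedisStreamSource pointing at a cloud instance might look like the sketch below. The apiVersion, field names, and sink target are assumptions based on the local-Redis sample, not a tested manifest:

```yaml
apiVersion: sources.knative.dev/v1alpha1
kind: RedisStreamSource
metadata:
  name: mystream-source
spec:
  # Cloud-provided instance instead of the in-cluster samples/redis
  address: "rediss://$USERNAME:$PASSWORD@my-instance.databases.example.cloud:32086"
  stream: mystream
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```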

Persona:
Which persona is this feature for?
Event Producer & Consumer

Exit Criteria
A measurable (binary) test that would indicate that the problem has been resolved.
I am able to use a redis instance from a cloud provider instead of the locally installed sample/redis.

Time Estimate (optional):
How many developer-days do you think this may take to resolve?

Additional context (optional)
Add any other context about the feature request here.

It appears that stream source component is dropping some requests

Describe the bug
When doing some load testing of the async-component, I noticed that some requests are lost. It looks like the requests are not being sent to my sink.

Expected behavior
I would expect all items to be sent to my sink. Maybe we should retry failed requests?

To Reproduce
I'm using the loadtest NPM module to make 10 requests per second for 20 seconds to my application. You can see setup instructions for the async component in our sandbox project: https://github.com/knative-sandbox/async-component

Knative release version

Additional context
Add any other context about the problem here such as proposed priority

On trying to connect to a cloud based redis db, my stream pod crashes

Describe the bug
When I use the new feature to connect to a cloud-based Redis DB, my stream pod crashes.

To Reproduce

  1. For stream source, use:
    address: "rediss://$USERNAME:$PASSWORD@7f41ece8-ccb3-43df-b35b-7716e27b222e.b2b5a92ee2df47d58bad0fa448c15585.databases.appdomain.cloud:32086"
  2. This is the error I see:
    panic: dial tcp: address: rediss://username:[email protected]:30285: too many colons in address

I also tried removing the rediss://, but that didn't work either.
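The "too many colons" message is what net.Dial returns when it is handed a full rediss:// URL instead of a bare host:port, which suggests the URL is not being parsed before dialing. A minimal sketch of the parsing step (hypothetical helper, not the adapter's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseRedisAddress splits a redis:// or rediss:// URL into the
// host:port that net.Dial expects, plus credentials. Passing the full
// URL straight to net.Dial fails with "too many colons in address".
func parseRedisAddress(address string) (hostPort, username, password string, useTLS bool, err error) {
	u, err := url.Parse(address)
	if err != nil {
		return "", "", "", false, err
	}
	if u.User != nil {
		username = u.User.Username()
		password, _ = u.User.Password()
	}
	return u.Host, username, password, u.Scheme == "rediss", nil
}

func main() {
	host, user, _, tls, _ := parseRedisAddress("rediss://user:pass@example.databases.appdomain.cloud:32086")
	fmt.Println(host, user, tls) // example.databases.appdomain.cloud:32086 user true
}
```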

Cloud Redis Instance

Describe the bug
When trying to use a cloud install of Redis with my stream source, the stream pod will not start. I see errors in the logs and the pod crashes:

{"severity":"INFO","timestamp":"2021-01-25T17:26:08.423313416Z","caller":"logging/config.go:115","message":"Successfully created the logger.","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/logging/config.go","line":"115","function":"knative.dev/pkg/logging.newLoggerFromConfig"}}
{"severity":"INFO","timestamp":"2021-01-25T17:26:08.423422275Z","caller":"logging/config.go:116","message":"Logging level set to: info","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/logging/config.go","line":"116","function":"knative.dev/pkg/logging.newLoggerFromConfig"}}
{"severity":"INFO","timestamp":"2021-01-25T17:26:08.423486502Z","caller":"logging/config.go:78","message":"Fetch GitHub commit ID from kodata failed","error":"open /var/run/ko/HEAD: no such file or directory","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/logging/config.go","line":"78","function":"knative.dev/pkg/logging.enrichLoggerWithCommitID"}}
{"severity":"ERROR","timestamp":"2021-01-25T17:26:08.423816221Z","logger":"redis-stream-source","caller":"metrics/exporter.go:148","message":"Failed to get a valid metrics config","error":"metrics domain cannot be empty","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/metrics/exporter.go","line":"148","function":"knative.dev/pkg/metrics.UpdateExporter"},"stacktrace":"knative.dev/pkg/metrics.UpdateExporter\n\tknative.dev/[email protected]/metrics/exporter.go:148\nknative.dev/eventing/pkg/adapter/v2.MainWithInformers\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:131\nknative.dev/eventing/pkg/adapter/v2.MainWithEnv\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:105\nknative.dev/eventing/pkg/adapter/v2.MainWithContext\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:80\nknative.dev/eventing/pkg/adapter/v2.Main\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:76\nmain.main\n\tknative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26\nruntime.main\n\truntime/proc.go:203"}
{"severity":"ERROR","timestamp":"2021-01-25T17:26:08.423952234Z","logger":"redis-stream-source","caller":"v2/main.go:132","message":"failed to create the metrics exporter{error 26 0  metrics domain cannot be empty}","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/pkg/adapter/v2/main.go","line":"132","function":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers"},"stacktrace":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:132\nknative.dev/eventing/pkg/adapter/v2.MainWithEnv\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:105\nknative.dev/eventing/pkg/adapter/v2.MainWithContext\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:80\nknative.dev/eventing/pkg/adapter/v2.Main\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:76\nmain.main\n\tknative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26\nruntime.main\n\truntime/proc.go:203"}
{"severity":"WARNING","timestamp":"2021-01-25T17:26:08.424032536Z","logger":"redis-stream-source","caller":"v2/config.go:185","message":"Tracing configuration is invalid, using the no-op default{error 26 0  empty json tracing config}","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/pkg/adapter/v2/config.go","line":"185","function":"knative.dev/eventing/pkg/adapter/v2.(*EnvConfig).SetupTracing"}}
{"severity":"WARNING","timestamp":"2021-01-25T17:26:08.424068367Z","logger":"redis-stream-source","caller":"v2/config.go:178","message":"Sink timeout configuration is invalid, default to -1 (no timeout)","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/pkg/adapter/v2/config.go","line":"178","function":"knative.dev/eventing/pkg/adapter/v2.(*EnvConfig).GetSinktimeout"}}
{"severity":"INFO","timestamp":"2021-01-25T17:26:08.440873333Z","logger":"redis-stream-source","caller":"adapter/adapter.go:87","message":"Retrieving group info","stream":"mystream","group":"mygroup","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/eventing-redis/source/pkg/adapter/adapter.go","line":"87","function":"knative.dev/eventing-redis/source/pkg/adapter.(*Adapter).Start"}}
{"severity":"ERROR","timestamp":"2021-01-25T17:26:08.443150692Z","logger":"redis-stream-source","caller":"v2/main.go:199","message":"Start returned an error","error":"read tcp 172.30.55.31:34040->169.63.54.170:30285: read: connection reset by peer","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/pkg/adapter/v2/main.go","line":"199","function":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers"},"stacktrace":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:199\nknative.dev/eventing/pkg/adapter/v2.MainWithEnv\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:105\nknative.dev/eventing/pkg/adapter/v2.MainWithContext\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:80\nknative.dev/eventing/pkg/adapter/v2.Main\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:76\nmain.main\n\tknative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26\nruntime.main\n\truntime/proc.go:203"}

Expected behavior
I expected the pod to start.

To Reproduce
To reproduce exactly, install the async-component by following its instructions (https://github.com/knative-sandbox/async-component/), then point the address at a remote Redis:
address: "rediss://username:[email protected]:30285"

Knative release version

Additional context
Add any other context about the problem here such as proposed priority

When the stream is deleted while the redis stream source is running, the service should recreate the stream and the group

Problem
When the stream is deleted while the redis stream source is running, the service does not recreate the stream and the group, so it keeps logging the error:
"GROUP sync is missing and creates error : NOGROUP No such key 'my_stream' or consumer group 'a_group' in XREADGROUP with GROUP option"

To fix this, I would capture the error and send a Redis command to recreate the group, but I have not looked closely at the code.

Persona:
Event consumer (developer)
System Operator

Exit Criteria
No more error logs written like described above when the Redis stream is deleted

Time Estimate (optional):
1 day?

Additional context (optional)
Restarting the service fixes the problem, so there is a workaround for now.

Performance testing

/kind performance

From async-component:
We're looking at initially supporting about 1000 requests/second, and eventually hoping to slowly increase that over time. Processing time per request will obviously vary, but for now we can assume about 5 seconds per request.
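By Little's law, those two numbers already pin down the steady-state concurrency the component has to sustain:

```go
package main

import "fmt"

// inFlight applies Little's law: the average number of in-flight
// requests equals the arrival rate times the average processing time.
func inFlight(ratePerSecond, processingSeconds float64) float64 {
	return ratePerSecond * processingSeconds
}

func main() {
	// Targets from this issue: ~1000 req/s, ~5 s per request.
	fmt.Println(inFlight(1000, 5)) // 5000 concurrent requests on average
}
```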

Fix metrics config setup for source controller

{"severity":"ERROR","timestamp":"2021-01-27T17:31:31.7621829Z","logger":"redis-stream-source","caller":"metrics/exporter.go:148","message":"Failed to get a valid metrics config","error":"metrics domain cannot be empty","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/metrics/exporter.go","line":"148","function":"knative.dev/pkg/metrics.UpdateExporter"},"stacktrace":"knative.dev/pkg/metrics.UpdateExporter\n\tknative.dev/[email protected]/metrics/exporter.go:148\nknative.dev/eventing/pkg/adapter/v2.MainWithInformers\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:131\nknative.dev/eventing/pkg/adapter/v2.MainWithEnv\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:105\nknative.dev/eventing/pkg/adapter/v2.MainWithContext\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:80\nknative.dev/eventing/pkg/adapter/v2.Main\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:76\nmain.main\n\tknative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26\nruntime.main\n\truntime/proc.go:204"}
{"severity":"ERROR","timestamp":"2021-01-27T17:31:31.7622881Z","logger":"redis-stream-source","caller":"v2/main.go:132","message":"failed to create the metrics exporter{error 26 0  metrics domain cannot be empty}","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"knative.dev/[email protected]/pkg/adapter/v2/main.go","line":"132","function":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers"},"stacktrace":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:132\nknative.dev/eventing/pkg/adapter/v2.MainWithEnv\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:105\nknative.dev/eventing/pkg/adapter/v2.MainWithContext\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:80\nknative.dev/eventing/pkg/adapter/v2.Main\n\tknative.dev/[email protected]/pkg/adapter/v2/main.go:76\nmain.main\n\tknative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26\nruntime.main\n\truntime/proc.go:204"}

Install the source and an instance of the Redis stream source. These error messages appear in the logs.
Reference: #68 (comment)

Socket error occurs when restarting Redis

Describe the bug
In a k8s cluster, when Redis is restarted, eventing-redis uses an old socket to connect and this error appears in the logs:

{"severity":"ERROR","timestamp":"2022-10-03T14:04:38.422117731Z","logger":"redis-stream-source","caller":"adapter/adapter.go:185","message":"Cannot read from stream","commit":"6becb00-dirty","stream":"vms:changes","error":"write tcp 10.6.4.171:58374->172.20.208.153:6379: use of closed network connection","stacktrace":"knative.dev/eventing-redis/source/pkg/adapter.(*Adapter).processEntry\n\tknative.dev/eventing-redis/source/pkg/adapter/adapter.go:185\nknative.dev/eventing-redis/source/pkg/adapter.(*Adapter).Start.func1\n\tknative.dev/eventing-redis/source/pkg/adapter/adapter.go:156"}

Expected behavior
Eventing-redis should reopen a socket when the error is "use of closed network connection".
Or maybe we should use a connection pool to handle those errors.

To Reproduce
Restart Redis in the k8s cluster

Be able to pull receiver adapter image from a private docker repository

Problem
The receive adapter image is in our private repository. If I understand correctly, the controller creates a service account, and this service account tries to pull the receive adapter image.
But it can't download it from a private Docker registry because it lacks imagePullSecrets, so we would like a configuration parameter for this.
For now we must add the imagePullSecrets after the service account is created.
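The current workaround can be expressed as a patch to the controller-created ServiceAccount; all names below are hypothetical placeholders for the ones the controller actually generates:

```yaml
# Sketch of the workaround: add the registry pull secret to the
# ServiceAccount after the controller creates it, so the kubelet can
# pull the receive adapter image from a private registry.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-source-sa        # hypothetical: the controller-created SA
  namespace: knative-sources   # hypothetical namespace
imagePullSecrets:
  - name: my-registry-credentials
```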

Persona:
System Operator

Exit Criteria
Allow pulling the receive adapter image from a private repository.

TLS Cert should be a kubernetes secret

Describe the bug
TLS cert should be in a kube secret

Expected behavior
The TLS cert is currently in a ConfigMap; it should be a Kubernetes Secret.
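A sketch of what the Secret could look like; the name, namespace, and key are assumptions, not the component's current configuration:

```yaml
# Hypothetical Secret holding the Redis CA certificate, replacing the
# current ConfigMap so the cert material is not stored in plain config.
apiVersion: v1
kind: Secret
metadata:
  name: redis-tls
  namespace: knative-sources
type: Opaque
stringData:
  TLS_CERT: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```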

To Reproduce
Steps to reproduce the behavior.

Knative release version

Additional context
Add any other context about the problem here such as proposed priority

Add support for delayed message delivery

Problem
Delayed message processing is a common requirement in many business processes. Sometimes you want to delay the delivery of messages for a certain time so that subscribers don't see them immediately.

Is it possible to expose this with XRANGE mystream unix_time_start unix_time_end?

What needs to be changed?

Error "unexpected group reply size (12)" when the process restart

Describe the bug
On Redis 7.0, when the group has already been created, there is a systematic error:

{"severity":"EMERGENCY","timestamp":"2022-10-03T14:25:55.175745803Z","logger":"redis-stream-source","caller":"v2/main.go:260","message":"Start returned an error","commit":"6becb00-dirty",
"error":"unexpected group reply size (12)","stacktrace":"knative.dev/eventing/pkg/adapter/v2.MainWithInformers
	knative.dev/[email protected]/pkg/adapter/v2/main.go:260
knative.dev/eventing/pkg/adapter/v2.MainWithEnv
	knative.dev/[email protected]/pkg/adapter/v2/main.go:158
knative.dev/eventing/pkg/adapter/v2.MainWithContext
	knative.dev/[email protected]/pkg/adapter/v2/main.go:133
knative.dev/eventing/pkg/adapter/v2.Main
	knative.dev/[email protected]/pkg/adapter/v2/main.go:129
main.main
	knative.dev/eventing-redis/source/cmd/receive_adapter/main.go:26
runtime.main
	runtime/proc.go:250"}

Expected behavior
Start should continue its initialization and not fail.

To Reproduce
Use Redis 7.0
Start the process eventing-redis
Populate the stream
Restart the process eventing-redis
=> The error occurs
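The size 12 lines up with Redis 7.0 adding the entries-read and lag fields to XINFO GROUPS replies, growing each group entry from 8 to 12 elements, which breaks any parser that assumes a fixed reply size. Parsing the reply as field-name/value pairs would tolerate such additions; a hypothetical sketch:

```go
package main

import "fmt"

// parseGroupReply reads an XINFO GROUPS entry as alternating
// field-name/value pairs instead of assuming a fixed element count,
// so new fields added by future Redis versions are simply carried
// along rather than causing a size-mismatch error.
func parseGroupReply(reply []interface{}) map[string]interface{} {
	fields := make(map[string]interface{}, len(reply)/2)
	for i := 0; i+1 < len(reply); i += 2 {
		if name, ok := reply[i].(string); ok {
			fields[name] = reply[i+1]
		}
	}
	return fields
}

func main() {
	// A Redis 7.0-style group entry (values abridged): 12 elements.
	reply := []interface{}{
		"name", "mygroup", "consumers", int64(3), "pending", int64(0),
		"last-delivered-id", "0-0", "entries-read", int64(0), "lag", int64(0),
	}
	fields := parseGroupReply(reply)
	fmt.Println(fields["name"], fields["consumers"]) // mygroup 3
}
```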

Watch config maps and redeploy: Number of consumers per adapter should be re-configurable & expired TLS certificates must be rotated

The controller should watch for the config-redis ConfigMap and re-configure the receive adapter with the new number of consumers, when modified.

The controller should also watch for the tls-secret Secret and re-create the Redis connection when a new TLS certificate is added since certs do expire and need rotation. (https://github.com/knative-sandbox/eventing-redis/blob/8dae13bfc4e1d5b107c83d6d2de0a7718936cbe4/source/pkg/reconciler/streamsource/controller.go#L91-L99)

Relevant PRs: #10
#47

cc: @lionelvillard @beemarie

Add Kind test verifying that the source comes ready and sends events

Overview

We recently found that, even though build and unit tests had been passing, the receive adapter panicked on startup when the Redis source was actually deployed. To address this, we should add a KinD test job that will:

  1. Install eventing and the redis source, along with redis
  2. Create a redis source
  3. Verify that the source becomes ready and delivers events

More info

These resources will likely be helpful while working on this:

Have tagged docker images for receive_adapter and controller

Problem
The latest Docker image below is not tagged with a v1.9.0 version, so it's difficult to automate pulling it.
https://console.cloud.google.com/gcr/images/knative-releases/global/knative.dev/eventing-redis/cmd/source/receive_adapter

Also, we could not find how to add our Docker registry secrets so that the receive_adapter image can be pulled from our private registry. It looks like something is possible with the service account name, but we haven't figured out how.

Persona:
System Operator
We have our own Docker registry, so we prefer to pull the Docker image from gcr and push it to our own registry; our CI then uses our own registry. It is easier to deal with tags in the form of versions than SHAs.

Exit Criteria
Working docker pull:
docker pull gcr.io/knative-releases/knative.dev/eventing-redis/cmd/source/receive_adapter:v1.9.0

Redis v6 expects to see a username, not setting it as blank.

Describe the bug
In previous versions of Redis, no username was expected. In v6 (the default from IBM Cloud at least; not sure about others), a username is required. If a username is not provided, I get an error in my stream: "wrongpass invalid username-password pair".
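Under Redis 6 ACLs, the legacy single-password AUTH corresponds to the user named "default", so one possible fix is to fall back to that name when no username is configured. A hypothetical sketch:

```go
package main

import "fmt"

// redisUsername fills in the ACL default user when none is configured.
// Redis 6 AUTH with a blank username fails with "WRONGPASS invalid
// username-password pair"; the pre-ACL single-password mode maps to
// the user named "default".
func redisUsername(configured string) string {
	if configured == "" {
		return "default"
	}
	return configured
}

func main() {
	fmt.Println(redisUsername(""))      // default
	fmt.Println(redisUsername("alice")) // alice
}
```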

Expected behavior
No error.

To Reproduce
Create a v6 Redis instance, try to create a RedisStreamSource, and see the error.

Knative release version
latest

Additional context
Add any other context about the problem here such as proposed priority

[KNEP] Multi-tenant RedisSource

Description
Introduce a multi-tenant RedisSource implementation capable of handling more than one source instance at a time, typically all source instances in a namespace or all source instances in a cluster.

TLS handshake timeout when using cloud Redis with large # of consumers

panic: TLS handshake timeout
goroutine 75 [running]:
knative.dev/eventing-redis/source/pkg/adapter.(*Adapter).newPool.func1(0x0, 0x0, 0x0, 0x0)
	knative.dev/eventing-redis/source/pkg/adapter/adapter.go:256 +0x418
knative.dev/eventing-redis/source/pkg/adapter.(*Adapter).Start.func1(0xc0004ec140, 0xc000187570, 0xc000433470, 0x1f7d720, 0xc000432030, 0xc000054027, 0x8, 0xc0003bc5d0, 0x12)
	knative.dev/eventing-redis/source/pkg/adapter/adapter.go:132 +0x7c
created by knative.dev/eventing-redis/source/pkg/adapter.(*Adapter).Start
	knative.dev/eventing-redis/source/pkg/adapter/adapter.go:129 +0xb25

To reproduce:

  1. Set config-redis to 5000 consumers
  2. Set the cert in config-tls
  3. Set the address in the redisstreamsource to a cloud instance
  4. Install everything; the logs sometimes show the panic above
