datalust / helm.datalust.co
Helm charts hosted on helm.datalust.co
License: Apache License 2.0
#7 shows that we could do a better job of surfacing, here, the releases that are published to helm.datalust.co, so that users can tie chart versions to the API versions used in this source.
Most changes to the chart just involve bumping the version of the Seq container, so users could reasonably pin an older version of the chart but keep their version of Seq itself more up-to-date.
Our OIDC endpoint uses a custom CA and issuer. Currently we have to use a custom Seq Docker image to add the certificates to the trust store, because Seq's OIDC backchannel handler fails SSL validation if the CA/issuer certificates are not present on the machine. It would be very nice to add the following feature:
certificates: [] # PEM-encoded array of certs
If this value is present, the chart would create a ConfigMap with the specified certs, mount the ConfigMap, and finally run update-ca-certificates on start.
Another solution would be to add extraEnvs, extraVolumes, and extraVolumeMounts, like the Bitnami charts do. The chart would also have to allow customizing the command and args in order to run update-ca-certificates.
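As a rough sketch, assuming hypothetical extraVolumes/extraVolumeMounts values were added to the chart, mounting a custom CA bundle might look like this (the ConfigMap name and mount path are made up for illustration):

```yaml
# Hypothetical values.yaml additions -- none of these keys exist in the chart today
extraVolumes:
  - name: custom-ca
    configMap:
      name: my-ca-certs            # ConfigMap holding PEM-encoded certificates (assumed)
extraVolumeMounts:
  - name: custom-ca
    mountPath: /usr/local/share/ca-certificates/custom
    readOnly: true
```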
It should be possible to provide the priorityClassName parameter, to configure pod priority and preemption: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
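For illustration, if the chart exposed such a value it might be set like this (the priorityClassName key is an assumption, and the named PriorityClass must already exist in the cluster):

```yaml
# Hypothetical values.yaml entry (not currently in the chart)
priorityClassName: high-priority   # name of an existing PriorityClass (example)
```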
It might be useful to add the option to label and annotate the different pods from values.yaml. That would enable service meshes like Consul to auto-inject sidecars through hooks.
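A sketch of what such values might look like (podAnnotations/podLabels are hypothetical keys for this chart; the Consul injector annotation is just one example of a consumer):

```yaml
# Hypothetical values.yaml entries
podAnnotations:
  consul.hashicorp.com/connect-inject: "true"   # example: Consul sidecar injection
podLabels:
  team: platform
```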
Version 2021.3 of Seq includes a dedicated health check URL. The probes should use that as the default, and users should also be able to customize the URL.
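Assuming the health endpoint is served at /health (an assumption here; check the Seq 2021.3 release notes for the exact path), a probe defaulting to it might look like:

```yaml
# Sketch of a probe pointed at a dedicated health endpoint; the /health path is assumed
livenessProbe:
  httpGet:
    path: /health
    port: 80
```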
I deploy the Helm chart without enabling the ingress option, as I have a Traefik instance and use IngressRoute CRDs. I create a route to forward to the Seq container on port 80, but am unable to connect.
In Traefik, the IngressRoute appears to correctly point to http://<seq-container-ip>:80.
How can I get this to work?
In the logs I see:
Seq listening on ["http://localhost/", "https://localhost/", "http://localhost:5341/", "https://localhost:45341/"]
...so it seems the container is only listening on localhost and not <seq-container-ip> - is that the problem? If so, how do I get it to listen on <seq-container-ip>?
I created this issue with reference to a discussion here: datalust/seqcli#188.
It should be possible to set SEQ_FIRSTRUN_ADMINUSERNAME and SEQ_FIRSTRUN_ADMINPASSWORDHASH for the Helm chart by passing appropriate values in values.yaml.
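As a sketch, assuming the chart grew a generic env list (a hypothetical key), the first-run credentials could be supplied with the password hash pulled from a Secret rather than stored in plain text:

```yaml
# Hypothetical values.yaml additions -- the `env` key is assumed, not in the chart today
env:
  - name: SEQ_FIRSTRUN_ADMINUSERNAME
    value: admin
  - name: SEQ_FIRSTRUN_ADMINPASSWORDHASH
    valueFrom:
      secretKeyRef:
        name: seq-admin           # pre-created Secret (assumed name)
        key: passwordHash
```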
We should expose a property that can set the canonical URI for deployments behind proxies. Seq currently has the SEQ_API_CANONICALURI environment variable for this; it'll just be a matter of exposing a property for it in the chart.
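One possible shape, assuming a new top-level value wired through to the environment variable in the deployment template (the value key name is an assumption):

```yaml
# Hypothetical values.yaml entry
canonicalUri: https://seq.example.com/

# ...and in templates/deployment.yaml, something like:
# - name: SEQ_API_CANONICALURI
#   value: {{ .Values.canonicalUri | quote }}
```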
There appears to be a missing left chomp (-) on line 31, {{ if .Values.ui.ingress.enabled }}, of https://github.com/datalust/helm.datalust.co/blob/main/charts/seq/templates/ingress.yaml
This leads to rendered manifests with a blank line after "rules".
Line 31 should read:
{{- if .Values.ui.ingress.enabled }}
I'm trying to get the first-use username/password hash set, but it's not working. I can see from the commit history that this was only added to the chart in mid-April; how can I see which release or chart version this update made it into?
I'm sharing mine, which finally works for ingestion.
The UI was working out of the box, but it was quite difficult to understand how to set up the ingestion URL/port, given that the nginx controller listens only on ports 80/443.
Serilog config
.WriteTo.Seq("https://in.seq.yourdomain.eu", apiKey: "xxxxxxx")
To apply:
helm repo add datalust https://helm.datalust.co
helm repo update
helm upgrade seq datalust/seq -n yournamespace -f seq.yourdomain.helm.yaml --install --dry-run
values.yaml
baseURI: https://seq.yourdomain.eu/
ingress:
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - hosts:
        - seq.yourdomain.eu
        - in.seq.yourdomain.eu
      secretName: seq
persistence:
  size: 2Gi
  existingClaim: seq-yourdomain
ui:
  ingress:
    enabled: true
    path: /
    hosts:
      - seq.yourdomain.eu
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
ingestion:
  ingress:
    # nginx controller only supports ports 80/443
    enabled: true
    path: /
    hosts:
      - in.seq.yourdomain.eu
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "TCP"
      nginx.ingress.kubernetes.io/proxy-buffering: "on"
      nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
      nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
We have a policy of using end-to-end TLS for all containers.
I am deploying Seq to Kubernetes (AKS cluster + Traefik ingress controller). When I switch the UI port to use 443 in the config.yaml like the following:
ui:
  containerPort: 443
  service:
    port: 443
The above fails the probes. When I looked at the Helm chart for Seq, I see the UI port is used in the probes, but the "scheme" value is not parameterized:
https://github.com/datalust/helm.datalust.co/blob/main/charts/seq/templates/deployment.yaml#L62
I had to manually edit the Kubernetes Seq deployment to set the following in the liveness, readiness, and startup probes:
scheme: HTTPS
After making the above manual tweak, my pod comes up fine and the probes pass. I was unable to set the scheme using Helm and had to tweak the deployment by hand.
My observation:
Anyone who wants to run end-to-end TLS, including on the UI port, can't, because all probes default to the HTTP scheme, which does not work against a TLS endpoint. Can you please have a look at the scheme parameter in the Helm chart and advise whether I have missed something, or whether the scheme in the probes needs to be parameterized in the chart?
Originally posted by @Atif-Syed-1 in datalust/seq-tickets#2000
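A sketch of how the probe scheme could be parameterized in templates/deployment.yaml (the ui.probeScheme value is hypothetical, not a current chart key):

```yaml
# Hypothetical template fragment -- `ui.probeScheme` does not exist in the chart today
livenessProbe:
  httpGet:
    path: /
    port: {{ .Values.ui.containerPort }}
    scheme: {{ .Values.ui.probeScheme | default "HTTP" }}
```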
I just wanted to upgrade via the Helm chart and found that the Helm repo at helm.datalust.co does not contain the latest version, 2021.4.6986, even though the GitHub release says "Published to helm.datalust.co." Am I missing something?
I've been trying to set up Seq in our Kubernetes cluster with GELF input coming from Fluent Bit, based on this documentation. I have deployed Seq using the Helm chart as described in the documentation. While setting up Fluent Bit I saw that our cluster isn't using the default DNS name of *.cluster.local but a different name instead, so I had to configure Fluent Bit slightly differently. This could be because our cluster has been rolled out using Rancher.
After fixing that, though, I still didn't see any logs. So I had a look at the logs of the seq-gelf container inside my Seq pod and saw the following error message:
Failed to send an event batch
System.Net.Http.HttpRequestException: Name or service not known
---> System.Net.Sockets.SocketException (0xFFFDFFFF): Name or service not known
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean allowHttp2, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttpConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
at SeqCli.Ingestion.LogShipper.SendBatchAsync(SeqConnection connection, String apiKey, IReadOnlyCollection`1 batch, Boolean logSendFailures) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 155
at SeqCli.Ingestion.LogShipper.ShipEvents(SeqConnection connection, String apiKey, ILogEventReader reader, InvalidDataHandling invalidDataHandling, SendFailureHandling sendFailureHandling, Func`2 filter) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 55
After inspecting the pod configuration, I noticed that the seq-gelf container had an environment variable SEQ_ADDRESS set to an address that also ended with *.cluster.local, which caused the above error message.
Unfortunately there isn't a way to override this from the Helm chart; I had to manually patch the deployment to change the value of that environment variable. Interestingly, I don't think it needs to be an FQDN at all: since the seq-gelf container runs within the same pod, it can just use localhost and it will work. I've made that change manually on my cluster and now I'm seeing logs coming into Seq.
Hi, I am using Seq on Ubuntu 22.04 LTS. I also use an app to send information to a Telegram bot. Periodically I get an exception:
Telegram.Bot.Exceptions.RequestException: Request timed out
---> System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.
---> System.TimeoutException: A task was canceled.
---> System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.HttpConnectionWaiter`1.WaitForConnectionAsync(Boolean async, CancellationToken requestCancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at System.Net.Http.HttpClient.HandleFailure(Exception e, Boolean telemetryStarted, HttpResponseMessage response, CancellationTokenSource cts, CancellationToken cancellationToken, CancellationTokenSource pendingRequestsCts)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at Telegram.Bot.TelegramBotClient.<MakeRequestAsync>g__SendRequestAsync|22_0[TResponse](HttpClient httpClient, HttpRequestMessage httpRequest, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at Telegram.Bot.TelegramBotClient.<MakeRequestAsync>g__SendRequestAsync|22_0[TResponse](HttpClient httpClient, HttpRequestMessage httpRequest, CancellationToken cancellationToken)
at Telegram.Bot.TelegramBotClient.MakeRequestAsync[TResponse](IRequest`1 request, CancellationToken cancellationToken)
at Telegram.Bot.TelegramBotClientExtensions.SendTextMessageAsync(ITelegramBotClient botClient, ChatId chatId, String text, Nullable`1 parseMode, IEnumerable`1 entities, Nullable`1 disableWebPagePreview, Nullable`1 disableNotification, Nullable`1 replyToMessageId, Nullable`1 allowSendingWithoutReply, IReplyMarkup replyMarkup, CancellationToken cancellationToken)
at Seq.App.Telegram.TelegramReactor.OnAsync(Event`1 evt) in C:\projects\seq-app-telegram\src\Seq.App.Telegram\TelegramReactor.cs:line 83
at SeqCli.Apps.Hosting.AppContainer.SendTypedEventAsync(String clef) in C:\projects\seqcli\src\SeqCli\Apps\Hosting\AppContainer.cs:line 113
Are there any ideas why this exception happens? It hangs my server and I need to reboot it every time...
In our deployments we use a proxy container registry which requires the use of credentials.
Can you please add support for imagePullSecrets in the values?
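The requested value would presumably follow the standard Kubernetes pod-spec shape; a sketch (the secret name is made up):

```yaml
# Hypothetical values.yaml entry, mirroring the pod spec's imagePullSecrets field
imagePullSecrets:
  - name: my-registry-credentials   # pre-created docker-registry Secret (assumed)
```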
Please consider adding support for spec.ingressClassName when creating Ingress resources. This field has replaced the deprecated ingress-class annotations. There is currently no way to specify it on the generated resources, so they must be managed manually. Please add first-class support for this feature.
Additionally, I'd ask that you consider splitting the UI/ingestion ingresses into separate Ingress resources, so that a different value for ingressClassName may be specified for each of them.
Why I need the latter: we run two different ingress controllers in our environment; one is for public traffic and the other is for internal/private network traffic. We expose the Seq UI publicly via the first controller (mainly to support AzureAD) and run the auxiliary ingestion endpoint over the private network only.
I must currently manage the Seq ingresses myself, since I both need the ingressClassName feature and need it to differ between the two ingresses.
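A sketch of what per-ingress class values could look like (the className keys under each ingress are hypothetical for this chart; the controller names are examples):

```yaml
# Hypothetical values.yaml entries
ui:
  ingress:
    enabled: true
    className: public-nginx       # public-facing controller (example name)
ingestion:
  ingress:
    enabled: true
    className: internal-nginx     # private-network controller (example name)
```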
Similar to #9, please add support for adding annotations and labels to the generated ServiceAccount.
We are in the process of adopting Azure Workload Identity; in particular, we are using it to leverage managed-identity access to the Seq metastore. A requirement for this feature is a set of annotations and labels on the ServiceAccount.
For the moment we are using Helm's Kustomize post-renderer to add the needed annotations and labels to the ServiceAccount, but it's a bit clunky. First-class support would be appreciated and likely useful for other scenarios.
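For Azure Workload Identity specifically, the requested values might look like this (the serviceAccount.annotations/labels keys are hypothetical for this chart; the azure.workload.identity names follow the Workload Identity conventions, and the client ID is a placeholder):

```yaml
# Hypothetical values.yaml entries
serviceAccount:
  annotations:
    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000  # placeholder
  labels:
    azure.workload.identity/use: "true"
```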
Installing the chart with ingress set to true in the config file, I get this error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1"
My versions:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"c86ad89b8715ed17fd55b87cbb2888ccc6fa9878", GitTreeState:"clean", BuildDate:"2020-09-25T01:53:27Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
I believe networking.k8s.io/v1 is only available on Kubernetes 1.19+ and not on 1.18. Can this be fixed?
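A common way charts handle this is to select the Ingress apiVersion from cluster capabilities; a sketch of what the ingress template could do (this is not what the chart does today):

```yaml
# Capability-gated apiVersion selection in a Helm ingress template
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
```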
How can I set up my config.yaml to enable SSL on the ingestion port?
In our workload we would like to use the metastore, which is supported through Seq's environment variables; however, it does not seem to be exposed through this Helm chart. Specifically, we're using SEQ_METASTORE_POSTGRES_CONNECTIONSTRING.
Version: 2021.3.6800
Is it possible that this option can be exposed in the future?
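If the chart exposed a generic env list (a hypothetical key, as requested in other issues here), the connection string could be supplied from a Secret like this:

```yaml
# Hypothetical values.yaml additions -- `env` is assumed, not a current chart value
env:
  - name: SEQ_METASTORE_POSTGRES_CONNECTIONSTRING
    valueFrom:
      secretKeyRef:
        name: seq-metastore       # pre-created Secret (assumed name)
        key: connectionString
```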