Comments (11)

tgruenert avatar tgruenert commented on July 20, 2024

I've run into the same issue just now. It would be nice to make the required health level configurable via values; "green" could be the default to stay backwards compatible:

readinessProbe.level: green

from helm-charts.

rwittrick avatar rwittrick commented on July 20, 2024

Doing that would add level to the configuration of the readinessProbe itself and require awkward logic in the template to filter it out. A couple of suggestions for the values file are:

  1. Put the value at the top level outside of the readinessProbe
minClusterHealthStatus: green
  2. Move the readinessProbe configuration fields under config, and the status field under readinessProbe
readinessProbe:
  config:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 3
    timeoutSeconds: 5
  minClusterHealthStatus: green
  3. Similar to above, but put the status value more explicitly in the command key to differentiate it from the Kubernetes config
readinessProbe:
  config:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 3
    timeoutSeconds: 5
  command:
    minClusterHealthStatus: green
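
However the value is laid out, the template would ultimately feed it into the probe's health check. A rough sketch of what the rendered probe command could do (the environment variable name and default here are assumptions for illustration, not the chart's actual code):

```shell
# Hypothetical rendered probe snippet: the chart would substitute the
# configured status (e.g. from minClusterHealthStatus) for the hardcoded
# "green" in the Cluster Health URL.
MIN_STATUS="${MIN_CLUSTER_HEALTH_STATUS:-green}"   # default keeps old behavior
HEALTH_URL="http://localhost:9200/_cluster/health?wait_for_status=${MIN_STATUS}&timeout=3s"
echo "probe would call: ${HEALTH_URL}"
# curl -s -f "${HEALTH_URL}" >/dev/null   # needs a live cluster, so not run here
```

`wait_for_status` accepts `green`, `yellow`, or `red`, so a user who is happy to become ready on a yellow cluster would only change the one value.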

I'll work on a PR using option 3 for now.

rwittrick avatar rwittrick commented on July 20, 2024

Actually option 3 above would be a breaking change for anyone that has already implemented values for the readinessProbe. The safest option is 1.

tgruenert avatar tgruenert commented on July 20, 2024

Thank you for your feedback. After thinking about it, my first suggestion was not the best one.
Because I needed a quick fix for my problem, I also forked the project and made a change similar to your option 3.
It would be nice to see your PR for this topic. Thank you!

rwittrick avatar rwittrick commented on July 20, 2024

@tgruenert - see #62 :)

jordansissel avatar jordansissel commented on July 20, 2024

the cluster health status has a hardcoded value of green

My reading of the readinessProbe is that it checks for green once and, once green, only checks the node's root path (GET /) after that. It doesn't require the cluster to always be green.
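
That two-phase behavior can be sketched as follows (a simplified paraphrase, not the chart's exact script; the marker-file path and the stubbed `http` helper are illustrative):

```shell
START_FILE="/tmp/.es_start_file"   # marker: this pod has reached green once

http() {
  # Stand-in for: curl -s -f "http://localhost:9200$1" — stubbed here so
  # the sketch is self-contained and runnable without a cluster.
  echo "GET $1"
}

if [ -f "$START_FILE" ]; then
  # Already joined a green cluster once: only check the local root path.
  http "/"
else
  # First startup: wait for cluster-wide green, then remember it.
  http "/_cluster/health?wait_for_status=green&timeout=3s" && touch "$START_FILE"
fi
```

So the cluster-wide check only gates the first transition to ready; later probes are purely local.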

I know it doesn't require green all the time because my cluster goes yellow (normal) during maintenance and none of my Elasticsearch nodes fail their readinessProbe.

rwittrick avatar rwittrick commented on July 20, 2024

This issue was raised related to the coordinator node issue and this comment - #58 (comment).

I found this issue because our data nodes restarted and some shards were left unassigned. In the interim, our coordinator nodes restarted and, because the cluster was not green, their readiness probes failed. This meant Kibana, which was pointing at the coordinator nodes, could not connect to Elasticsearch, and we could not do index management through the UI to get the cluster green.

In conjunction with defining how coordinator nodes will work, this is to allow those types of nodes (i.e. non-mutating) to still be accessible and become ready in this scenario.

jordansissel avatar jordansissel commented on July 20, 2024

Ahh. Makes sense.

I think the broader problem is that the readinessProbe currently evaluates a cluster-wide value (cluster state color) when readinessProbes should probably focus on individual pod health, not the health of the whole Elasticsearch cluster.

A red cluster state doesn't imply the cluster can't serve traffic, but that's what the readinessProbe is inferring today. Let's think about a broader per-node health check that doesn't look at the cluster health?

rwittrick avatar rwittrick commented on July 20, 2024

I think the broader problem is that the readinessProbe currently evaluates a cluster-wide value

agree

readinessProbes should probably focus on individual pod health

There is no real Node Health API for individual nodes, and I think it's still valid to check that the instance is part of the cluster before it's deemed ready. The closest I can think of is the Node Stats API, but that always returns success (provided the query itself is valid), and there are no "wait_for" parameters, so using it would involve a further step of interpreting the JSON response.
e.g.

curl -s elasticsearch-master:9200/_nodes/$(hostname -i),master:false,data:false,ingest:false/stats/http | grep -q '"successful":1'

And with that, to know the node is part of a cluster, it's probably better to query against a service rather than localhost.
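
To make the interpretation step concrete (the response body is an inlined sample of the `_nodes` envelope the Node Stats API returns; in practice it would come from the curl call above):

```shell
# Every Node Stats response carries a "_nodes" envelope counting how many
# nodes matched the filter and answered; a sample body is inlined here so
# the sketch runs without a cluster.
RESPONSE='{"_nodes":{"total":1,"successful":1,"failed":0},"cluster_name":"es"}'

# The API returns HTTP 200 even if no node matched the filter, so readiness
# must be decided from the body, not the status code.
if echo "$RESPONSE" | grep -q '"successful":1'; then
  READY="yes"
else
  READY="no"
fi
echo "ready=$READY"
```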

FYI, the PR for the original issue that I raised has been merged. Once this discussion ends, I am happy to look at this again.

Crazybus avatar Crazybus commented on July 20, 2024

I think the broader problem is that the readinessProbe currently evaluates a cluster-wide value (cluster state color) when readinessProbes should probably focus on individual pod health, not the health of the whole Elasticsearch cluster.

This would be great, but unfortunately statefulsets also use the readinessProbe to determine whether or not the next node in the statefulset can be upgraded. Using a purely node-level health check here would mean that rolling upgrades of Elasticsearch or Kubernetes would take down the Elasticsearch cluster. If anyone has a better idea of how to accomplish this in a statefulset without using a "cluster aware on startup" readinessProbe, I would love to hear it!

I found this issue because our data nodes restarted and some shards were left unassigned. In the interim, our coordinator nodes restarted

Just to clarify what happened here. Did all of these nodes get restarted at the same time? Or was this a controlled rolling upgrade scenario where Kubernetes was doing a planned rolling restart?

If all nodes do get restarted at exactly the same time it is indeed a shame that the readinessProbe will mean that no traffic is served until the cluster is green. So having coordinating nodes that always serve traffic does make sense in that situation.

Crazybus avatar Crazybus commented on July 20, 2024

Closing this as it was fixed by @rwittrick in #62. Thank you so much for opening and fixing the issue :)
