cetic / helm-nifi

Helm Chart for Apache Nifi

License: Apache License 2.0

Mustache 21.34% Shell 34.73% JavaScript 43.94%
kubernetes helm charts nifi

helm-nifi's Introduction

Helm Chart for Apache NiFi

Maintainers Wanted

This project is not maintained anymore.

If you are interested in maintaining a fork of this project, please chime in on the dedicated issue.

Introduction

This Helm chart installs Apache NiFi 1.23.2 in a Kubernetes cluster.

Prerequisites

  • Kubernetes cluster 1.10+
  • Helm 3.0.0+
  • Persistent Volumes (PV) provisioner support in the underlying infrastructure.

Installation

Add Helm repository

helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
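
To check that the repository was added and to see which chart versions it serves, you can optionally run:

helm search repo cetic/nifi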

Configure the chart

The following items can be set via the --set flag during installation or configured by editing the values.yaml file directly (you need to download the chart first, as shown below).
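
For example, a quick way to get an editable copy of values.yaml is to pull and unpack the chart (it unpacks into a nifi directory), then install from the local copy:

helm pull cetic/nifi --untar
# edit ./nifi/values.yaml as needed, then install from the local copy
helm install my-release ./nifi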

Configure how to expose nifi service

  • Ingress: The ingress controller must be installed in the Kubernetes cluster.
  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
  • NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). You’ll be able to contact the NodePort service, from outside the cluster, by requesting NodeIP:NodePort.
  • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer.
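
For example, to expose the UI through a cloud load balancer instead of the default NodePort, a minimal sketch using the service.type parameter from the configuration table below would be:

helm install my-release cetic/nifi --set service.type=LoadBalancer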

Configure how to persist data

  • Disabled (default): The data does not survive the termination of a pod.
  • Persistent Volume Claim: Enable persistence so that data survives termination of the pod. There is the choice of using one large persistent volume (using subPath) or seven separate persistent volumes for config, data, logs, repos, etc. A default StorageClass is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in the persistence.storageClass setting.
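
As an illustration, enabling persistence with a single subPath-based volume and an explicit storage class could look like the following sketch (replace standard with a StorageClass that actually exists in your cluster):

helm install my-release cetic/nifi \
  --set persistence.enabled=true \
  --set persistence.storageClass=standard \
  --set persistence.subPath.enabled=true \
  --set persistence.subPath.size=36Gi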

Configure authentication

  • By default, single-user authentication is enabled. You can optionally enable LDAP or OIDC to provide external authentication. See the configuration section or the doc folder for more details.
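
For instance, keeping the default single-user authentication but overriding its credentials could look like this (parameter names are taken from the configuration table below; the username and password values are placeholders):

helm install my-release cetic/nifi \
  --set auth.singleUser.username=admin \
  --set auth.singleUser.password='ChangeMeNow12Chars'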

Use custom processors

To add custom processors, the nifi section of the values.yaml file should contain the following options, where CUSTOM_LIB_FOLDER should be replaced by the path where the libraries are located:

  extraVolumeMounts:
    - name: mycustomlibs
      mountPath: /opt/configuration_resources/custom_lib
  extraVolumes: # this will create the volume from the directory
    - name: mycustomlibs
      hostPath:
        path: "CUSTOM_LIB_FOLDER"
  properties:
    customLibPath: "/opt/configuration_resources/custom_lib"

Configure prometheus monitoring

  • Monitoring first needs to be enabled by setting metrics.prometheus.enabled to true. To have NiFi actually publish Prometheus metrics, a Reporting Task must also be created: log in to the NiFi UI, open the hamburger menu in the top right corner and click Controller Settings --> Reporting Tasks, then use the + icon to add a task. Click Reporting in the word cloud on the left, select PrometheusReportingTask, set Send JVM metrics to true, and click the play button to enable the task.

If you plan to use Grafana for the visualization of the metrics data the following dashboard is compatible with the exposed metrics.
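
To enable the metrics endpoint (and, optionally, a Prometheus Operator ServiceMonitor) at install time, a sketch along these lines should work; the PrometheusReportingTask itself still has to be created in the NiFi UI as described above:

helm install my-release cetic/nifi \
  --set metrics.prometheus.enabled=true \
  --set metrics.prometheus.port=9092 \
  --set metrics.prometheus.serviceMonitor.enabled=true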

Install the chart

Install the nifi helm chart with a release name my-release:

helm install my-release cetic/nifi

Install from local clone

You will find how to perform an installation from a local clone on this page.

Uninstallation

To uninstall/delete the my-release deployment:

helm uninstall my-release

Configuration

The following table lists the configurable parameters of the nifi chart and the default values.

Parameter Description Default
ReplicaCount
replicaCount Number of nifi nodes 1
Image
image.repository nifi Image name apache/nifi
image.tag nifi Image tag 1.23.2
image.pullPolicy nifi Image pull policy IfNotPresent
image.pullSecret nifi Image pull secret nil
SecurityContext
securityContext.runAsUser nifi Docker User 1000
securityContext.fsGroup nifi Docker Group 1000
sts
sts.useHostNetwork If true, use the host's network nil
sts.serviceAccount.create If true, a service account will be created and used by the statefulset false
sts.serviceAccount.name When set, the set name will be used as the service account name. If a value is not provided a name will be generated based on Chart options nil
sts.serviceAccount.annotations Service account annotations {}
sts.podManagementPolicy Pod management policy Parallel
sts.AntiAffinity Affinity for pod assignment soft
sts.pod.annotations Pod template annotations security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
sts.hostAliases Add entries to Pod /etc/hosts []
sts.startupProbe.enabled enable Startup Probe on Nifi server container false
sts.startupProbe.failureThreshold sets Startup Probe failureThreshold field value 60
sts.startupProbe.periodSeconds sets Startup Probe periodSeconds field value 10
secrets
secrets Pass any secrets to the nifi pods. The secret can also be mounted to a specific path if required. nil
configmaps
configmaps Pass any configmaps to the nifi pods. The configmap can also be mounted to a specific path if required. nil
nifi properties
properties.algorithm Encryption method NIFI_PBKDF2_AES_GCM_256
properties.sensitiveKey Encryption password (at least 12 characters) changeMechangeMe
properties.sensitiveKeySetFile Update Sensitive Properties Key if this file does not exist, and then create it. nil
properties.sensitiveKeyPrior Prior sensitiveKey when updating via sensitiveKeySetFile mechanism nil
properties.externalSecure externalSecure for when inbound SSL false
properties.isNode cluster node properties (only configure for cluster nodes) false
properties.httpPort web properties HTTP port 8080
properties.httpsPort web properties HTTPS port null
properties.clusterPort cluster node port 6007
properties.clusterNodeConnectionTimeout cluster node connection timeout 5 sec
properties.clusterNodeReadTimeout cluster node read timeout 5 sec
properties.zookeeperConnectTimeout zookeeper connect timeout 3 secs
properties.zookeeperSessionTimeout zookeeper session timeout 3 secs
properties.archiveMaxRetentionPeriod nifi content repository archive max retention period 3 days
properties.archiveMaxUsagePercentage nifi content repository archive max usage 85%
properties.provenanceStorage nifi provenance repository max storage size 8 GB
properties.provenanceMaxStorageTime nifi provenance repository max storage time 10 days
properties.flowArchiveMaxTime nifi flow archive max time 30 days
properties.flowArchiveMaxStorage nifi flow archive max storage 500 MB
properties.siteToSite.secure Site to Site properties Secure mode false
properties.siteToSite.port Site to Site properties Secure port 10000
properties.safetyValve Map of explicit 'property: value' pairs that overwrite other configuration nil
properties.customLibPath Path of the custom libraries folder nil
properties.webProxyHost Proxy to access NiFi through the cluster IP address Port:30236
Authentication
Single-user authentication Automatically disabled if Client Certificate, OIDC, or LDAP enabled
auth.admin Default admin identity. It will overwrite the LDAP Bind DN for this purpose when both are filled CN=admin, OU=NIFI
auth.singleUser.username Single user identity username
auth.singleUser.password Single user password changemechangeme
Client Certificate authentication
auth.clientAuth.enabled Enable User auth via Client Certificates false
Ldap authentication
auth.ldap.admin Default admin identity and LDAP Bind DN
auth.ldap.enabled Enable User auth via ldap false
auth.ldap.host ldap hostname ldap://<hostname>:<port>
auth.ldap.searchBase ldap searchBase CN=Users,DC=example,DC=com
auth.ldap.searchFilter ldap searchFilter CN=john
auth.ldap.userSearchScope ldap userSearchScope ONE_LEVEL
auth.ldap.groupSearchScope ldap groupSearchScope ONE_LEVEL
Oidc authentication
auth.oidc.enabled Enable User auth via oidc false
auth.oidc.discoveryUrl oidc discover url https://<provider>/.well-known/openid-configuration
auth.oidc.clientId oidc clientId nil
auth.oidc.clientSecret oidc clientSecret nil
auth.oidc.claimIdentifyingUser oidc claimIdentifyingUser email
auth.oidc.preferredJwsAlgorithm The preferred algorithm for validating identity tokens. If this value is blank, it will default to RS256 which is required to be supported by the OpenID Connect Provider according to the specification. If this value is HS256, HS384, or HS512, NiFi will attempt to validate HMAC protected tokens using the specified client secret. If this value is none, NiFi will attempt to validate unsecured/plain tokens. nil
auth.oidc.admin Default OIDC admin identity [email protected]
Note that OIDC authentication to a multi-NiFi-node cluster requires Ingress sticky sessions; see the linked background for details.
postStart
postStart Include additional libraries in the Nifi containers by using the postStart handler nil
Headless Service
headless.type Type of the headless service for nifi ClusterIP
headless.annotations Headless Service annotations service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
UI Service
service.type Type of the UI service for nifi NodePort
service.httpPort Port to expose service 8080
service.httpsPort Port to expose service in tls 443
service.annotations Service annotations {}
service.loadBalancerIP LoadBalancerIP if service type is LoadBalancer nil
service.loadBalancerSourceRanges Addresses that are allowed when the service type is LoadBalancer []
service.processors.enabled Enables additional port/ports to nifi service for internal processors false
service.processors.ports Specify "name/port/targetPort/nodePort" for processors sockets []
ContainerPorts
containerPorts Additional containerPorts for the nifi-container. Example is given in values.yaml []
Ingress
ingress.enabled Enables Ingress false
ingress.className Ingress controller Class nginx
ingress.annotations Ingress annotations {}
ingress.path Path to access frontend (See issue #22) /
ingress.hosts Ingress hosts []
ingress.tls Ingress TLS configuration []
Persistence
persistence.enabled Use persistent volume to store data false
persistence.storageClass Storage class name of PVCs (use the default type if unset) nil
persistence.accessMode ReadWriteOnce or ReadOnly [ReadWriteOnce]
persistence.subPath.enabled Use only one persistent volume with subPath instead of seven separate persistent volumes false
persistence.subPath.name Name of the one persistent volume claim when using subPath data
persistence.subPath.size Size of the one persistent volume claim when using subPath 36Gi
persistence.configStorage.size Size of persistent volume claim 100Mi
persistence.authconfStorage.size Size of persistent volume claim 100Mi
persistence.dataStorage.size Size of persistent volume claim 1Gi
persistence.flowfileRepoStorage.size Size of persistent volume claim 10Gi
persistence.contentRepoStorage.size Size of persistent volume claim 10Gi
persistence.provenanceRepoStorage.size Size of persistent volume claim 10Gi
persistence.logStorage.size Size of persistent volume claim 5Gi
jvmMemory
jvmMemory bootstrap jvm size 2g
SideCar
sidecar.image Separate image for tailing each log separately and checking zookeeper connectivity busybox
sidecar.tag Image tag 1.32.0
sidecar.imagePullPolicy Image imagePullPolicy IfNotPresent
Resources
resources Pod resource requests and limits for the nifi container {}
logResources
logresources Pod resource requests and limits for the log sidecar containers {}
affinity
affinity Pod affinity scheduling rules {}
nodeSelector
nodeSelector Node labels for pod assignment {}
terminationGracePeriodSeconds
terminationGracePeriodSeconds Number of seconds the pod needs to terminate gracefully. For clean scale down of the nifi-cluster the default is set to 60, opposed to k8s-default 30. 60
tolerations
tolerations Tolerations for pod assignment []
initContainers
initContainers Container definition that will be added to the pod as initContainers []
extraVolumes
extraVolumes Additional Volumes available within the pod (see spec for format) []
extraVolumeMounts
extraVolumeMounts VolumeMounts for the nifi-server container (see spec for details) []
env
env Additional environment variables for the nifi-container (see spec for details) []
envFrom Additional environment variables for the nifi-container from config-maps or secrets (see spec for details) []
extraOptions
extraOptions Additional bootstrap.conf properties (see properties for details) []
extraContainers
extraContainers Additional container-specifications that should run within the pod (see spec for details) []
extraLabels
extraLabels Additional labels for the nifi pod nil
openshift
openshift.scc.enabled If true, an OpenShift security context will be created, permitting the statefulset to run as AnyUID false
openshift.route.enabled If true, an OpenShift route will be created. This option cannot be used together with Ingress, as a route object replaces the Ingress. The property properties.externalSecure will configure the route in edge termination mode; the default is passthrough. The property properties.httpsPort has to be set if the cluster is intended to work with SSL termination false
openshift.route.host The hostname intended to be used in order to access NiFi web interface nil
openshift.route.path Path to access frontend, works the same way as the ingress path option nil
zookeeper
zookeeper.enabled If true, deploy Zookeeper true
zookeeper.url If the Zookeeper Chart is disabled a URL and port are required to connect nil
zookeeper.port If the Zookeeper Chart is disabled a URL and port are required to connect 2181
registry
registry.enabled If true, deploy Nifi Registry false
registry.url If the Nifi Registry Chart is disabled a URL and port are required to connect nil
registry.port If the Nifi Registry Chart is disabled a URL and port are required to connect 80
ca
ca.enabled If true, deploy Nifi Toolkit as CA false
ca.server CA server dns name nil
ca.port CA server port number 9090
ca.token The token to use to prevent MITM 80
ca.admin.cn CN for admin certificate admin
ca.serviceAccount.create If true, a service account will be created and used by the deployment false
ca.serviceAccount.name When set, the set name will be used as the service account name. If a value is not provided a name will be generated based on Chart options nil
ca.openshift.scc.enabled If true, an openshift security context will be created permitting to run the deployment as AnyUID false
certManager
certManager.enabled If true, use cert-manager to create and rotate intra-NiFi-cluster TLS keys (note that cert-manager is a Kubernetes cluster-wide resource, so is not installed automatically by this chart) false
certManager.clusterDomain Kubernetes cluster top level domain, to generate fully qualified domain names for certificate Common Names cluster.local
certManager.keystorePasswd Java Key Store password for NiFi keystore changeme
certManager.truststorePasswd Java Key Store password for NiFi truststore changeme
certManager.additionalDnsNames Additional DNS names to incorporate into TLS certificates (e.g. where users point browsers to access the NiFi UI) [ localhost ]
certManager.caSecrets Names of Kubernetes secrets containing ca.crt keys to add to the NiFi truststore [ ]
certManager.refreshSeconds How often the sidecar refreshes the NiFi keystore (not truststore) from the cert-manager Kubernetes secrets 300
certManager.resources Memory and CPU resources for the node certificate refresh sidecar 100m CPU, 128MiB RAM
certManager.replaceDefaultTrustStore Use the certManager truststore, not the default Java trusted CA collection (for [e.g.] private OIDC provider) false
certManager.certDuration NiFi node certificate lifetime (90 days) 2160h
certManager.caDuration Certificate Authority certificate lifetime (10 years) 87660h
metrics
metrics.prometheus.enabled Enable prometheus to access nifi metrics endpoint false
metrics.prometheus.port Port where Nifi server will expose Prometheus metrics 9092
metrics.prometheus.serviceMonitor.enabled If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.prometheus.enabled to be true) false
metrics.prometheus.serviceMonitor.namespace In which namespace the ServiceMonitor should be created
metrics.prometheus.serviceMonitor.labels Additional labels for the ServiceMonitor nil
customFlow
customFlow Use this file (uncompressed XML; possibly from a configmap) as the Flow definition nil
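
When several of these parameters are needed at once, passing a small override file with -f is usually more convenient than a long list of --set flags. A minimal, hypothetical example (values chosen purely for illustration) could be:

cat > my-values.yaml <<'EOF'
persistence:
  enabled: true
ingress:
  enabled: true
  hosts:
    - nifi.example.com
EOF
helm install my-release cetic/nifi -f my-values.yaml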

Troubleshooting

Before filing a bug report, you may want to:

  • check the FAQ
  • check that persistent storage is configured on your cluster
  • keep in mind that a first installation may take a significant amount of time on a home internet connection
  • check if a pod is in error:
kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
myrelease-nifi-0             3/4     Failed    1          56m
myrelease-nifi-registry-0    1/1     Running   0          56m
myrelease-nifi-zookeeper-0   1/1     Running   0          56m
myrelease-nifi-zookeeper-1   1/1     Running   0          56m
myrelease-nifi-zookeeper-2   1/1     Running   0          56m

Inspect the pod, check the "Events" section at the end for anything suspicious.

kubectl describe pod myrelease-nifi-0

Get logs on a failed container inside the pod (here the server one):

kubectl logs myrelease-nifi-0 server
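
If the container has already restarted, the logs of the previous run can usually be retrieved with the --previous flag, selecting the container explicitly with -c:

kubectl logs myrelease-nifi-0 -c server --previous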

Credits

Initially inspired by https://github.com/YolandaMDavis/apache-nifi.

TLS work/inspiration from https://github.com/sushilkm/nifi-chart.git.

Contributing

Feel free to contribute by making a pull request.

Please read the official Helm Contribution Guide from Helm for more information on how you can contribute to this Chart.

License

Apache License 2.0

helm-nifi's People

Contributors

a-nldisr, alexnuttinck, ayadiamen, banzo, carstenpohllhind, cf250024, combineads, ebcflagman, ecl996, emrge-michaeld, frasmarco, gforeman02, hobe, iammoen, kangshung, kirkxd, ksubileau, majinghe, makeacode, nathluu, nothinking, novakov-alexey, octopyth, patsevanton, shivam9268, stoetti, subv, tunaman, wknickless, zakaria2905


helm-nifi's Issues

[cetic/nifi] error when scaling past 1 secure node

Describe the bug
When the 2nd replica of the statefulset starts up, I get errors in the app-log after I try to log into the UI. With a single replica everything works fine.

[apache-nifi-0 app-log] 2019-12-04 13:15:00,250 WARN [Process Cluster Protocol Request-10] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from 172-30-147-92.apache-nifi.observability.svc.cluster.local due to javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
[apache-nifi-1 app-log] 2019-12-04 13:15:05,264 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'CONNECTION_REQUEST' protocol message due to: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
[apache-nifi-0 app-log] 2019-12-04 13:15:05,273 WARN [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from 172-30-147-92.apache-nifi.observability.svc.cluster.local due to javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
[apache-nifi-1 app-log] 2019-12-04 13:15:10,277 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'CONNECTION_REQUEST' protocol message due to: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors

Version of Helm and Kubernetes:

kube 1.14.9
helm 3.0.0

What happened:
I get this after entering my userid/password:

Unable to continue login sequence

home

Purposed state does not match the stored state. Unable to continue login process.

What you expected to happen:

To be redirected to the NiFi canvas UI.

How to reproduce it (as minimally and precisely as possible):
I added this to the command section in the statefulset.yaml

      # setup tls
      /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n ${FQDN} -f /opt/nifi/nifi-current/conf/nifi.properties -P {{ .Values.nifi.trustStorePassword }} -S {{ .Values.nifi.keyStorePassword }} || true
      mv /opt/nifi/nifi-current/${FQDN}/* /opt/nifi/nifi-current/conf

Anything else we need to know:
This setup works fine with a single node; it stops working when I scale the statefulset to more than 1 node.

After changes made in NiFi, all data is lost when the container restarts

Hi,
I have used the above chart for deploying on Azure AKS and I have enabled persistence. After that I created some processors in NiFi, then killed and restarted the pods.
When NiFi came back up, I had lost all the processors I had created earlier.

I have the following questions:

  1. Why is a PVC created each time, as in the example above?
  2. I have some 3rd Party custom nar where i need put a d

[cetic/nifi] External secure does not work

Hi,

I'm in trouble today because when I set externalSecure to "true" it does not accept the OpenID connection.
I get the error "authentication is only available in https".
I'm using HTTPS through my nginx ingress and not directly on Apache NiFi.
Is there a way to solve this issue?

Thank you in advance.

Unable to host with OAuth; if https is enabled a crash loop back-off occurs

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

[cetic/nifi] CrashLoopBackOff

Describe the bug
A clear and concise description of what the bug is.
I have deployed NiFi using the helm chart and it was working fine; 20 days later I got the error below.

Version of Helm and Kubernetes:
ubuntu@ip-10-0-0-202:~$ helm version --short
Client: v2.14.3+g0e7f3b6
Server: v2.14.2+ga8b13cc

What happened:


error:

2020-01-02 04:50:52,662 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080 requested disconnection from cluster due to org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
2020-01-02 04:50:52,663 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Status of dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080 changed from NodeConnectionStatus[nodeId=dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080, state=CONNECTING, updateId=1754] to NodeConnectionStatus[nodeId=dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080, state=DISCONNECTED, Disconnect Code=Node's Flow did not Match Cluster Flow, Disconnect Reason=org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow., updateId=1754]
2020-01-02 04:50:52,672 ERROR [main] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080 -- Node disconnected from cluster due to org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
2020-01-02 04:50:52,672 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager Cannot unregister Leader Election Role 'Primary Node' becuase that role is not registered
2020-01-02 04:50:52,673 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: Already closed or has not been started
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.apache.curator.framework.recipes.leader.LeaderSelector.close(LeaderSelector.java:270)
at org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager.unregister(CuratorLeaderElectionManager.java:152)
at org.apache.nifi.controller.FlowController.setClustered(FlowController.java:2217)
at org.apache.nifi.controller.StandardFlowService.handleConnectionFailure(StandardFlowService.java:578)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:542)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1009)
at org.apache.nifi.NiFi.(NiFi.java:158)
at org.apache.nifi.NiFi.(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
2020-01-02 04:50:52,673 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2020-01-02 04:50:52,677 INFO [Process Cluster Protocol Request-2] o.a.n.c.c.node.NodeClusterCoordinator Status of dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080 changed from NodeConnectionStatus[nodeId=dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080, state=DISCONNECTED, Disconnect Code=Node's Flow did not Match Cluster Flow, Disconnect Reason=org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow., updateId=1754] to NodeConnectionStatus[nodeId=dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080, state=DISCONNECTED, Disconnect Code=Node's Flow did not Match Cluster Flow, Disconnect Reason=org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow., updateId=1754]
2020-01-02 04:50:52,677 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 0d664ace-24f4-4443-af02-235a34f0cc44 (type=NODE_STATUS_CHANGE, length=1639 bytes) from dev-nifi-1.dev-nifi-headless.dev.svc.cluster.local in 1 millis
2020-01-02 04:50:52,679 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@3a94d716{HTTP/1.1,[http/1.1]}{dev-nifi-2.dev-nifi-headless.dev.svc.cluster.local:8080}
2020-01-02 04:50:52,679 INFO [Thread-1] org.eclipse.jetty.server.session node0 Stopped scavenging

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

[cetic/nifi] annotations

Describe the bug
A clear and concise description of what the bug is.

Version of Helm and Kubernetes:

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

[cetic/nifi] correct the unintuitive Service

Is your feature request related to a problem? Please describe.

The current values.yaml is unintuitive when setting up the service part. (#7, #16)

Describe the solution you'd like

Refactor the Service part: maybe rename loadbalancer to ui and add some comments to values.yaml.

Ability to include additional libraries in the Nifi containers

Database processors need database driver libraries in the NiFi containers.

The current hack is:

wget https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
kubectl cp ./postgresql-42.2.6.jar fadi/fadi-nifi-0:/opt/nifi/postgresql-42.2.6.jar
rm postgresql-42.2.6.jar

It would be nice to be able to specify a list of libraries that would be downloaded and put in nifi_home/lib as an additional option.
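
As a sketch only (untested, and assuming both that the chart's postStart value documented in the configuration table accepts a plain shell command and that wget is available in the image), the driver could instead be fetched at container start:

helm install my-release cetic/nifi \
  --set postStart="wget -P /opt/nifi/nifi-current/lib https://jdbc.postgresql.org/download/postgresql-42.2.6.jar"

The lib path here is also an assumption about the apache/nifi image layout.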

Escaped dot sequences won't work in ingress.annotations

Describe the bug
The Helm chart runs successfully without showing any errors, but the ingress is not getting created after running the command below:

 helm install random-nifi cetic/nifi --set persistence.enabled=true --set service.type=NodePort --set ingress.hosts={nifi.nonprod.random.com} --set ingress.annotations."kubernetes\.io/ingress\.class"=alb

I can run this successfully by modifying values.yaml and running helm locally, but my scenario requires me to pass this with the --set parameter on the command line, which doesn't work.
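
One possible workaround sketch (untested against this chart) is to keep the annotation in a small override file passed with -f on the same command line, which sidesteps the --set dot escaping entirely:

cat > ingress-override.yaml <<'EOF'
ingress:
  annotations:
    kubernetes.io/ingress.class: alb
EOF
helm install random-nifi cetic/nifi -f ingress-override.yaml \
  --set persistence.enabled=true --set service.type=NodePort \
  --set ingress.hosts={nifi.nonprod.random.com}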

Version of Helm and Kubernetes:
Helm v3.1.2
K8s v1.14

After restarting pods, getting a zookeeper exception

I have started the pods with no errors and all pods are running, but when I look at the NiFi logs, NiFi is not able to connect to zookeeper. Here is the error:

    at java.lang.Thread.run(Thread.java:748)

2019-12-27 07:46:20,043 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: RECONNECTED
2019-12-27 07:46:20,043 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to RECONNECTED
2019-12-27 07:46:20,147 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2019-12-27 07:46:20,147 ERROR [main-EventThread] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:647)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground$2.processResult(FindAndDeleteProtectedNodeInBackground.java:104)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:630)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2019-12-27 07:46:20,147 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to SUSPENDED
2019-12-27 07:46:22,030 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: RECONNECTED
2019-12-27 07:46:22,031 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to RECONNECTED
2019-12-27 07:46:22,133 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2019-12-27 07:46:22,133 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2019-12-27 07:46:22,133 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered
2019-12-27 07:46:22,133 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to SUSPENDED
2019-12-27 07:46:23,223 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: RECONNECTED
2019-12-27 07:46:23,224 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to RECONNECTED
2019-12-27 07:46:23,229 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2019-12-27 07:46:23,229 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to SUSPENDED
2019-12-27 07:46:23,229 ERROR [main-EventThread] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:647)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground$2.processResult(FindAndDeleteProtectedNodeInBackground.java:104)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:630)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2019-12-27 07:46:23,329 ERROR [main-EventThread] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:647)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground$2.processResult(FindAndDeleteProtectedNodeInBackground.java:104)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:630)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2019-12-27 07:46:23,329 ERROR [main-EventThread] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:647)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
at org.apache.curator.framework.imps.GetConfigBuilderImpl$2.processResult(GetConfigBuilderImpl.java:222)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:601)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2019-12-27 07:46:23,430 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,430 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,530 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,531 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,630 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,630 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,731 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,731 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,831 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,831 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,931 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:23,931 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:24,031 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:24,032 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:24,131 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:990)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 07:46:24,131 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[the same pair of "Background operation retry gave up" / "Background retry gave up" stack traces repeats four more times between 07:46:24,231 and 07:46:24,532]
2019-12-27 07:46:24,592 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: RECONNECTED
2019-12-27 07:46:24,592 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to RECONNECTED
2019-12-27 07:46:24,593 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2019-12-27 07:46:24,593 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered
2019-12-27 07:46:24,697 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2019-12-27 07:46:24,697 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@33b917f2 Connection State changed to SUSPENDED
2019-12-27 07:46:24,697 ERROR [main-EventThread] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:862)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:647)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground$2.processResult(FindAndDeleteProtectedNodeInBackground.java:104)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:630)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2019-12-27 07:46:25,916 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: RECONNECTED

Impossible to connect to HMI via Ingress

Hi

After deploying the NiFi Helm chart (0.4.3), I can't access the UI via the ingress: I get a 503 error.
Here are my services:
nifi NodePort 10.43.37.108 8080:31620/TCP 7m13s
nifi-headless ClusterIP None 8080/TCP,6007/TCP 7m13s

and my ingress
Name: nifi-ingress
Namespace: datalake-poc
Address: 10.24.53.56
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends


atmdatalake.nifi.capgemini.com
/ nifi:8080 ()

However, when I list my endpoints, the nifi service has none:
nifi 12m
nifi-headless 10.42.0.80:6007,10.42.0.80:8080 12m

Could this be the cause of the 503 error?

Regards
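For reference, a Service only gets endpoints once its pods pass their readiness probe, so an empty endpoints list for nifi usually means the NiFi pod is not Ready. Two quick checks (namespace taken from the ingress above; the pod name is assumed to follow the StatefulSet naming seen elsewhere in these issues):

kubectl get endpoints nifi -n datalake-poc
kubectl describe pod nifi-0 -n datalake-poc | grep -A5 Readiness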

cetic/helm-nifi apache/nifi docker environment variables

Is your feature request related to a problem? Please describe.
I am trying to pass in environment variables that the parent apache/nifi Docker image supports, but they seem to be disregarded by the chart. Specifically NIFI_VARIABLE_REGISTRY_PROPERTIES, which sets the value of the nifi.variable.registry.properties= property in the /opt/nifi/nifi-current/conf/nifi.properties file. Running the image directly with docker run -e NIFI_VARIABLE_REGISTRY_PROPERTIES=/opt/nifi/nifi-current/conf/custom/registry-secret.properties apache/nifi, the value is set correctly by the entrypoint start.sh script.

Describe the solution you'd like
Support the parent Dockerfile's environment variables.
The chart could also expose a value to override the nifi.variable.registry.properties property by adding a call to prop_replace:

prop_replace nifi.variable.registry.properties ${NIFI_VARIABLE_REGISTRY_PROPERTIES}
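As a hedged sketch of how such a call could be guarded inside the image's start script (illustrative only; this conditional is not part of the current chart or image):

# illustrative sketch: only replace the property when the env var is actually set
if [ -n "${NIFI_VARIABLE_REGISTRY_PROPERTIES}" ]; then
  prop_replace nifi.variable.registry.properties "${NIFI_VARIABLE_REGISTRY_PROPERTIES}"
fi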

[cetic/nifi] How to automatically scale up/scale down

Hi,

I'm having trouble scaling my Apache NiFi instance up and down on Kubernetes.
In Kubernetes everything looks fine; scaling up works, but scaling down leaves a ghost node behind that I have to remove manually in the NiFi interface (see the sketch at the end of this issue).
Do you know a way to do this properly?

Thanks in advance!
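For reference, a disconnected node can also be removed without the UI through the NiFi REST API; the host below follows the headless-service naming used elsewhere in these issues and the node id is a placeholder:

# list the cluster nodes and note the id of the ghost/disconnected node
curl -s http://nifi-0.nifi-headless.nifi.svc.cluster.local:8080/nifi-api/controller/cluster
# delete the disconnected node by id
curl -s -X DELETE http://nifi-0.nifi-headless.nifi.svc.cluster.local:8080/nifi-api/controller/cluster/nodes/<node-id>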

cetic/helm-nifi NodePort Service spec.clusterIP cannot be "None"

When trying to run cetic/helm-nifi locally (using Docker for Mac and its integrated Kubernetes), if you change the service type to NodePort instead of ClusterIP, Helm throws an error when trying to install:

work-mbp-2:nifi-resources devin$ helm install --values nifi-values.yaml cetic-nifi cetic/nifi --namespace nifi-test Error: Service "cetic-nifi-headless" is invalid: spec.clusterIP: Invalid value: "None": may not be set to 'None' for NodePort services

This is the only part of the values that I modified: I just changed the type to "NodePort" and disabled the LoadBalancer; I didn't adjust any of the other values:

service:
  headless:
    type: NodePort
  loadBalancer:
    enabled: false
    type: LoadBalancer
    httpPort: 80
    httpsPort: 443
    annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24

It looks like in the service template we have "clusterIP: None". Has anyone had any luck using NodePort to access the NiFi UI on a local K8s cluster?
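The headless service backs the NiFi StatefulSet and has to keep clusterIP: None, so it should stay ClusterIP; the NodePort change belongs on the externally exposed service instead. A values sketch under that assumption (key names copied from the excerpt above; whether the loadBalancer block is what renders the external service in this chart version is an assumption):

service:
  headless:
    type: ClusterIP      # must stay ClusterIP so clusterIP: None remains valid
  loadBalancer:
    enabled: true
    type: NodePort       # expose the UI via a node port instead of a cloud load balancer
    httpPort: 80
    httpsPort: 443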

[cetic/nifi] define custom initContainers in values

Is your feature request related to a problem? Please describe.
We would like to instantiate nifi including a custom processor and a pre-defined flow but without having to create a separate nifi-image or helm-chart.

Describe the solution you'd like
If the chart provided a value that makes it possible to define a custom init container, we could copy the additional files (nar and flow.xml.gz) into the NiFi container itself by using the same volumes. This would also add the possibility of defining extra volumeMounts (a sketch follows at the end of this issue).

Describe alternatives you've considered
The current solution provided in the docs, using a postStart hook to download the files, will not work for us, as we need to run the chart in a highly secured environment without a connection to the outside world. It is easier to bring a Docker image containing the files into this secure environment than to provide file downloads.

Additional context
Here is a chart I found that provides the mechanisms described above: https://github.com/fluxcd/flux/tree/master/chart/flux
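A rough sketch of what such a value could look like (the extraInitContainers key, the image, and the paths are hypothetical, not part of the current chart):

extraInitContainers:
  - name: fetch-custom-extensions
    image: registry.example.com/nifi-extensions:1.0   # hypothetical image that already contains the nar and flow files
    command: ["sh", "-c", "cp /extensions/*.nar /target/"]
    volumeMounts:
      - name: extensions        # assumed to be a volume shared with the NiFi container
        mountPath: /target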

No storage class seems to exist in the PVC

Describe the bug

No storage class seems to exist in the PVC

Version of Helm and Kubernetes:

[centos@jay-apachecon2019 ms-spark]$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
[centos@jay-apachecon2019 ms-spark]$ kubectl get vdersion
error: the server doesn't have a resource type "vdersion"
[centos@jay-apachecon2019 ms-spark]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[centos@jay-apachecon2019 ms-spark]$ 

What happened:

Looks like my PVCs are created but they don't define a StorageClass. I assume that's not intentional?

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2019-08-24T13:01:30Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: zookeeper
    component: server
    release: nifi
  name: data-nifi-zookeeper-0
  namespace: bigdata
  resourceVersion: "188873"
  selfLink: /api/v1/namespaces/bigdata/persistentvolumeclaims/data-nifi-zookeeper-0
  uid: 39728f4c-82d7-4627-a9c6-4e19961986d4
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
status:
  phase: Pending

What you expected to happen:

My PVC would have an explicit storageClassName that I could fulfill.

How to reproduce it (as minimally and precisely as possible):

helm install --name nifi cetic/nifi --namespace=bd --set persistence.storageClass=default --set storageClass=default2 has no effect, i.e. you
don't see a storageClassName in kubectl get pvc -n bd.

Anything else we need to know:

I guess one option as a workaround would be to just use default storage classes for everything.
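For comparison, a claim that requests a specific class carries an explicit storageClassName in its spec; this is plain Kubernetes behaviour, independent of the chart:

spec:
  storageClassName: default      # the class the claim should be bound against
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Note that the claim shown above belongs to the zookeeper subchart, so a class for it would likely have to be passed under the subchart's own values (for example --set zookeeper.persistence.storageClass=default, assuming the subchart exposes such a key).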

[cetic/nifi] Customize config files such as logback.xml

I want to customize logback.xml to add my log appenders

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
I want to provide a custom ConfigMap containing a new logback.xml for the Helm install to use, instead of the default file from the configs folder.

Describe alternatives you've considered
Currently I have to fork the Helm chart to change logback.xml.

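For what it's worth, the ConfigMap itself can already be created outside the chart with plain kubectl; what is missing is a chart value to mount it over the default file (namespace and names below are placeholders):

# create a ConfigMap from a locally customised logback.xml
kubectl create configmap nifi-logback --from-file=logback.xml=./logback.xml -n nifi

The chart would then need a value to mount that ConfigMap at /opt/nifi/nifi-current/conf/logback.xml, which is essentially what this request asks for.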

cetic/chart: The charts directory is not available

I cloned this git repo and added some environment properties in the values.yaml file, but I don't see any charts directory:
helm install --name my-release helm-nifi/
Error: found in requirements.yaml, but missing in charts/ directory: zookeeper

Describe the bug

Version of Helm and Kubernetes: helm: v2.13.0, K8s: v1.14.3

Which chart: helm-nifi

What happened: Could not run "helm install" as it looks for charts directory (Error: found in requirements.yaml, but missing in charts/ directory: zookeeper)

What you expected to happen: I need the charts directory with the required file so that I can clone the repo and run "helm install --name my-release helm-nifi/"

How to reproduce it (as minimally and precisely as possible): By cloning the repo and running it using the local repo

Anything else we need to know: No
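When installing from a local clone with Helm 2, the zookeeper dependency declared in requirements.yaml has to be fetched into the charts/ directory first, for example:

helm dependency update helm-nifi/      # downloads the zookeeper chart declared in requirements.yaml into helm-nifi/charts/
helm install --name my-release helm-nifi/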

LoadBalancer IP specified in values.yaml not reflected in the svc

Hello,
I used Azure AKS to deploy NiFi and changed the lines below in values.yaml.

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  enabled: true
  httpPort: 80
  httpsPort: 443
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: mc_visurdremio_dremio_canadacentral
  loadBalancerIP: 52.228.103.111
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24

# Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: false
  annotations: {}
  tls: []
  hosts: []
  path: /

After that I redeployed the Helm release. I created this IP as a static address and am trying to assign it, but the service gets some random IP instead. Please guide me on how to specify my own IP address for the load balancer (screenshots attached).

Problem with upgrade

[root@virt_dev1 gohttpserver]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
[root@virt_dev1 gohttpserver]# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.4", GitCommit:"a87e9a978f65a8303aa9467537aa59c18122cbf9", GitTreeState:"clean", BuildDate:"2019-07-08T08:43:10Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

I run into trouble when I try to upgrade like this:

helm upgrade helm-nifi -f values.yaml .

I get the following error:

UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource

But, if I run:

helm upgrade helm-nifi -f values.yaml . --debug --dry-run

I get this message:

Release "helm-nifi" has been upgraded. Happy Helming!
LAST DEPLOYED: Sun Aug 25 14:42:45 2019

Can you help me and point out where my mistake is? Thanks in advance!

P.S. If I do helm delete --purge helm-nifi and then helm install cetic/nifi -f values.yaml --namespace $NAMESPACE --name $RELEASE_NAME, everything is OK.

[cetic/chart] Error: found in requirements.yaml, but missing in charts/ directory: zookeeper

Describe the bug
I am attempting to first download the nifi chart and helm install it using the values.yaml file:
git clone https://github.com/helm/charts
git clone https://github.com/cetic/helm-nifi
helm install --name nifi --namespace nifi -f helm-nifi/values.yaml helm-nifi
Error: found in requirements.yaml, but missing in charts/ directory: zookeeper

Version of Helm and Kubernetes:
irrelevant

What happened:
Got error

What you expected to happen:
When I attempt to clone the charts folder inside the helm-nifi folder, I run into other errors.
Can you describe the correct way to download the nifi chart locally and initiate the helm install?

How to reproduce it (as minimally and precisely as possible):
git clone https://github.com/helm/charts
git clone https://github.com/cetic/helm-nifi
helm install --name nifi --namespace nifi -f helm-nifi/values.yaml helm-nifi

Anything else we need to know:

[cetic/helm-nifi] loadBalancerSourceRanges setting not working in values.yaml

Describe the bug
Attempting to secure the node by limiting the IPs that have access to it, I updated values.yaml and set loadBalancerSourceRanges to include only one IP, but it looks like the setting just didn't work.

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.15.2", GitCommit:"8dce272473e5f2a7bf58ce79bb5c3691db54c96b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}

What happened:
Tried updating values.yaml with:

loadBalancerSourceRanges:
  - 130.211.204.1/32

and also tried:

loadBalancerSourceRanges:
  - 130.211.204.1

What you expected to happen:
I expected that when I try to access the external IP of the load balancer from an IP other than 130.211.204.1, I would get a 404 Page not found message. Instead I could freely access the load balancer from any IP, as if the loadBalancerSourceRanges setting didn't kick in.

How to reproduce it (as minimally and precisely as possible):
In a local branch of the chart update values.yaml by uncommenting the line referencing loadBalancerSourceRanges and then install the chart from the local repo

Anything else we need to know:
I am using Azure Kubernetes Service, so as per the Kubernetes documentation loadBalancerSourceRanges should be supported.

Please advise how I can further troubleshoot this.
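For reference, the ranges are usually given in CIDR form under the service values; a sketch with the nesting assumed from the commented block in the default values.yaml:

service:
  loadBalancerSourceRanges:
    - 130.211.204.1/32

You can check whether the value actually reached the Service with kubectl get svc <release-name> -o yaml and look for loadBalancerSourceRanges in the spec; if it is present there, the filtering problem lies with the cloud load balancer rather than with the chart.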

[cetic/nifi] add extra entries to nifi.properties file

Is your feature request related to a problem? Please describe.
We'd like to add a custom location that NiFi should load nar files from. This could be configured via additional properties using the 'nifi.nar.library.directory.' prefix.

Describe the solution you'd like
Append a configuration item from the values (e.g. extraNifiProperties) to the nifi.properties template before creating the ConfigMap.

Describe alternatives you've considered
It might be possible to use the postStart hook to manipulate nifi.properties, but this seems like a hack when templating would provide a clean and maintainable solution.
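A sketch of what the requested value could look like (the key name comes from this request and is not part of the current chart; the nar path is a placeholder):

# hypothetical value, appended verbatim to the nifi.properties template before the ConfigMap is rendered
extraNifiProperties: |
  nifi.nar.library.directory.custom=/opt/nifi/extensions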

[cetic/nifi] helm lint fails

Describe the bug
helm lint fails on the chart: apiVersion is required but missing in Chart.yaml.

Version of Helm and Kubernetes:
Helm v3.0.3, Go v1.13.6

What happened:

What you expected to happen:
helm lint to PASS

How to reproduce it (as minimally and precisely as possible):
Navigate to the NiFi chart folder and run "helm lint ." in the terminal.

Anything else we need to know:
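For context, Helm 3's linter requires the apiVersion field in Chart.yaml; a minimal header that passes the check looks roughly like this (name, description and version below are placeholders, not the chart's actual metadata):

apiVersion: v1           # the field reported as missing by helm lint under Helm 3
name: nifi
description: Apache NiFi helm chart
version: 0.4.3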

[cetic/nifi] nifi node 4

Describe the bug

Version of Helm and Kubernetes:
kubernetes 12
What happened:
I have tried to install a 4-node NiFi cluster but I am getting a readiness error.
I modified the values.yaml file to set replica count = 4 and ran:
helm install -n nifi . --namespace nifi
error:
Readiness probe failed: Node not found with CONNECTED state. Full cluster state:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 100.100.0.6...
* TCP_NODELAY set
* Connected to nifi-0.nifi-headless.nifi.svc.cluster.local (100.100.0.6) port 8080 (#0)
> GET /nifi-api/controller/cluster HTTP/1.1
> Host: nifi-0.nifi-headless.nifi.svc.cluster.local:8080
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Tue, 19 Nov 2019 09:44:18 GMT
< X-Frame-Options: SAMEORIGIN
< Content-Security-Policy: frame-ancestors 'self'
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json
< Vary: Accept-Encoding
< Content-Length: 59
< Server: Jetty(9.4.11.v20180605)
<
{ [59 bytes data]
* Curl_http_done: called premature == 0
100    59  100    59    0     0    168      0 --:--:-- --:--:-- --:--:--   169
* Connection #0 to host nifi-0.nifi-headless.nifi.svc.cluster.local left intact
parse error: Invalid numeric literal at line 1, column 5
parse error: Invalid numeric literal at line 1, column 5
Warning  Unhealthy  91s  kubelet, ip-10-0-56-48.us-west-2.compute.internal  Readiness probe failed: Node not found with CONNECTED state. Full cluster state:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

What you expected to happen:
How do I set up a 4-node NiFi cluster using the cetic/nifi Helm chart?

Cluster not detecting zookeeper

Run the helm script.
Create a ListSFTP processor
Run the processor

This error occurs:

2019-07-02 19:55:21,804 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=b440bf38-016b-1000-0000-00004ce8fa92] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to java.io.IOException: Failed to obtain value from ZooKeeper for component with ID b440bf38-016b-1000-0000-00004ce8fa92 with exception code CONNECTIONLOSS: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID b440bf38-016b-1000-0000-00004ce8fa92 with exception code CONNECTIONLOSS
java.io.IOException: Failed to obtain value from ZooKeeper for component with ID b440bf38-016b-1000-0000-00004ce8fa92 with exception code CONNECTIONLOSS
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.getState(StandardStateManagerProvider.java:288)
        at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
        at org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:298)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:142)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:130)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:75)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:52)
        at org.apache.nifi.controller.StandardProcessorNode.lambda$initiateStart$4(StandardProcessorNode.java:1515)
        at org.apache.nifi.engine.FlowEngine$3.call(FlowEngine.java:123)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/components/b440bf38-016b-1000-0000-00004ce8fa92
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
        ... 19 common frames omitted

[cetic/helm-nifi] Support NiFi Registry container for flow versioning

Is your feature request related to a problem? Please describe.
I would like to enable NiFi flow version control and allow new NiFi clusters to easily pull in existing flows from other environments.

Describe the solution you'd like
Have the option to spin up an apache/nifi-registry container

  • Use the same security/authentication configuration as NiFi
  • Configure users/permissions between the applications (proxy, read, write)
  • Configure the registry client in the NiFi controller using the NiFi API call to /controller/registry-clients
  • Optionally configure a remote git repo to be cloned on the container and used by the GitFlowPersistenceProvider

Describe alternatives you've considered
This could be a separate chart. But the benefit of keeping them together is a single security configuration applied consistently and the potential for no configuration in the NiFi UI before getting started.

Additional context
The ultimate goal is to use the NiFi Toolkit and/or NiFi & Registry APIs driven by Kubernetes configuration to allow deployment of a specific version of a NiFi flow developed in a separate environment with no user interaction.

[cetic/nifi] Custom configmap support

Is your feature request related to a problem? Please describe.
I need to mount my own ConfigMaps into the NiFi pods. The current chart does not have such a feature; however, it is similar to custom Secrets, which are already supported.

Describe the solution you'd like
I would like to mount my custom ConfigMaps via Helm values and the --set option.

Describe alternatives you've considered
There is no clear alternative other than forking this repo and adding custom ConfigMaps to the pod spec of the NiFi StatefulSet.

Additional context
I can propose a PR which would allow adding ConfigMaps in the values, similar to Secrets:

configmaps:
  - name: nifi-krb5-conf
    keys:
      - krb5.conf
    mountPath: /etc/krb5.conf

Would you be interested in such PR?

[cetic/nifi] kubernetes node update nifi error

Describe the bug
Using kops I updated my instance size to large.
Before the update my NiFi cluster was working fine, but after the update the cluster is getting errors (screenshots attached).

The pod describe output is attached in the txt file:
nifi-1-describce.txt

Version of Helm and Kubernetes:
helm version --short
Client: v2.16.1+gbbdfe5e
Server: v2.16.1+gbbdfe5e

What happened:
The NiFi cluster is not connecting to the nodes.

What you expected to happen:
After the update the cluster should keep working, with all NiFi data intact and the cluster in a working state.

How to reproduce it (as minimally and precisely as possible):
Install a new cluster with kops
Install Helm
Install a 2-node NiFi cluster using Helm as per the installation documents
Expose NiFi using an nginx ingress
Update the node size with kops: https://github.com/kubernetes/kops/blob/master/docs/cli/kops_edit_cluster.md

Once everything is updated, the ingress will have the issue.

Anything else we need to know:

@ebcFlagman @octopyth @mgoeminne @jdesroch @fzalila

[cetic/nifi] annotations for basic auth

Describe the bug
I have tried to set up NiFi basic auth using nginx, but I'm a little confused about the annotations.
Could anyone give me a sample of the annotations and password secret for basic auth? (See the sketch at the end of this issue.)
Version of Helm and Kubernetes:
helm version --short
Client: v2.16.1+gbbdfe5e
Server: v2.16.1+gbbdfe5e

k8s: v1.14.8

What happened:
How do I add basic auth from the Helm chart?
cat secrets-auth.yaml
https://github.com/cetic/helm-nifi/blob/master/templates/secrets-auth.yaml

How do I pass the annotation in the ingress section of values.yaml?

ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: []
  path: /

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
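As a hedged example for the ingress-nginx controller (the annotation keys are standard ingress-nginx ones; the secret, namespace and host names are placeholders, and whether this chart's ingress template passes them through unchanged is an assumption):

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: nifi-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
  hosts:
    - nifi.example.com
  path: /

The referenced secret can be created beforehand with htpasswd -c auth nifi-user followed by kubectl create secret generic nifi-basic-auth --from-file=auth -n <namespace>; ingress-nginx expects the htpasswd data under the key auth.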

How to run NiFi on Kubernetes as standalone

Hi,
I have tested helm-nifi on Kubernetes and it is working. The only issue I'm getting is when importing a template: the template contains a big processor, and when I drop it onto the canvas it gives the error "The specified observer identifier already exists". I exported the template from a standalone instance and imported it into the cluster. Any idea how to make this work? I also found the link below, but it wasn't useful:
http://apache-nifi-users-list.2361937.n4.nabble.com/Error-instantiating-template-on-cluster-The-specified-observer-identifier-already-exists-Reproduced-2-td8452.html#a8471

can't access UI with secured cluster

Describe the bug

Following issue #45, when authentication is enabled I can't access the UI; I receive either:

System Error

The request contained an invalid host header [abc.com] in the request [/nifi].

Check for request manipulation or third-party intercept.

Valid host headers are [empty] or:

127.0.0.1 127.0.0.1:9443 ....

or :

503 service temporarily unavailable 

openresty/1.15.8.2

Version of Helm and Kubernetes:

Helm: "v3.0.2"

kubernetes: "v1.17.1"

What happened:

NiFi UI is unreachable

After the update "Allow whitelisting expected Host values", NiFi accepts requests only where the Host header contains an expected value. Currently, the expected values are driven by the .host properties in nifi.properties.

That issue seems to be similar to the one we're having, so I read the following:

<< You will need a stable network identity that you can use to configure as your "proxy" in advance. For example in a testing scenario where you have access to the kubernetes cluster you can simply use "localhost" as the name of the proxy and use kubernetes port forward to tunnel requests from the localhost to your individual nodes (only one node at a time).

Another option that could better work for non-local use cases is to use a LoadBalancer service in front of the nodes and configure DNS to point to your LoadBalancer IP. If you want to do this in advance it is possible to create floating IPs and preconfigure DNS for it at almost any cloud provider. Then add the configured DNS to nifi.web.proxy.host property when starting your cluster. If setting up DNS is not an option you can use the IP directly. If setting up the IP in advance is not an option you may use an arbitrary hostname as the proxy host and add that hostname to your hosts file (or dnsmasq or company dns) to point to the dynamically generated LoadBalancer IP after NiFi started up. >>

I tried to create a host name for the minikube IP in the /etc/hosts file and preconfigured that DNS name in the nifi.web.proxy.host variable in nifi.properties (also nifi.web.proxy.context.path and nifi.web.https.host). I still ended up getting one or the other of the errors above (I also tried the IP address directly, not only the DNS name).

What you expected to happen:

Access the NiFi UI with a DNS name that I pass in the ingress config and in the webProxyHost variable.

How to reproduce it (as minimally and precisely as possible):

  • Clone the branch feature\ldap.
  • In the values.yaml file: enable and pass the ldap config and change the http/https (httpPort/httpsPort) ports and set to true the variables isSecure and clusterSecure.
  • Give your minikube IP a DNS in the etc/hosts file and pass that DNS in the webProxyHost variable.
  • Enable ingress and set the .host variable to your DNS.

Anything else we need to know:

In the ingress.yaml file I changed {{- $ingressPort := .Values.service.httpPort -}} to {{- $ingressPort := .Values.service.httpsPort -}}, and when I try to access the DNS name it doesn't work either (the browser downloads a file instead).

Not able to deploy

Hi, I'm not able to deploy it on an internal Kubernetes cluster. Do you know what the problem could be?

2m        4m        2         nifi-przemek-0.15ba7ec25be392d2         Pod           spec.containers{server}          Normal    Started            kubelet, cskwrk002d1pxxx.lin.d1.mycompany.zone   Started container server
2m        4m        2         nifi-przemek-0.15ba7ec24c77b6ec         Pod           spec.containers{server}          Normal    Created            kubelet, cskwrk002d1pxxxlin.d1.mycompany.zone   Created container server
1m        3m        3         nifi-przemek-0.15ba7ed4bb5b6615         Pod           spec.containers{server}          Warning   Unhealthy          kubelet, cskwrk002d1pxxx.lin.d1.mycompany.zone   Readiness probe failed: Node not found with CONNECTED state. Full cluster state:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 1.2.157.196...
* TCP_NODELAY set
* connect to 1.2.157.196 port 8080 failed: Connection refused
* Failed to connect to nifi-przemek-0.nifi-przemek-headless.mynamescape.svc.devkube.d1.mycompany.zone port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to nifi-przemek-0.nifi-przemek-headless.mynamescape.svc.devkube.d1.mycompany.zone port 8080: Connection refused

[cetic/nifi] kubectl port-forward: connection refused

Describe the bug
Cannot access the UI when deployed to microk8s.

Version of Helm and Kubernetes:
helm 2.16.0
kubernetes 1.17.0 (microk8s latest)

What happened:
kubectl port-forward gets connection refused from the nifi service on port 80.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
$ kubectl port-forward service/nifi 8080:80 -n nifi
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
E0128 20:54:25.693343 19817 portforward.go:400] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod e4b0944121d6dd616ce42a46210581b344b3196a09242f784b098fa141c7cc0f, uid : failed to execute portforward in network namespace "/var/run/netns/cni-caf78bef-7782-c265-94fc-05dd2e49672a": socat command returns error: exit status 1, stderr: "2020/01/28 20:54:25 socat[20834] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused\n"

Anything else we need to know:
Setting webProxyHost in values.yaml doesn't help.
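When the Service-level forward is refused, forwarding straight to the pod and container port can help narrow things down, since it bypasses the Service endpoints (namespace and pod name follow the defaults used in this issue):

kubectl port-forward pod/nifi-0 8080:8080 -n nifi

If this is also refused, NiFi itself is not listening yet, so the container logs are the next place to look, e.g. kubectl logs nifi-0 -c server -n nifi (the server container name is taken from the pod events quoted in another issue above).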

[cetic/nifi] Nifi cluster disconnected

Describe the bug

Version of Helm and Kubernetes:

What happened:
I deployed a NiFi cluster 6 days ago and it was working fine, but suddenly it got an error. I am facing this issue repeatedly.
dotf-nifi-0 3/4 Running 0 7d
dotf-nifi-1 4/4 Running 0 7d
dotf-nifi-2 3/4 Running 0 7d

Logs for the pod.
2020-01-07 00:20:48,779 INFO [Heartbeat Monitor Thread-1] o.a.c.f.imps.CuratorFrameworkImpl Starting
2020-01-07 00:20:48,829 INFO [Heartbeat Monitor Thread-1-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2020-01-07 00:20:48,863 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2020-01-07 00:20:48,870 WARN [Curator-ConnectionStateManager-0] o.a.c.f.state.ConnectionStateManager There are no ConnectionStateListeners registered.
2020-01-07 00:20:53,944 INFO [Heartbeat Monitor Thread-1] o.a.c.f.imps.CuratorFrameworkImpl Starting
2020-01-07 00:20:54,179 INFO [Heartbeat Monitor Thread-1-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2020-01-07 00:20:54,217 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
[the same Starting / CONNECTED / backgroundOperationsLoop exiting cycle repeats roughly every five seconds through 00:21:47]
What you expected to happen:
The NiFi cluster should not get disconnection errors.

How to reproduce it (as minimally and precisely as possible):
No idea how to reproduce it.

Anything else we need to know:

Question about scaling

The README mentions that replicaCount > 1 is unstable. What does unstable mean, and are there plans to fix it?

[cetic/helm-nifi] ingress path other than / doesn't work

Describe the bug
The application UI should be accessible through the custom ingress path.

Version of Helm and Kubernetes:
helm: v2.14.3
kubernetes: v1.15.1

What happened:
The chart allows us to set the ingress.path variable. I would like to host the application at a path other than '/', e.g. at '/demo-nifi'. So I expected that setting ingress.path: '/demo-nifi' would work without additional changes, i.e. the application UI would be available at host/demo-nifi/nifi and all calls would be made to host/demo-nifi/{..}. But all REST calls from the browser assume that the URLs are host/{..}.

What you expected to happen:
The UI uses the custom path set in ingress.path.

How to reproduce it (as minimally and precisely as possible):
set ingress.path: /demo-nifi

Anything else we need to know:

Please forgive any ignorance about possible advanced configuration that could be done to resolve this issue.
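For the nginx ingress controller, hosting NiFi under a sub-path generally needs both a rewrite and the proxy context path to be forwarded; a hedged sketch (the annotations are standard ingress-nginx ones, the exact path/regex syntax depends on the controller version, and whether the chart's ingress template accepts them unchanged is an assumption):

ingress:
  enabled: true
  path: /demo-nifi/?(.*)
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-ProxyContextPath /demo-nifi;

NiFi also has to whitelist that context path through the nifi.web.proxy.context.path property (and the host through nifi.web.proxy.host); otherwise it rejects the proxied requests or generates wrong URLs.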

[cetic/nifi] Set up Nifi with LDAP/HTTPS

Describe the bug
Please help me set up LDAP/HTTPS for NiFi in Kubernetes.

Version of Helm and Kubernetes:
Helm: 2.16
Kube: 1.16

What happened:
I tried to set up NiFi in Kubernetes with LDAP and HTTPS but couldn't get it to work. Can you help us?

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Cannot redirect server logs to stdout

Hi, thank you for your great Helm chart!

I'm trying to redirect all logs to the server's stdout. Is this possible with your chart?

I tried modifying logback.xml, but it doesn't work.

[cetic/nifi] Pending indefinitely

Hi,
I'm having trouble with the new version of this Helm chart (1.10.0).
All containers stay in "PodInitializing" indefinitely.

How can I fix this issue?

Here are the events logs on GCP:



| Message | Reason | First Seen | Last Seen | Count |
| -- | -- | -- | -- | -- |
| Started container | Started | Dec 9, 2019, 2:54:42 PM | Dec 9, 2019, 2:54:42 PM | 1 |
| pulling image "busybox" | Pulling | Dec 9, 2019, 2:54:41 PM | Dec 9, 2019, 2:54:41 PM | 1 |
| Successfully pulled image "busybox" | Pulled | Dec 9, 2019, 2:54:41 PM | Dec 9, 2019, 2:54:41 PM | 1 |
| Created container | Created | Dec 9, 2019, 2:54:41 PM | Dec 9, 2019, 2:54:41 PM | 1 |
| Successfully assigned test/apache-nifi-test-0 to gke-foundry-02-tools-47303935-njrt | Scheduled | Dec 9, 2019, 2:54:40 PM | Dec 9, 2019, 2:54:40 PM | 1 |
| Successfully pulled image "busybox" | Pulled | Dec 9, 2019, 2:48:22 PM | Dec 9, 2019, 2:48:22 PM | 1 |
| Created container | Created | Dec 9, 2019, 2:48:22 PM | Dec 9, 2019, 2:48:22 PM | 1 |
| Started container | Started | Dec 9, 2019, 2:48:22 PM | Dec 9, 2019, 2:48:22 PM | 1 |
| pulling image "busybox" | Pulling | Dec 9, 2019, 2:48:21 PM | Dec 9, 2019, 2:48:21 PM | 1 |
| Successfully assigned test/apache-nifi-test-0 to gke-foundry-02-tools-47303935-njrt | Scheduled | Dec 9, 2019, 2:48:20 PM | Dec 9, 2019, 2:48:20 PM | 1 |


[cetic/nifi] Providing Private Keys to Cluster

Is your feature request related to a problem? Please describe.
NiFi doesn't have any secure key storage and relies on the file system; for example, the SFTP private key property references a local path. This isn't a great solution when you are running a cluster, as nodes can be missed, and more importantly, when running on Kubernetes, how do we inject these files into the container?

Describe the solution you'd like
Ideally there would be a configuration parameter that creates a number of ConfigMaps to present these keys to the cluster as files on the filesystem that NiFi can access.

Thanks!
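One approach, assuming the chart's extraVolumes/extraVolumeMounts values accept an arbitrary volume source, is to keep the key material in a Kubernetes Secret and mount it as files (names and paths below are placeholders):

extraVolumes:
  - name: sftp-keys
    secret:
      secretName: nifi-sftp-keys      # created separately, e.g. with kubectl create secret generic
extraVolumeMounts:
  - name: sftp-keys
    mountPath: /opt/nifi/secrets      # the SFTP processor's private key path would then point here
    readOnly: true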

[cetic/nifi] Readiness Probe Fails

Describe the bug
Nifi readiness probe doesn't succeed.
During our cluster installation we deploy, delete, and redeploy NiFi multiple times, and we use persistent storage for NiFi. Randomly, after a redeployment, NiFi will stop working.
We are not doing anything different in our redeployment.

What happened:
The readiness probe fails with the following message

Readiness probe failed: Node not found with CONNECTED state. Full cluster state:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 10.233.105.161...
* TCP_NODELAY set
* Connected to nifi-0.nifi-headless.dap-core.svc.cluster.local (10.233.105.161) port 8080 (#0)
> GET /nifi-api/controller/cluster HTTP/1.1
> Host: nifi-0.nifi-headless.dap-core.svc.cluster.local:8080
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Fri, 28 Feb 2020 13:56:06 GMT
< X-Frame-Options: SAMEORIGIN
< Content-Security-Policy: frame-ancestors 'self'
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json
< Vary: Accept-Encoding
< Content-Length: 59
< Server: Jetty(9.4.19.v20190610)
<
{ [59 bytes data]
* Curl_http_done: called premature == 0
100 59 100 59 0 0 6072 0 --:--:-- --:--:-- --:--:-- 6555
* Connection #0 to host nifi-0.nifi-headless.dap-core.svc.cluster.local left intact
parse error: Invalid numeric literal at line 1, column 5
parse error: Invalid numeric literal at line 1, column 5
We also see these kinds of errors:

Multi-Attach error for volume "pvc-366c853a-2e59-4443-a1bd-0f8a7d9c2d51" Volume is already exclusively attached to one node and can't be attached to another
What you expected to happen:
The Readiness probe must pass

How to reproduce it (as minimally and precisely as possible):
Hard to say, because we don't know what causes it. The best bet to reproduce it would be to deploy NiFi with persistent volumes, then delete the deployment and redeploy again. The problem might show up.

Anything else we need to know:

Ingress route to headless service instead of LoadBalancer

I've the following configuration:
persistence:
  enabled: true
  storageClass: managed-azuredisk-standard-lrs
service:
  loadBalancer:
    enabled: false
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  path: /nifi/(.*)
  hosts:
    - foo.bar
  tls:
    - secretName: tls-secret
      hosts:
        - foo.bar
zookeeper:
  enabled: false
  url: cl-zookeeper.ingestion

With this config the ingress points to the deployment name instead of the headless service.
Can this be changed, or should I set the LoadBalancer service type to ClusterIP?
