devshawn / kafka-gitops

🚀Manage Apache Kafka topics and generate ACLs through a desired state file.

Home Page: https://devshawn.github.io/kafka-gitops

License: Apache License 2.0

Languages: Java 78.59%, Groovy 21.06%, Shell 0.21%, Dockerfile 0.14%
Topics: acl-management, acls, gitops, kafka, kafka-acls, kafka-cluster, kafka-dsf, kafka-gitops, kafka-manager, kafka-topics

kafka-gitops's People

Contributors

almazik, bitprocessor, cinetik, devshawn, erikhh, gquintana, jrevillard, m0un10, peteryongzhong, sergialonsaco, tball-dev


kafka-gitops's Issues

Confluent Cloud topic config default values not working

This behavior is weird, and I am not sure whether the reason is constant changes in Confluent Cloud or something else.
We created topics via the web UI/CLI in the past and now intend to manage them with this tool; the topics were created with defaults and no specific configs. So I am intending to add those topics to this tool. Here is a sample:

settings:
  ccloud:
    enabled: true
  topics:
    defaults:
      # seems to be required for Kafka (gives an error if removed), but it's not required/shown by the Confluent Cloud CLI
      replication: 1
    blacklist:
      prefixed:
        - _confluent
topics:
  temp.test.hello-kafka:
    partitions: 3
# omitting some topic names from this issue

The topic above was created, I think, with the web UI; some others were created with the CLI two days ago.
The ones created with the CLI two days ago show no change in the plan, but the one above shows config changes:

14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka exists, it will not be created.
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] cleanup.policy
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.message.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.type
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.insync.replicas
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] delete.retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka exists, it will not be created.
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] cleanup.policy
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.message.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.type
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.insync.replicas
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] delete.retention.ms
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update

The following actions will be performed:
~ [TOPIC] temp.test.hello-kafka
	~ configs:
		- cleanup.policy
		- max.compaction.lag.ms
		- max.message.bytes
		- min.compaction.lag.ms
		- message.timestamp.type
		- min.insync.replicas
		- segment.bytes
		- retention.ms
		- segment.ms
		- message.timestamp.difference.max.ms
		- retention.bytes
		- delete.retention.ms


~ [TOPIC] temp.stage.hello-kafka
	~ configs:
		- cleanup.policy
		- max.compaction.lag.ms
		- max.message.bytes
		- min.compaction.lag.ms
		- message.timestamp.type
		- min.insync.replicas
		- segment.bytes
		- retention.ms
		- segment.ms
		- message.timestamp.difference.max.ms
		- retention.bytes
		- delete.retention.ms

When I execute apply, I get errors:

14:08:58.853 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka exists, it will not be created.
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] cleanup.policy
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.message.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.type
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.insync.replicas
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] delete.retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka exists, it will not be created.
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] cleanup.policy
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.message.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.type
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.insync.replicas
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] delete.retention.ms
Applying: [UPDATE]

~ [TOPIC]  temp.test.hello-kafka
	~ configs:
		- max.compaction.lag.ms
		- max.message.bytes
		- min.compaction.lag.ms
		- message.timestamp.type
		- min.insync.replicas
		- segment.bytes
		- retention.ms
		- segment.ms
		- message.timestamp.difference.max.ms
		- retention.bytes
		- delete.retention.ms


[ERROR] Error thrown when attempting to update a Kafka topic config:
org.apache.kafka.common.errors.InvalidRequestException: Invalid config value for resource ConfigResource(type=TOPIC, name='redacted_ temp.test.hello-kafka'): null

[ERROR] An error has occurred during the apply process.
[ERROR] The apply process has stopped in place. There is no rollback.
[ERROR] Fix the error, re-create a plan, and apply the new plan to continue.

I have no choice but to copy these values from the cloud into my state file.

It would be nice to have:

  • If topic config properties are not specified in state.yaml, the tool should not try to update them; that way, defaults keep working even if the provider changes them.
  • Support for config values under defaults, just like replication (see the sketch below).
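
A sketch of what that could look like, following the settings.topics.defaults.configs shape that appears in another issue further down this page (whether this works against Confluent Cloud is exactly what this issue is about):

settings:
  ccloud:
    enabled: true
  topics:
    defaults:
      replication: 1
      configs:
        cleanup.policy: delete
        retention.ms: 604800000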

Double quotes in passwords are not escaped

First of all, thank you for creating this wonderful piece of kit! It is ridiculously useful.

We're using SASL-SCRAM login on our Kafka, but looking at the code, this issue applies to all JAAS usernames and passwords.
Our passwords are randomly generated and occasionally contain double-quote characters ("). The config for the gitops user looks somewhat like this:

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-gitops-user" password="XXXXXXXXX"YYYYY";

Then when running `kafka-gitops plan` it gives this error:

[ERROR] Error thrown when creating Kafka admin client:
Value not specified for key 'YYYYY' in JAAS config

I had a look at the code, and it looks easy enough to fix. I'll submit a PR for this later tonight if that's OK with you.
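
For reference, a sketch of the escaped form such a fix would need to produce (assuming Kafka's JAAS property parser honors backslash escapes inside quoted values):

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-gitops-user" password="XXXXXXXXX\"YYYYY";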

Missing customUserAcls

I'm defining permissions for Schema Registry, but after applying, none of the permissions listed under customUserAcls are present in the cluster according to kafka-acls.sh --list.

This is the customUserAcls section of my state.yml file:

customUserAcls:
  everyone:
    read-heartbeat:
      name: heartbeat
      principal: User:*
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: READ
      permission: ALLOW

    describe-heartbeat:
      name: heartbeat
      principal: User:*
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW

  # https://docs.confluent.io/current/schema-registry/security/index.html#authorizing-access-to-the-schemas-topic
  schema-registry:
    describe-cluster:
      name: kafka-cluster
      principal: User:schema-registry
      type: CLUSTER
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW
    
    create-cluster:
      name: kafka-cluster
      principal: User:schema-registry
      type: CLUSTER
      pattern: LITERAL
      host: "*"
      operation: CREATE
      permission: ALLOW
    
    describe-schemas:
      name: _schemas
      principal: User:schema-registry
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW
      
    describeconfigs-schemas:
      name: _schemas
      principal: User:schema-registry
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE_CONFIGS
      permission: ALLOW

    describe-consumer-offsets:
      name: __consumer_offsets
      principal: User:schema-registry
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW

Any idea what's going on?

Q: Custom User ACLs

Hello,
In my state.yaml file, I have this block of sample code:

customUserAcls:
  my-test-user:
    read-all-kafka:
      name: data-
      type: TOPIC
      pattern: PREFIXED
      host: "*"
      operation: READ
      permission: ALLOW

In the output of the plan command, it doesn't show the ACL being added. Additionally, should I use an entry like read-all-kafka if the user only needs to read from one topic?
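
For a single topic, a LITERAL pattern with the exact topic name would be the usual shape; a sketch following the ACL format used elsewhere on this page (names are illustrative):

customUserAcls:
  my-test-user:
    read-one-topic:
      name: data-orders
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: READ
      permission: ALLOW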

thanks,

plan dies with a nullpointer exception

After setting KAFKA_SASL_JAAS_USERNAME, KAFKA_SASL_JAAS_PASSWORD, and KAFKA_BOOTSTRAP_SERVERS, running kafka-gitops validate as well as kafka-gitops plan results in:

java.lang.NullPointerException
	at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.handleAuthentication(KafkaGitopsConfigLoader.java:62)
	at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.setConfig(KafkaGitopsConfigLoader.java:41)
	at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.load(KafkaGitopsConfigLoader.java:18)
	at com.devshawn.kafka.gitops.StateManager.<init>(StateManager.java:63)
	at com.devshawn.kafka.gitops.cli.PlanCommand.call(PlanCommand.java:37)
	at com.devshawn.kafka.gitops.cli.PlanCommand.call(PlanCommand.java:19)
	at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
	at picocli.CommandLine.access$900(CommandLine.java:145)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
	at picocli.CommandLine.execute(CommandLine.java:1904)
	at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:76)

I tried it with both 0.2.14 and 0.2.15.

MSK IAM Authentication Support

Hi there!

This is such a fantastic project, and it's going to be super useful for our use case. I was just wondering: does the standard Docker container have MSK IAM authentication support?

Looking at the AWS documentation, you can see an extra class is required along with a few extra configuration options. Is this currently supported by kafka-gitops? If not, would it be as simple as placing the MSK class on the classpath within the container and setting the required properties?

Required properties:

security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
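
If it is not supported out of the box, a rough sketch of the classpath approach (hypothetical: the jar names and version are guesses, and the main class name is taken from the stack traces elsewhere on this page):

# fetch aws-msk-iam-auth from Maven Central, then launch the CLI with
# both jars on the classpath and the properties above in place
java -cp kafka-gitops.jar:aws-msk-iam-auth-1.1.1.jar \
  com.devshawn.kafka.gitops.MainCommand -f state.yaml plan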

Thanks in advance!

Build Documentation

Hey, I want to use this, but I need to update kafka-clients to 2.8.0 in the build path.

How can I rebuild the jar using Gradle? Or could we get a new release? :)
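
A minimal sketch, assuming the repository ships the standard Gradle wrapper (the exact task that produces the distributable jar may differ):

git clone https://github.com/devshawn/kafka-gitops.git
cd kafka-gitops
# bump the kafka-clients version in the Gradle build file, then:
./gradlew build
# built artifacts land under build/libs/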

Error thrown when attempting to list Kafka ACLs

Hello,

Congratulations, this is a very interesting tool!
Running kafka-gitops plan gives me the following error:

[screenshot of the error]

Do you know what I might be missing?

Thank you very much in advance!


Greetings.

Nice to have: import current state

Use case: running the tool in plan mode (with a minimal state file) against an existing Kafka cluster that already has some ACLs/topics configured currently shows all the ACLs/topics that would be removed. While it is already possible to copy/paste this output and do a bit of find/replace and other basic edits to arrive at a new state file that would keep these resources, it would be nice if:

  • either the output were copy/pasteable YAML

or

  • the command had an import mode to generate an initial state.yaml from the current ACL configuration and current topic setup.

I can imagine there are other potential users out there with a running Kafka setup full of historical, manual changes who would like to switch to a GitOps approach. In my case it was a rather fresh cluster, so replicating the current state manually into a state.yaml file was fairly easy.

I think the "output" proposal would be a rather quick fix?

Topic Configs are considered to be added in "Plan" even if they are present in the existing system.

Hello @devshawn ,

I have created a state.yaml file from the existing system.

  1. I have validated it successfully.
  2. When I run "plan", it still indicates the topic configs as "to be added".

Expectation: I should see zero ("0") updates.

Output JSON from the "plan" command:

"configs": {
  "compression.type": "producer",
  "message.downconversion.enable": "true",
  "min.insync.replicas": "1",
  "segment.jitter.ms": "0",
  "cleanup.policy": "compact",
  "flush.ms": "9223372036854775807",
  "segment.bytes": "1073741824",
  "retention.ms": "604800000",
  "flush.messages": "9223372036854775807",
  "message.format.version": "2.6-IV0",
  "max.compaction.lag.ms": "9223372036854775807",
  "file.delete.delay.ms": "60000",
  "max.message.bytes": "1048588",
  "min.compaction.lag.ms": "0",
  "message.timestamp.type": "CreateTime",
  "preallocate": "false",
  "min.cleanable.dirty.ratio": "0.5",
  "index.interval.bytes": "4096",
  "unclean.leader.election.enable": "false",
  "retention.bytes": "-1",
  "delete.retention.ms": "86400000",
  "segment.ms": "604800000",
  "message.timestamp.difference.max.ms": "9223372036854775807",
  "segment.index.bytes": "10485760"
}
},
"topicConfigPlans": [
  { "key": "compression.type", "value": "producer", "action": "ADD" },
  { "key": "message.format.version", "value": "2.6-IV0", "action": "ADD" },
  { "key": "max.compaction.lag.ms", "value": "9223372036854775807", "action": "ADD" },
  { "key": "file.delete.delay.ms", "value": "60000", "action": "ADD" },
  { "key": "max.message.bytes", "value": "1048588", "action": "ADD" },
  { "key": "min.compaction.lag.ms", "value": "0", "action": "ADD" },
  { "key": "message.timestamp.type", "value": "CreateTime", "action": "ADD" },
  { "key": "message.downconversion.enable", "value": "true", "action": "ADD" },
  { "key": "min.insync.replicas", "value": "1", "action": "ADD" },
  { "key": "segment.jitter.ms", "value": "0", "action": "ADD" },
  { "key": "preallocate", "value": "false", "action": "ADD" },
  { "key": "min.cleanable.dirty.ratio", "value": "0.5", "action": "ADD" },
  { "key": "index.interval.bytes", "value": "4096", "action": "ADD" },
  { "key": "unclean.leader.election.enable", "value": "false", "action": "ADD" },
  { "key": "retention.bytes", "value": "-1", "action": "ADD" },
  { "key": "delete.retention.ms", "value": "86400000", "action": "ADD" },
  { "key": "flush.ms", "value": "9223372036854775807", "action": "ADD" },
  { "key": "segment.bytes", "value": "1073741824", "action": "ADD" },
  { "key": "retention.ms", "value": "604800000", "action": "ADD" },
  { "key": "segment.ms", "value": "604800000", "action": "ADD" },
  { "key": "message.timestamp.difference.max.ms", "value": "9223372036854775807", "action": "ADD" },
  { "key": "flush.messages", "value": "9223372036854775807", "action": "ADD" },
  { "key": "segment.index.bytes", "value": "10485760", "action": "ADD" }
]
}
],
"aclPlans": []
}

Add support for increasing number of partitions in a topic

Given the following statefile:

#Orders
topics:
  orders:
    partitions: 5
    replication: 3

Desired state:
When I update partitions to 6, I expect the tool to increase the number of partitions.
Actual state:
Executing apply...
[SUCCESS] There are no necessary changes; the actual state matches the desired state.

Kafka Version - 2.1.1-cp1
Kafka-gitops version - 0.2.7

Allow 'customServiceAcls' to be placed into a 'services.yaml' file

Hey,

It'd be nice to be able to put customServiceAcls into services.yaml to keep things together.

Example:

settings:
  ccloud:
    enabled: true
  files:
    topics: topics.yaml
    services: services.yaml

Placing the customServiceAcls block into the services.yaml file doesn't work; the ACLs aren't read and have to be placed into the root state.yaml, which makes things messy and unorganized. I'd like to keep the custom ACLs related to my services next to where those services are defined.

Thanks

Is it possible to specify multiple operations for a single resource in ACLs?

When specifying entries for a service account with several operations on the same topic/group/resource, there is repetition. Could it be possible to allow multiple operations in a single entry?

customServiceAcls:
  dev_mgmt-test:
    d-cg-thg:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE,DESCRIBE_CONFIGS,READ
      permission: ALLOW
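
For contrast, a sketch of what this currently requires, one entry per operation, following the ACL format used elsewhere on this page (entry names are illustrative):

customServiceAcls:
  dev_mgmt-test:
    describe-thg:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW
    read-thg:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: READ
      permission: ALLOW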

Unable to import/match existing confluent cloud configuration in yaml

Environment: Confluent Cloud.
I was trying to create state YAML files so that the tool would not delete the existing configuration:
-state.yaml-

settings:
  ccloud:
    enabled: true
  topics:
    defaults:
      replication: 1
  files:
    topics: topics.yaml
    services: services.yaml

customServiceAcls:
  app-test:
    r-t-thg:
      name: temp.test.hello-kafka
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: READ
      permission: ALLOW

-topics.yaml-

topics:
  temp.test.hello-kafka:
    partitions: 3

-services.yaml-

services:
  app-test:
    type: application

After passing validation, executing plan always shows the existing ACL/permission being removed. I assumed the service account should exist under "services:" and that the key under "customServiceAcls:" should be the service account name, as shown above, but it's still not being picked up. I also tried defining the principal of the ACL entry to match the User:{id} of "app-test", but the result is the same.

Please help me with an example of importing an existing configuration.

Project status?

Dear all,

@devshawn, I would just like some information about the project status, as we have not heard from you for nearly two months now.

I think you have done a really good job, and kafka-gitops seems to be interesting to many people, including me. I personally integrated it into my company's processes to manage our Kafka clusters, and I would really like to see features like schema management integrated. Of course, I will collaborate as I already did for #52.

Wouldn't this be a good time to open up maintainership to more than one person? I can perfectly understand that maintaining this kind of project alone is really time-consuming, but if a small group could be built, I'm pretty sure everything would be easier.

What do you think?

Best,
Jerome

Nullpointer exception thrown on empty topic properties.

Hi, I tried running this via Docker (through docker-compose) and received a NullPointerException without explanation. I don't really know whether this should be seen as a bug or as a missing error message. Basically, if I just fill in the name of a topic and rely on the defaults for everything else, I get a NullPointerException.

Command:

$ docker-compose -f kafka-gitops.yml up
Recreating kafka-gitops ... done
Attaching to kafka-gitops
kafka-gitops    | 14:32:51.731 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=localhost:9092, client.id=kafka-gitops}
kafka-gitops    | 14:32:51.735 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
kafka-gitops    | java.lang.NullPointerException
kafka-gitops    |       at com.devshawn.kafka.gitops.service.ParserService.parseStateFile(ParserService.java:84)
kafka-gitops    |       at com.devshawn.kafka.gitops.service.ParserService.parseStateFile(ParserService.java:42)
kafka-gitops    |       at com.devshawn.kafka.gitops.StateManager.getAndValidateStateFile(StateManager.java:72)
kafka-gitops    |       at com.devshawn.kafka.gitops.cli.ValidateCommand.call(ValidateCommand.java:26)
kafka-gitops    |       at com.devshawn.kafka.gitops.cli.ValidateCommand.call(ValidateCommand.java:15)
kafka-gitops    |       at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
kafka-gitops    |       at picocli.CommandLine.access$900(CommandLine.java:145)
kafka-gitops    |       at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
kafka-gitops    |       at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
kafka-gitops    |       at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
kafka-gitops    |       at picocli.CommandLine.execute(CommandLine.java:1904)
kafka-gitops    |       at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:69)
kafka-gitops exited with code 1

docker-compose:

version: '3'

services:
  kafka-gitops:
    image: devshawn/kafka-gitops
    container_name: kafka-gitops
    environment:
      - KAFKA_BOOTSTRAP_SERVERS=localhost:9092
    entrypoint: kafka-gitops -v -f /state-test.yml validate
    volumes:
    - ./state-test.yml:/state-test.yml

my desired state:

topics:
  test:

settings:
  topics:
    defaults:
      partitions: 1
      replication: 1
      configs:
        cleanup.policy: delete
        retention.ms: -1
    blacklist:
      prefixed:
        - _confluent
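
An untested guess: since a bare topic key parses as YAML null rather than an empty map, spelling the topic as an explicit empty mapping might sidestep the null:

topics:
  test: {}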

How to use with shared environments?

Hi,

We're using Kafka as a shared cluster, and as such we have multiple projects with multiple teams working in parallel. We plan on having one repo per project/system, each with its own state and plans, but during our initial testing we realised that one project wipes the config of another project, since we don't have an overall state file.

Is that the intended purpose?
Is there a way to prevent that?

Brew warning about use of deprecated bottle call

I just installed kafka-gitops via brew, and I received the following output:

Warning: Calling bottle :unneeded is deprecated! There is no replacement.
Please report this issue to the devshawn/kafka-gitops tap (not Homebrew/brew or Homebrew/core):
  /usr/local/Homebrew/Library/Taps/devshawn/homebrew-kafka-gitops/kafka-gitops.rb:8

The installation was successful, but the warning was printed to the console 3 times.

State file topic configs

Hi there,

I've added some topic configs to my state file, and when running a plan, it's trying to add configs that already exist.

Context:
The topic is pre-existing and was created using the ccloud CLI tool. Some configs were passed at topic-creation time using --config. I'm looking to 'import' the topic into the state file.

Examples:

State.yaml

topics:
  test_partitions:
    partitions: 128
    replication: 3
    configs:
      cleanup.policy: delete
      retention.ms: 604800000

ccloud kafka topic describe (removed a bunch of output)

Configuration

  Name             | Value
  -----------------+-----------
  cleanup.policy   | delete
  compression.type | producer
  retention.ms     | 604800000
  ...

Plan:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update

The following actions will be performed:

Topics: 0 to create, 1 to update, 0 to delete.

~ [TOPIC] test_partitions
	~ configs:
		+ cleanup.policy: delete
		+ retention.ms: 604800000

ACLs: 0 to create, 0 to update, 0 to delete.

Plan: 0 to create, 1 to update, 0 to delete.

My assumption (expected outcome) was that the plan would say "There are no necessary changes; the actual state matches the desired state." and not need to modify anything.
If I remove the additional configs from the state file, it comes back as no necessary changes. Since we created these topics with specific configs, I'd like the state file to show those configs.

Thanks

kafka-gitops fails during account creation when ccloud uses SSO

Currently I am planning on using this tool in an environment where authentication with ccloud is done through SSO. Our subscription is activated using Azure AD as our identity source; as such, login with XX_CCLOUD_EMAIL and XX_CCLOUD_PASSWORD is not possible. It is worth noting that I am currently using ccloud clusters that have internet broker endpoints.

To try to circumvent this, I went through the whole sign-in process using ccloud login --no-browser --save. This validates my flow and eventually results in a stored, valid login. From here I can use ccloud to create topics and service accounts.

After setting the appropriate variables in my bash session, I notice that account creation fails while suggesting that I am not logged into ccloud. Logs below:

kafka-gitops --verbose -f sit-state.yaml account
Creating service accounts...

08:54:28.915 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {zookeeper.connect=, bootstrap.servers=<REDACTED>.azure.confluent.cloud:9092, advertised.listeners=, client.id=kafka-gitops}
08:54:29.011 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Using ccloud executable at: ccloud
08:54:29.013 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
08:54:32.615 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Fetching service account list from Confluent Cloud via ccloud tool.
08:54:35.114 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - No content to map due to end-of-input
 at [Source: (String)""; line: 1, column: 0]
[ERROR] There was an error listing Confluent Cloud service accounts. Are you logged in?

I also validated that through ccloud I am able to list the service accounts, currently empty for a new cluster:

ccloud service-account list
  Id | Name | Description
+----+------+-------------+

support PREFIXED consumer groups

Are there any plans to support PREFIXED consumer groups (service type: application)?

For now I implemented this in my fork, though as I'm not familiar with the code base and only need this for the application service type, I didn't touch files related to other service types.
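
In the meantime, a sketch of expressing a prefixed consumer-group ACL by hand via customServiceAcls, following the ACL format used elsewhere on this page (names are illustrative):

customServiceAcls:
  my-app:
    read-prefixed-groups:
      name: my-app.
      type: GROUP
      pattern: PREFIXED
      host: "*"
      operation: READ
      permission: ALLOW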

How to setup kafka-gitops with Kafka cluster running in AWS EKS?

Hi!

Is there any way to set up kafka-gitops to work against a Kafka cluster inside an AWS EKS cluster?
Right now I am creating a Docker image with the Kafka broker plus kafka-gitops and running it on EKS, but that means that every time I want to execute a kafka-gitops operation I need to connect inside the EKS cluster.
Is there any option to run kafka-gitops from outside EKS?

Thanks so much!

Allow specifying a default replication factor

The replication factor of topics within a cluster is often static (e.g. almost all topics are set to 3).

We should support setting a default and making the replication field optional.
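
A sketch of the proposed syntax, matching the settings.topics.defaults block that appears in other issues on this page:

settings:
  topics:
    defaults:
      replication: 3

topics:
  orders:
    partitions: 5
    # replication omitted; falls back to the default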

Feature request: managing schemas

Hi,

Right now there is nothing like this for schema management. It might be useful to also allow declaring which topics/subjects use which schemas:

config:
  #Where the schemas reside
  schemaDir: /tmp/output_dir/
  schema:
    registry:
      url: http://localhost:8081
#     username: test
#     password: test
schemas:
  - relativeLocation: Personnel.json
    # This is equals to confluent references:  https://docs.confluent.io/platform/current/schema-registry/develop/api.html#post--subjects-(string-%20subject)-versions
    references:
      - name: com.person.json
        subject: person
        version: -1
    # If left blank, it will auto submit to personnel-value
    subjects:
      - personnel-raw-value
      - personnel-refined-value
      - personnel-x-value

This would allow the topic state and the schema state to be managed by one tool/file.
java -jar kafka-schema-gitops-1.0-SNAPSHOT-jar-with-dependencies.jar -i <input_yaml> validate # or execute

Idempotency issue

For some topics we are facing an idempotency issue: the topic is created on the first run, but during the second run we get the error below.

I am not able to reproduce this issue on other Kafka clusters.

./kafka-gitops -v -f new.yml --no-delete apply
Executing apply...

03:39:06.846 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=localhost:9092, client.id=kafka-gitops}
03:39:06.853 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
03:39:07.606 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic test-OWN-0 does not exist; it will be created.
Applying: [CREATE]

+ [TOPIC] test-OWN-0
	+ partitions: 6
	+ replication: 1
	+ configs:
		+ cleanup.policy: compact

[ERROR] Error thrown when attempting to create a Kafka topic:
org.apache.kafka.common.errors.TopicExistsException: Topic 'test-OWN-0' already exists.

[ERROR] An error has occurred during the apply process.
[ERROR] The apply process has stopped in place. There is no rollback.
[ERROR] Fix the error, re-create a plan, and apply the new plan to continue.

Environment-specific state?

I'm using kafka-gitops to manage access across multiple clusters in different environments, and I'm finding it would be nice to have a way to deploy some state to certain environments and not others. For example, my testing/user-dev cluster might have topics for in-development projects, or extra topics for testing different application configurations, that I would never want to reach prod but that I still want to manage via desired-state configuration.

Some ideas:

  1. Use a tool like jq/yq to merge the environment-specific state file with the global one before passing it to kafka-gitops (see the sketch after this list). This leaves open the question of how exactly to merge and deal with conflicts.
  2. Change kafka-gitops to accept multiple state files and merge them itself before validating/planning/applying. This moves the problem of conflicts to this project; I would totally understand if you didn't want to take on that complexity at this time.
  3. Use two separate state files and execute apply twice, using prefixes on the environment-specific state and settings.topics.blacklist.prefixed to prevent the two from clashing.
  4. kafka-gitops adds an option/setting to prefix all of its changes with some string, accomplishing basically the same as 3, but built in. (An analogy might be Docker stack deployments prefixing all containers/networks/secrets/etc. with the name of the stack.)
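
A minimal sketch of idea 1, assuming mikefarah's yq v4 (the merge expression is yq's documented multi-file deep-merge idiom; file names are illustrative):

# deep-merge the global state with an environment overlay; later files win on conflicts
yq eval-all '. as $item ireduce ({}; . * $item)' state.yaml state.dev.yaml > merged.yaml
kafka-gitops -f merged.yaml plan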

Some dissenting thoughts:

  1. Don't. Maintain separate state files for each environment. This is not my preference, because then the main state file is not being tested in the lower-level environments, and it just doesn't feel very devops-y 😛
  2. Use a single state file, but maintain environment differences using separate git branches. I'm afraid this would devolve into a mess of branches, custom merges, cherry-picks, etc. But maybe it would work better than I'm imagining.
  3. Don't use multiple clusters / don't have inconsistent configuration between environments / this is a terrible idea. Fair enough 😄, but why?

What do you think?

question about state.yaml file

In my state.yaml file, I have the following:

settings:
  files:
    topics: topics.yaml
    services: services.yaml
    users: users.yaml
  topics:
    blacklist:
      prefixed:
        - __amazon

I have the topics.yaml, services.yaml, and users.yaml files in the same location as the state.yaml file. When I run the validate and plan commands, it ignores the users.yaml file and attempts to delete all my ACLs. If I put the users block directly into the state.yaml file, it recognizes all the ACLs. Am I doing something wrong with the users.yaml file?

I'm using the latest kafka-gitops binary. Below is the content of my users.yaml file:

users:
  scramadmin:
    principal: User:scramadmin

customUserAcls:
  scramadmin:
    admin-topic:
      name: ""
      type: TOPIC
      pattern: LITERAL
      host: "*"
      operation: ALL
      permission: ALLOW

thanks,

Support for templating?

We run a multi-tenant system. In general, environments are the same except for the name of the tenant. To keep hosting overhead down, we'd like all our tenants to share one Kafka cluster, keeping the tenants apart using ACLs.

To do this now with kafka-gitops, we would need a lot of repetition in our state file, and thus a lot of messy copy-paste adapting every time we connect a new tenant to our system, or when we want to introduce a new application service for all our tenants.

In Terraform you can have variables and can generate resources for each item in a list. We've set it up so that we can just add a tenant to a list, and from there Terraform creates all the resources needed for that tenant.
Are there any plans to add this kind of templating to kafka-gitops as well?

Maybe even just a way to share a custom ACL between multiple services or users without having to repeat the whole ACL for each and every one. That would already make the state file a whole lot more concise for us.

Blocker: customUserAcls - plan command marks the action for every entry in the state.yaml as "REMOVE".

I have generated a current state.yaml for my existing cluster.
I changed the permission for a couple of users to "DENY", so I was expecting the "plan" command to show me that many "updates"; instead, it includes those records in the plan JSON file and marks everything else for "REMOVE".

Kindly advise at the earliest.

This is currently blocking me from deploying onto our server, so we can include this in our process permanently.

Thanks,
Jay.

Improve documentation on --no-delete flag

As many users already have a cluster with topics/ACLs defined, we should provide better documentation for using kafka-gitops against an existing Kafka cluster. This includes improving the documentation around the --no-delete flag and topic blacklist setting.

Feature request: possibility to manage topics only

Hi,
It's a great tool.
It would be nice to have the possibility to manage only Kafka topics, without ACLs.

When I try to plan a config file with topic configuration, I get this error:

[ERROR] Error thrown when attempting to list Kafka ACLs:
org.apache.kafka.common.errors.SecurityDisabledException: No Authorizer is configured on the broker
[ERROR] An error has occurred during the planning process. No plan was created.

Feature: get connection config from file

Reading Kafka connection config from environment variables is alright when using Docker.

But when running kafka-gitops without Docker (which makes me sad), setting environment variables may reveal passwords to anyone able to read /proc/<pid>/environ.

Being able to provide an admin.properties file containing Admin client settings would be very useful.
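
A sketch of what such a file might contain; these are standard Kafka AdminClient properties, while the way kafka-gitops would be pointed at the file is the proposal itself:

# admin.properties (hypothetical)
bootstrap.servers=broker1:9093
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="kafka-gitops" \
  password="secret";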

Question: users and customUserAcls config

As far as I understand, users and ACLs are in two separate sections:

users:
  my-test-user:
    principal: User:my-test-user
customUserAcls:
  my-test-user:
    read-all-kafka:
      name: kafka.
      type: TOPIC
      pattern: PREFIXED
      host: "*"
      operation: READ
      permission: ALLOW

Why are they separate? What about:

users:
  my-test-user:
    principal: User:my-test-user
    acls:
      read-all-kafka:
        name: kafka.
        type: TOPIC
        pattern: PREFIXED
        host: "*"
        operation: READ
        permission: ALLOW

Or, to be able to share ACL groups among several users (some kind of RBAC):

users:
  my-test-user:
    principal: User:my-test-user
    roles:
      - my-test-role
  my-other-user:
    principal: User:my-other-user
    roles:
      - my-test-role
customRoles:
  my-test-role:
    read-all-kafka:
      name: kafka.
      type: TOPIC
      pattern: PREFIXED
      host: "*"
      operation: READ
      permission: ALLOW

Executing Apply when there are no changes should exit success, not throw PlanIsUpToDateException

I created a new release to apply to a new environment after a previous release successfully applied to the dev environment. But the deploy/apply to dev (which had no changes) failed with:

2020-08-13T22:37:28.9431787Z Executing apply...
2020-08-13T22:37:28.9436938Z 
2020-08-13T22:37:30.2278932Z com.devshawn.kafka.gitops.exception.PlanIsUpToDateException: The current desired state file matches the actual state of the cluster.
2020-08-13T22:37:30.2285551Z 	at com.devshawn.kafka.gitops.manager.PlanManager.validatePlanHasChanges(PlanManager.java:179)
2020-08-13T22:37:30.2287816Z 	at com.devshawn.kafka.gitops.StateManager.apply(StateManager.java:93)
2020-08-13T22:37:30.2291777Z 	at com.devshawn.kafka.gitops.cli.ApplyCommand.call(ApplyCommand.java:35)
2020-08-13T22:37:30.2292939Z 	at com.devshawn.kafka.gitops.cli.ApplyCommand.call(ApplyCommand.java:19)
2020-08-13T22:37:30.2294005Z 	at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
2020-08-13T22:37:30.2295589Z 	at picocli.CommandLine.access$900(CommandLine.java:145)
2020-08-13T22:37:30.2296457Z 	at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
2020-08-13T22:37:30.2297266Z 	at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
2020-08-13T22:37:30.2298094Z 	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
2020-08-13T22:37:30.2298922Z 	at picocli.CommandLine.execute(CommandLine.java:1904)
2020-08-13T22:37:30.2299804Z 	at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:69)
2020-08-13T22:37:30.6357738Z 
2020-08-13T22:37:30.6415279Z ##[error]Bash exited with code '1'.

Given that this tool is intended to be used with automation to apply desired-state configuration, if no changes are required to make the current state match the desired state, that should be reported as a success to the controlling automation framework.

I can imagine why one might want to fail if there were no changes and changes were expected, but IMO that should be an optional flag, not the default.

Validate should also check enum variant spelling

Hi!

While using apply, I found that I had misspelled the resource pattern field as PREFIX when it should have been PREFIXED, and got this error:

java.lang.IllegalArgumentException: No enum constant org.apache.kafka.common.resource.PatternType.PREFIX
...

I did run the validate command beforehand, but it did not catch this error. I think validate should check enum fields to make sure they are valid before declaring that the state file is ready.

confluent cloud related values and dry run view command possibility

  1. When we use the Confluent Cloud CLI (ccloud) to create a topic, it does not require replication; the tool, however, gives an error if replication is not specified in the YAML.
  2. For customServiceAcls entries, host: is required in each, while I'm not sure whether it is of any use for Confluent Cloud.
  3. Is it possible to see the commands that will be executed in dry-run/plan mode, or even when the actual execution takes place? It would be really useful to see how the cluster is being changed.

plan error

I am trying to set up kafka-gitops for my project. We have four Kafka brokers. Once I set the environment variable export KAFKA_BOOTSTRAP_SERVERS=xxxx.xxxx.xxx.xxx.example.org:9092 on my jump box and execute the plan, I get the error below.

kafka-gitops -v plan

Generating execution plan...

11:03:51.023 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=xxxx.xxxx.xxx.xxx.example.org:9092, client.id=kafka-gitops}
11:03:51.027 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Using ccloud executable at: ccloud
11:03:51.028 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
[ERROR] Error thrown when attempting to list Kafka topics:
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

[ERROR] An error has occurred during the planning process. No plan was created.

Documentation around usage

I feel the documentation is a little light on the usage of kafka-gitops.
Can someone help answer the following questions?

  1. Is it possible to increase the number of partitions on an existing topic? I tried it, and no changes were detected.

  2. Is it possible to decrease the number of partitions safely? what would happen if the topic contained records?

  3. Is it possible to rename a topic? What is the default behaviour here? Seems like a dangerous operation since it deletes the old topic and creates the new one.

kafka-gitops user permissions

Hi,

I'm currently looking at locking down the cluster as much as possible, and as such I was wondering: what would the minimal permissions for the kafka-gitops user be?

Could we have an example of that in the docs?

Feature request: Possibility to manage ACLs only

This is a lot like #51, but for a different segment of configuration management.

I'm in a situation where it would be nice to manage only ACLs and not topics. We have a handful of tools that generate topics as needed (ksqlDB, Flink, some in-house applications), and we're in the middle of setting up ACLs for some of our other topics.

The problem is, any time we leave these topics out of our state file, kafka-gitops tries to delete them.

I saw that #54 got merged, and I wrote #80, which basically copies that approach for --skip-topics. I'm wondering if I should instead investigate the possibility of automatically skipping plan segments when their keys are left out of the state file.

For instance, the following will delete all topics:

topics: []
services:
  example-service:
    type: application
    produces:
      - example-topic

But this one will leave any existing topics alone:

services:
  example-service:
    type: application
    produces:
      - example-topic
