
jcustenborder / kafka-connect-spooldir

159 stars · 11 watchers · 123 forks · 487 KB

Kafka Connect connector for reading CSV files into Kafka.

License: Apache License 2.0

Languages: Java 99.14%, Shell 0.86%
Topics: kafka-connect, csv, json

kafka-connect-spooldir's Issues

Does it support multiple files in the directory

I am raising this as an issue because I did not find a comments section.

Does it have the capability to read multiple different files into different Kafka topics, or is it always going to be one file in the directory? Also, can I use a shared path URI as the input.path property value?
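
One common approach (a sketch, not a confirmed answer from the maintainer): run one connector instance per file pattern, each routing its matches to its own topic. The paths, patterns, and topic names below are illustrative only, and it is assumed the patterns do not overlap.

{
  "name": "spooldir-orders",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/spool",
    "input.file.pattern": "^orders-.*\\.csv$",
    "finished.path": "/data/finished",
    "error.path": "/data/error",
    "topic": "orders",
    "csv.first.row.as.header": "true"
  }
}

{
  "name": "spooldir-customers",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/spool",
    "input.file.pattern": "^customers-.*\\.csv$",
    "finished.path": "/data/finished",
    "error.path": "/data/error",
    "topic": "customers",
    "csv.first.row.as.header": "true"
  }
}

Each config is posted to the Connect REST API as a separate connector.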

Support a configurable cleanup policy.

The connector currently moves files that were read successfully to a different folder. There are many use cases where we would prefer to delete the file, and others where we would want to leave the files in place. Add support for None, MoveToFinished, or Delete.
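
For illustration, such an option might be exposed as a single configuration key; the key name cleanup.policy and its values below mirror the proposal above and are hypothetical, not an existing setting documented in this issue.

{
  "name": "spooldir-csv-delete",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/spool",
    "input.file.pattern": "^.*\\.csv$",
    "error.path": "/data/error",
    "topic": "example-topic",
    "cleanup.policy": "Delete"
  }
}

With MoveToFinished the existing finished.path would still be needed; with None or Delete it presumably could be omitted.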

java.lang.NoClassDefFoundError: org/apache/kafka/connect/header/Header

Hello, when I start the connector I get the following error:

java.lang.NoClassDefFoundError: org/apache/kafka/connect/header/Header
at com.github.jcustenborder.kafka.connect.utils.jackson.HeaderSerializationModule.<init>(HeaderSerializationModule.java:37)
at com.github.jcustenborder.kafka.connect.utils.jackson.ObjectMapperFactory.<clinit>(ObjectMapperFactory.java:36)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnectorConfig.readSchema(SpoolDirSourceConnectorConfig.java:550)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnectorConfig.<init>(SpoolDirSourceConnectorConfig.java:188)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnectorConfig.<init>(SpoolDirCsvSourceConnectorConfig.java:113)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector.config(SpoolDirCsvSourceConnector.java:31)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector.config(SpoolDirCsvSourceConnector.java:25)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnector.start(SpoolDirSourceConnector.java:56)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:108)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:133)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:192)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:211)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:894)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:108)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:910)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:906)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.connect.header.Header
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Any ideas what I am missing?

kafka-run-class: command not found

Running

kafka-run-class com.github.jcustenborder.kafka.connect.spooldir.SchemaGenerator -t csv -f src/test/resources/com/github/jcustenborder/kafka/connect/spooldir/csv/FieldsMatch.data -c config/CSVExample.properties -i id

fails and gives the error:
kafka-run-class: command not found

I'm using Kafka version 1.1.0.

export CLASSPATH="$(find target/kafka-connect-target/usr/share/java -type f -name '*.jar' | tr '\n' ':')" had also failed; the correct path for this step to work was target/kafka-connect-target/usr/share/kafka-connect/kafka-connect-spooldir.

Fix the connector's validate method

The validate method on a Kafka connector should validate the provided settings and produce a configuration with error messages about any invalid settings. The current implementation contains the following problems:

  1. Empty settings will cause a NullPointerException for undefined paths to be thrown in the WritableDirectoryValidator.
  2. The WritableDirectoryValidator will generate stack traces instead of error messages in the returned configuration for invalid settings.
  3. Even if a given set of settings passes validation, the connector configuration might still fail with an invalid configuration/precondition check. Ideally, settings validation should ensure that the settings can be used to configure the connector safely.

Maven BUILD FAILURE due to test failures

Hi, when I run "mvn clean package" I get this error. Could you please take a look?
Thank you very much

Tests run: 16, Failures: 0, Errors: 2, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:21 min
[INFO] Finished at: 2018-04-13T14:13:15+02:00
[INFO] Final Memory: 41M/650M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on project kafka-connect-spooldir: There are test failures.
[ERROR]
[ERROR] Please refer to C:\work\big_data\git\kafka-connect-spooldir\target\surefire-reports for the individual test results.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on project kafka-connect-spooldir: There are test failures.

Please refer to C:\work\big_data\git\kafka-connect-spooldir\target\surefire-reports for the individual test results.
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:51)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:309)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:194)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:107)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:955)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures.

Please refer to C:\work\big_data\git\kafka-connect-spooldir\target\surefire-reports for the individual test results.
    at org.apache.maven.plugin.surefire.SurefireHelper.reportExecution (SurefireHelper.java:91)
    at org.apache.maven.plugin.surefire.SurefirePlugin.handleSummary (SurefirePlugin.java:320)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked (AbstractSurefireMojo.java:892)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute (AbstractSurefireMojo.java:755)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:134)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:51)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:309)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:194)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:107)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:955)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

Test case is failing

Followed these steps:

cd ~/source
git clone git@github.com:jcustenborder/connect-utils.git
cd connect-utils
mvn clean install

Got this:

Failed tests: timestampTests(io.confluent.kafka.connect.utils.data.type.JsonNodeTest): expected:<1451275679192> but was:<1451282879192>

Now trying with:

mvn install -DskipTests

Record columns count swapped with schemaConfig columns count

I believe that the values for the record columns count and the schemaConfig columns count have been reversed!

  Preconditions.checkState(this.valueParserConfig.mappings.size() == record.length, "Record has %s columns but schemaConfig has %s columns.",
      this.valueParserConfig.mappings.size(),
      record.length
  );

Found this bug while checking to see if a schema that has additional columns can be used to parse a CSV file that is missing some of these additional columns.

Starting Connect failed with class not found

Java 1.8, Confluent 3.0.0

ERROR Error while starting connector CsvSpoolDir (org.apache.kafka.connect.runtime.WorkerConnector:109)
java.lang.NoClassDefFoundError: com/opencsv/enums/CSVReaderNullFieldIndicator
at io.confluent.kafka.connect.source.SpoolDirectoryConfig.<init>(SpoolDirectoryConfig.java:124)
at io.confluent.kafka.connect.source.SpoolDirectoryConnector.start(SpoolDirectoryConnector.java:40)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:101)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:126)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:183)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:178)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:250)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:164)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:94)
Caused by: java.lang.ClassNotFoundException: com.opencsv.enums.CSVReaderNullFieldIndicator
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 9 more

Incorrect offset handling of reprocessed JSON files

When reprocessing a JSON file (e.g. because of an error), the offset of the last processed record is looked up in Kafka in order to skip already-processed lines. But there is a bug in the line-skipping code: one more line should be skipped.

I believe there should be a <= comparison (offset numbering is zero-based).

You can try it with this config:

{
  "name": "test",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnector",
    "tasks.max": "1",
    "topic": "test",
    "file": "/tmp/input/test.json",
    "schema.generation.enabled": "true",
    "schema.generation.key.name": "test-key",
    "schema.generation.value.name": "test-value",
    "finished.path": "/tmp/finished",
    "input.file.pattern": "test.json",
    "error.path": "/tmp/error",
    "input.path": "/tmp/input"
  }
}

Take any file, e.g.

{"f1": "value1"}
{"f1": "value2"}
{"f1": "value3"}

and load it into the input directory. Then load it again with the same name. Check the Kafka topic; the last line will appear twice.

This can cause problems when reprocessing a repaired file that was previously routed to the error directory.

CSV reprocessing works fine.

Namespace in Schema

Hi there,

we are using this connector to upload several CSVs to Kafka as Avro data.
When the connector initially consumes a file, it generates the schema and creates it in the Schema Registry.
The problem is that all schemas have the same namespace, "com.github.jcustenborder.kafka.connect.model".

We consume the messages from multiple topics in one Java application and we have problems dealing with the same namespace here.

Is there a way to deal with that? It would be nice to have an option to set a given namespace for every connector.

Kind regards
-Sergei
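
One possible workaround for the namespace question above (a sketch, assuming schema generation is enabled and that the Avro converter derives the record name and namespace from the generated Connect schema name): give each connector its own fully qualified key and value schema names via schema.generation.key.name and schema.generation.value.name, settings that appear in other configurations in this thread. The paths and topic below are illustrative.

{
  "name": "csv-orders",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/orders/input",
    "input.file.pattern": "^.*\\.csv$",
    "finished.path": "/data/orders/finished",
    "error.path": "/data/orders/error",
    "topic": "orders",
    "csv.first.row.as.header": "true",
    "schema.generation.enabled": "true",
    "schema.generation.key.name": "com.example.orders.Key",
    "schema.generation.value.name": "com.example.orders.Value"
  }
}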

Polling time of the directory is too long

Hello Mr. Custenborder,
I think that the polling time of the directory is too long; can this time be reduced?
I have the impression that polling is done every second?
Thanks a lot

import org.apache.kafka.connect.header.Header Compile issue.

Kafka version is 0.10.0
There is a compile error: Header cannot be resolved to a type. I see that this particular class is not available in the Kafka version I am using. Can you please tell me if this can be resolved without upgrading Kafka?

Failed to Find Class 500 Error

I am trying to add this plugin connector to my current instance of Kafka Connect. I'm using OpenShift and am running Kafka Connect using the Docker image below (it's basically an older version of confluentinc/cp-docker-images, except it has the jar files from the previously mentioned GitHub repository and the HDFS connector jar is upgraded):

https://hub.docker.com/r/chenchik/custom-connect-hdfs/

I run Kafka Connect with a huge command that sets a ton of environment variables. From what I understand, the plugins should go in /etc/kafka-connect/jars; once they're in there, they should work.

In order to install this plugin into my Kafka Connect instance, I cloned this repository from GitHub and ran:

mvn clean package

Then I took all the files in /target and copied them into the /etc/kafka-connect/jars directory of my container running Kafka Connect. I didn't change any environment variables after that.

When I try to activate the connector using the REST API, I issue a POST request with this payload:

{
    "name": "csv-json-1",
    "config": {
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "tasks.max": "1",
        "finished.path": "/csv-json/results/",
        "input.file.pattern": ".*csv",
        "error.path": "/csv-json/errors/",
        "topic": "danila-csv-json",
        "input.path": "/csv-json/input/",
        "key.schema": "com.github.jcustenborder.kafka.connect.spooldir.CsvSchemaGenerator",
        "value.schema": "com.github.jcustenborder.kafka.connect.spooldir.CsvSchemaGenerator"
    }
}

The response I get every time is:

{
"error_code": 500,
"message": "Failed to find any class that implements Connector and which name matches com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector, available connectors are: org.apache.kafka.connect.source.SourceConnector, org.apache.kafka.connect.tools.MockSourceConnector, org.apache.kafka.connect.file.FileStreamSinkConnector, io.confluent.connect.hdfs.tools.SchemaSourceConnector, org.apache.kafka.connect.tools.VerifiableSourceConnector, io.confluent.connect.s3.S3SinkConnector, org.apache.kafka.connect.file.FileStreamSourceConnector, org.apache.kafka.connect.tools.VerifiableSinkConnector, io.confluent.connect.jdbc.JdbcSinkConnector, io.confluent.connect.jdbc.JdbcSourceConnector, io.confluent.connect.elasticsearch.ElasticsearchSinkConnector, org.apache.kafka.connect.sink.SinkConnector, io.confluent.connect.storage.tools.SchemaSourceConnector, org.apache.kafka.connect.tools.MockConnector, org.apache.kafka.connect.tools.MockSinkConnector, org.apache.kafka.connect.tools.SchemaSourceConnector, io.confluent.connect.hdfs.HdfsSinkConnector"
}

If I issue a GET request to /connector-plugins, it is not listed.

I also cannot seem to find any logs inside the container that explain what's going on. The only kind of log message I get is from the log the container provides to OpenShift. This is the only entry that pops up:

[2017-08-01 22:22:35,635] INFO 172.17.0.1 - - [01/Aug/2017:22:22:15 +0000] "POST /connectors HTTP/1.1" 500 1081 20544 (org.apache.kafka.connect.runtime.rest.RestServer)

Any idea on what I can do to resolve this issue?

Build Error

Hi,

My environment is as below --

OS: RHEL 6.6
Maven: 3.5.4
Kafka: Confluent 3.2.1
Java: 1.8

When I run 'mvn clean package' I get a lot of java.lang.NoClassDefFoundError errors and finally the output below:

Failed tests:
SpoolDirCsvSourceConnectorTest.startWithoutSchemaMismatch:81 Unexpected exception type thrown ==> expected: <org.apache.kafka.connect.errors.DataException> but was: <java.lang.NoClassDefFoundError>
SpoolDirJsonSourceConnectorTest.startWithoutSchemaMismatch:80 Unexpected exception type thrown ==> expected: <org.apache.kafka.connect.errors.DataException> but was: <java.lang.NoClassDefFoundError>
Tests in error:
CsvSchemaGeneratorTest.foo:35 NoClassDefFound com.github.jcustenborder.kafka.c...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
BaseDocumentationTest.before:132->lambda$before$4:143 » NoClassDefFound Could ...
JsonSchemaGeneratorTest.schema:34 NoClassDefFound Could not initialize class c...
SpoolDirCsvSourceConnectorTest.startWithoutSchema:57 » NoClassDefFound com.git...
SpoolDirCsvSourceTaskTest>SpoolDirSourceTaskTest.configureIndent:72 » NoClassDefFound
SpoolDirSourceTaskTest.configureIndent:72 » NoClassDefFound org/apache/kafka/c...
SpoolDirJsonSourceConnectorTest.startWithoutSchema:56 » NoClassDefFound Could ...
SpoolDirJsonSourceTaskTest>SpoolDirSourceTaskTest.configureIndent:72 » NoClassDefFound
SpoolDirSourceTaskTest.configureIndent:72 NoClassDefFound Could not initialize...

Tests run: 46, Failures: 2, Errors: 21, Skipped: 0

Am I missing something? Do I need to set an explicit CLASSPATH before I build?

Thanks.

Cannot parse date

I am using v1.0.24 of the library with Kafka Connect 4.0.0.

I am trying to parse a CSV with values:

1,2018-03-12,2018-03-12,1200,A,B,2018-04-30T22:00:51.000Z

and schema:

{"name":"Test","type":"STRUCT","isOptional":false,"fieldSchemas":{"id":{"type":"INT64","isOptional":false},"created":{"name" : "org.apache.kafka.connect.data.Date","type" : "INT32","version" : 1,"isOptional" : true},"updated":{"name" : "org.apache.kafka.connect.data.Date","type" : "INT32","version" : 1,"isOptional" : true},"logins":{"type":"INT64","isOptional":true},"category":{"type":"STRING","isOptional":true},"subcategory":{"type":"STRING","isOptional":true},"last_login":{"name" : "org.apache.kafka.connect.data.Timestamp","type" : "INT64","version" : 1,"isOptional" : true}}}

but I am getting the following error:

org.apache.kafka.connect.errors.DataException: Exception thrown while parsing data for 'created'. linenumber=1
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:126)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.read(SpoolDirSourceTask.java:293)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.poll(SpoolDirSourceTask.java:156)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:179)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.connect.errors.DataException: Could not parse '2018-03-12' to 'Date'
at com.github.jcustenborder.kafka.connect.utils.data.Parser.parseString(Parser.java:113)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:118)

Any idea what could be the cause of the issue?
Thank you.
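
A guess rather than a verified fix: the parser may need to be told the expected formats through csv.parser.timestamp.date.formats, a setting that appears in other configurations in this thread; whether it also governs Date fields (and not only Timestamp fields) is an assumption here. A sketch of the relevant line, with patterns matching the sample row above:

"csv.parser.timestamp.date.formats": "yyyy-MM-dd,yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"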

Separator char config

Hi, I'm using the Connect REST API to configure this plugin. Everything seems to be going fine until I try to set this setting:

"csv.separator.char":";"

It seems the plugin is unable to parse that?

Failed to reconfigure connector's tasks, retrying after backoff

Stacktrace
[2016-11-09 18:31:17,052] ERROR Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
java.lang.NullPointerException
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:51)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
at org.apache.kafka.connect.runtime.ConnectorConfig.<init>(ConnectorConfig.java:79)
at org.apache.kafka.connect.runtime.SourceConnectorConfig.<init>(SourceConnectorConfig.java:29)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:845)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:799)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:795)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:755)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:715)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:206)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:176)
at java.lang.Thread.run(Thread.java:745)

Configuration used:
[2016-11-09 18:31:16,983] INFO SpoolDirectoryConfig values:
processing.file.extension = .PROCESSING
csv.first.row.as.header = true
finished.path = /tmp/spooldir/finished
csv.strict.quotes = false
csv.schema.name = BlockingIA
csv.verify.reader = true
csv.schema.from.header = true
csv.file.charset = UTF-8
file.minimum.age.ms = 0
record.processor.class = class io.confluent.kafka.connect.source.io.processing.csv.CSVRecordProcessor
csv.skip.lines = 0
input.file.pattern = ^.*.csv$
error.path = /tmp/spooldir/error
csv.schema.from.header.keys = [DayID, XXX, YYY]
csv.quote.char = 34
include.file.metadata = false
csv.parser.timestamp.date.formats = [yyyy-MM-dd' 'HH:mm:ss]
csv.schema =
batch.size = 1000
csv.parser.timestamp.timezone = UTC
csv.null.field.indicator = BOTH
halt.on.error = false
csv.ignore.quotations = false
csv.escape.char = 92
csv.case.sensitive.field.names = false
csv.ignore.leading.whitespace = true
csv.keep.carriage.return = false
csv.separator.char = 124
topic = quickstart-data
input.path = /tmp/spooldir/input
(io.confluent.kafka.connect.source.SpoolDirectoryConfig)

Allowing column skipping

I am planning to implement column skipping. I will add a config that allows the skipped column numbers to be specified as a comma-separated list. In the CSVSourceTask I'll skip those columns in the for loop where the Avro message is constructed.

Let me know if this was considered and not implemented for a reason.
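
As a sketch only, the proposed setting might look like the config below; the key name csv.skip.columns and its comma-separated value follow the proposal above and are hypothetical, not an existing option.

{
  "name": "csv-skip-columns",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/input",
    "input.file.pattern": "^.*\\.csv$",
    "finished.path": "/data/finished",
    "error.path": "/data/error",
    "topic": "example-topic",
    "csv.first.row.as.header": "true",
    "csv.skip.columns": "2,5"
  }
}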

input file name

Also, can we use the filename (input file) as a key while writing to the Kafka topic?

PollingDirectoryMonitor.poll NullPointerException

Stacktrace:
[2017-01-18 18:25:48,139] INFO Opening /home/cosmo/devel/syncrasy/KafkaConnectExample/input/temp.csv.PROCESSING (io.confluent.kafka.connect.source.io.PollingDirectoryMonitor:216)
[2017-01-18 18:25:48,168] ERROR Exception encountered processing line 1 of /home/cosmo/devel/syncrasy/KafkaConnectExample/input/temp.csv.PROCESSING. (io.confluent.kafka.connect.source.io.PollingDirectoryMonitor:231)
org.apache.kafka.connect.errors.ConnectException: java.lang.NullPointerException: SchemaConfig.name cannot be null
at io.confluent.kafka.connect.source.io.PollingDirectoryMonitor.poll(PollingDirectoryMonitor.java:221)
at io.confluent.kafka.connect.source.SpoolDirectoryTask.poll(SpoolDirectoryTask.java:51)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:155)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: SchemaConfig.name cannot be null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:226)
at io.confluent.kafka.connect.source.io.processing.csv.SchemaConfig.parserConfigs(SchemaConfig.java:42)
at io.confluent.kafka.connect.source.io.processing.csv.CSVRecordProcessor.configure(CSVRecordProcessor.java:150)
at io.confluent.kafka.connect.source.io.PollingDirectoryMonitor.poll(PollingDirectoryMonitor.java:219)
... 9 more
[2017-01-18 18:25:48,170] INFO Closing /home/cosmo/devel/syncrasy/KafkaConnectExample/input/temp.csv.PROCESSING (io.confluent.kafka.connect.source.io.PollingDirectoryMonitor:130)
[2017-01-18 18:25:48,170] ERROR Error during processing, moving /home/cosmo/devel/syncrasy/KafkaConnectExample/input/temp.csv.PROCESSING to /home/cosmo/devel/syncrasy/KafkaConnectExample/error. (io.confluent.kafka.connect.source.io.PollingDirectoryMonitor:139)

Configuration (with REST)
{
"name": "CsvSourceTest",
"config": {
"connector.class": "io.confluent.kafka.connect.source.SpoolDirectoryConnector",
"tasks.max": "1",
"topics": "connect-test",
"record.processor.class": "io.confluent.kafka.connect.source.io.processing.csv.CSVRecordProcessor",
"input.file.pattern": "^.*\.csv$",
"finished.path": "/home/cosmo/devel/syncrasy/KafkaConnectExample/finished",
"error.path": "/home/cosmo/devel/syncrasy/KafkaConnectExample/error",
"input.path": "/home/cosmo/devel/syncrasy/KafkaConnectExample/input",
"halt.on.error": false,
"topic": "connect-test",
"include.file.metadata": false,
"batch.size": 1000,
"csv.null.field.indicator": "BOTH",
"csv.parser.timestamp.date.formats": "yyyy-MM-dd'T'HH:mm:ss'Z'",
"csv.first.row.as.header": true,
"csv.schema": "{"keys":["id"],"fields":[{"name":"id","type":"int32","required":true},{"name":"name","type":"string","required":true}]}"
}
}

When I try to copy this file in the input directory:
File "temp.txt"
id,name
1,cosmo
2,steve
3,alin
4,gavin
5,leo

Connector support pipe delimiter

Does this connector support the "|" delimiter?

For example, a file like the one below:

mobile|imsi|2050160|2|14|2|xxxxxx|20160917|vvvvvv|20370101|bbbbbbbbb|1250158,1104006,1210003,xxxxxx,1106000,44444444,1103001|2000:14,3000:-4444444|

Kindly confirm
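
For reference, in the configuration logged in another issue in this thread the separator is given as a character code (csv.separator.char = 124, i.e. '|'). A minimal sketch along those lines; the paths, topic, and schema generation choice are illustrative, and whether your version also accepts the literal character "|" is not confirmed here.

{
  "name": "csv-pipe-delimited",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
    "tasks.max": "1",
    "input.path": "/data/input",
    "input.file.pattern": "^.*\\.txt$",
    "finished.path": "/data/finished",
    "error.path": "/data/error",
    "topic": "cdr-records",
    "csv.separator.char": "124",
    "csv.first.row.as.header": "false",
    "schema.generation.enabled": "true"
  }
}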

Upgrade to Kafka 0.10.2.0

The current latest release on GitHub is based on Kafka 0.10.0.0. It would be nice to have releases for Kafka 0.10.1.0 and Kafka 0.10.2.0.

kafka-connect DataException: Found null value for non-optional schema

Team,

My schema definition seems to be correct; however, the spooldir connector is unable to load the data into the topic. I keep getting the following error.

[2018-08-16 20:56:34,676] ERROR Error encountered in task csv-raw-flight-data-connector-0. Executing stage 'KEY_CONVERTER' with class 'io.confluent.connect.avro.AvroConverter'. (org.apache.kafka.connect.runtime.errors.LogReporter)
connect | org.apache.kafka.connect.errors.DataException: Found null value for non-optional schema

I am using the latest Confluent Docker images from git:/cp-docker-images/examples/cp-all-in-one.

I have the following connector definition:
{
"name": "csv-raw-flight-data-connector",
"config": {
"tasks.max": "1",
"connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
"input.file.pattern": "^rawFlight.*.csv$",
"input.path": "/tmp/raw-flight-data/source",
"finished.path": "/tmp/raw-flight-data/finished",
"error.path": "/tmp/raw-flight-data/error",
"halt.on.error": "false",
"topic": "raw.flight.data",
"errors.log.enable":"true",
"errors.log.include.messages":"false",
"errors.tolerance":"all",
"csv.null.field.indicator":"EMPTY_SEPARATORS",
"value.schema":"{"name":"com.github.jcustenborder.kafka.connect.model.Value","type":"STRUCT","isOptional":true,"fieldSchemas":{"month":{"type":"INT64","isOptional":true},"day":{"type":"INT64","isOptional":true},"airline":{"type":"STRING","isOptional":true},"flightNum":{"type":"STRING","isOptional":true},"origin":{"type":"STRING","isOptional":true},"dest":{"type":"STRING","isOptional":true},"crs_dep_time":{"type":"STRING","isOptional":true},"actualDelay":{"type":"BYTES","isOptional":true},"taxi_out":{"type":"BYTES","isOptional":true},"actualDelayNew":{"type":"BYTES","isOptional":true},"distance":{"type":"BYTES","isOptional":true}}}",
"key.schema":"{"name":"com.github.jcustenborder.kafka.connect.model.Key","type":"STRUCT","isOptional":false,"fieldSchemas":{"origin":{"type":"STRING","isOptional":false}}}",
"csv.first.row.as.header": "true"
}
}

Here is the sample CSV file that I was testing to load into Kafka. The file has about 150k rows, but I'm pasting typical records since the error was thrown right at the beginning. Also note that no topic is created as per the connector definition. In my schema I set all my value fields to isOptional true, but I am not sure where the connector is picking up a non-optional field.

month, day, airline, flightNum, origin, dest, crs_dep_time, actualDelay, taxi_out, actualDelayNew, distance
6,6,"OO","5636","MFR","SFO","1530",0.00,329.00,,
6,7,"OO","5636","MFR","SFO","1530",0.00,329.00,,
6,7,"OO","5636","MFR","SFO","1530",0.00,329.00,,
6,6,"OO","5636","MFR","SFO","1530",0.00,329.00,,
6,1,"OO","5636","MFR","SFO","1550",76.00,329.00,,
6,4,"OO","5636","MFR","SFO","1530",70.00,329.00,,
6,5,"OO","5636","MFR","SFO","1530",48.00,329.00,,
6,7,"OO","5636","MFR","SFO","1530",20.00,329.00,,
6,4,"OO","5636","MFR","SFO","1530",0.00,329.00,,
6,1,"OO","5636","MFR","SFO","1530",51.00,329.00,,
6,5,"OO","5636","MFR","SFO","1530",49.00,329.00,,
6,5,"OO","5636","MFR","SFO","1530",,329.00,,

Below is the exception:

connect | [2018-08-16 14:24:37,224] INFO Found 1 file(s) to process (com.github.jcustenborder.kafka.connect.spooldir.InputFileDequeue)
connect | [2018-08-16 14:24:37,224] INFO Opening /tmp/raw-flight-data/source/rawFlight.data_1.csv (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask)
connect | [2018-08-16 14:24:37,382] INFO configure() - field names from header row. fields = month, day, airline, flightNum, origin, dest, crs_dep_time, actualDelay, taxi_out, actualDelayNew, distance (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask)
connect | [2018-08-16 14:24:37,386] INFO WorkerSourceTask{id=csv-raw-flight-data-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
connect | [2018-08-16 14:24:37,388] INFO WorkerSourceTask{id=csv-raw-flight-data-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
connect | [2018-08-16 14:24:37,388] ERROR WorkerSourceTask{id=csv-raw-flight-data-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
connect | org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:266)
connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)

connect [2018-08-16 14:24:24,224] ERROR Error encountered in task csv-raw-flight-data-connector-0. Executing stage 'KEY_CONVERTER' with class 'io.confluent.connect.avro.AvroConverter'. (org.apache.kafka.connect.runtime.errors.LogReporter)

connect | Caused by: org.apache.kafka.connect.errors.DataException: Found null value for non-optional schema
connect | at io.confluent.connect.avro.AvroData.validateSchemaValue(AvroData.java:1035)
connect | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:363)
connect | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:567)
connect | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:326)
connect | at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:75)
connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$1(WorkerSourceTask.java:266)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)

Handling timestamps that are in "YYYYMMDDHHMMSS.MMM" format

What would be the best method to handle a timestamp in "20180530143000.167" format using kafka-connect-spooldir // Kafka // Tranquility? I tried INT64 on kafka-connect-spooldir but had no luck.
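
One approach that may be worth trying (a sketch; the format pattern, field name, and schema fragment below are assumptions, not a confirmed fix): declare the column as a Connect Timestamp in value.schema and register a matching pattern in csv.parser.timestamp.date.formats, e.g.

"csv.parser.timestamp.date.formats": "yyyyMMddHHmmss.SSS"

with a value.schema along the lines of the schema syntax used earlier in this thread:

{"name":"CdrValue","type":"STRUCT","isOptional":false,"fieldSchemas":{"recordId":{"type":"STRING","isOptional":true},"startTime":{"name":"org.apache.kafka.connect.data.Timestamp","type":"INT64","version":1,"isOptional":true}}}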

With INT64 I am getting the following error:
[2018-11-16 16:10:01,380] ERROR Exception encountered processing line 2 of /var/input/BW-CDR-20180531004500-2-25d3128f-218647.csv. (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:290)
org.apache.kafka.connect.errors.DataException: Exception thrown while parsing data for 'startTime'. linenumber=2
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:126)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.read(SpoolDirSourceTask.java:286)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.poll(SpoolDirSourceTask.java:165)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Could not parse '20180530143000.177' to 'Long'
at com.github.jcustenborder.kafka.connect.utils.data.Parser.parseString(Parser.java:113)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:118)
... 11 more
Caused by: java.lang.NumberFormatException: For input string: "20180530143000.177"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at com.github.jcustenborder.kafka.connect.utils.data.type.Int64TypeParser.parseString(Int64TypeParser.java:24)
at com.github.jcustenborder.kafka.connect.utils.data.Parser.parseString(Parser.java:109)
... 12 more
[2018-11-16 16:10:01,382] INFO Closing /var/input/BW-CDR-20180531004500-2-25d3128f-218647.csv (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:191)
[2018-11-16 16:10:01,382] ERROR Error during processing, moving /var/input/BW-CDR-20180531004500-2-25d3128f-218647.csv to /var/error. (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:199)
[2018-11-16 16:10:01,383] INFO Removing processing file /var/input/BW-CDR-20180531004500-2-25d3128f-218647.csv.PROCESSING (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:213)
[2018-11-16 16:10:01,633] INFO Opening /var/input/BW-CDR-20180531004500-2-2f016b82-218430.csv (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:257)
[2018-11-16 16:10:01,635] INFO configure() - field names from header row. fields = recordId, serviceProvider, type, userNumber, groupNumber, direction, callingNumber, callingPresentationIndicator, calledNumber, startTime, userTimeZone, answerIndicator, answerTime, releaseTime, terminationCause, networkType, dialedDigits, callCategory, networkCallType, networkTranslatedNumber, releasingParty, route, networkCallID, codec, accessDeviceAddress, accessCallID, group, department, originalCalledNumber, originalCalledPresentationIndicator, originalCalledReason, redirectingNumber, redirectingPresentationIndicator, redirectingReason, chargeIndicator, typeOfNetwork, localCallId, remoteCallId, key, cancelCWTperCall.facResult, clidBlockingPerCall.invocationTime, clidBlockingPerCall.facResult, directVMTransfer.invocationTime, directVMTransfer.facResult, userId, otherPartyName, otherPartyNamePresentationIndicator, trunkGroupName, clidPermitted, relatedCallId, relatedCallIdReason, transfer.invocationTime, transfer.result, transfer.relatedCallId, transfer.type, codecUsage, trunkGroupInfo, asCallType, configurableCLID, callCenter.outgoingCallCenterPhoneNumber, namePermitted, callCenter.outgoingCallCenterUserId, location, locationType, locationUsage, userAgent, extTrackingId, flexibleSeatingGuest.invocationTime (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:60)
[2018-11-16 16:10:01,635] ERROR Exception encountered processing line 2 of /var/input/BW-CDR-20180531004500-2-2f016b82-218430.csv. (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask:290)
org.apache.kafka.connect.errors.DataException: Exception thrown while parsing data for 'startTime'. linenumber=2
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:126)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.read(SpoolDirSourceTask.java:286)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.poll(SpoolDirSourceTask.java:165)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Could not parse '20180530143000.167' to 'Long'
at com.github.jcustenborder.kafka.connect.utils.data.Parser.parseString(Parser.java:113)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:118)
... 11 more
Caused by: java.lang.NumberFormatException: For input string: "20180530143000.167"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at com.github.jcustenborder.kafka.connect.utils.data.type.Int64TypeParser.parseString(Int64TypeParser.java:24)
at com.github.jcustenborder.kafka.connect.utils.data.Parser.parseString(Parser.java:109)
... 12 more

Extended ASCII characters not allowed as separators

Character 254 is not allowed (it is ignored) as a separator. It appears that any character with an ASCII code above 127 gets ignored, which results in the following error:

org.apache.kafka.connect.errors.ConnectException: java.lang.ArrayIndexOutOfBoundsException: 1
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.read(SpoolDirSourceTask.java:298)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.poll(SpoolDirSourceTask.java:165)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:179)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask.process(SpoolDirCsvSourceTask.java:111)
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTask.read(SpoolDirSourceTask.java:286)

Test failing when building

I ran mvn clean package after adding these dependencies to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>connect-api</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>connect-transforms</artifactId>
    <version>2.0.0</version>
</dependency>

But the tests failed, and I got this error.

20:13:12.578 [main] INFO  com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnector - Setting key.schema to {
  "name" : "com.github.jcustenborder.kafka.connect.model.Key",
  "type" : "STRUCT",
  "isOptional" : false,
  "fieldSchemas" : { }
}
20:13:12.578 [main] INFO  com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnector - Setting value.schema to {
  "name" : "com.github.jcustenborder.kafka.connect.model.Value",
  "type" : "STRUCT",
  "isOptional" : false,
  "fieldSchemas" : {
    "id" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "first_name" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "last_name" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "email" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "gender" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "ip_address" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "last_login" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "account_balance" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "country" : {
      "type" : "STRING",
      "isOptional" : true
    },
    "favorite_color" : {
      "type" : "STRING",
      "isOptional" : true
    }
  }
}
20:13:12.579 [main] TRACE com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnectorTest - cleanupTempDir() - Removing /var/folders/4l/njkp363s3_gg670bmhwcdvhh0000gp/T/1541515392571-0/input/input0.json
20:13:12.579 [main] TRACE com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnectorTest - cleanupTempDir() - Removing /var/folders/4l/njkp363s3_gg670bmhwcdvhh0000gp/T/1541515392571-0/input/input1.json
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec - in com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnectorTest
Running com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceTaskTest
20:13:12.588 [main] TRACE com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTaskTest - packagePrefix = com.github.jcustenborder.kafka.connect.spooldir.json
20:13:12.589 [main] INFO  com.github.jcustenborder.kafka.connect.spooldir.TestDataUtils - packageName = com.github.jcustenborder.kafka.connect.spooldir.json
20:13:12.596 [main] INFO  org.reflections.Reflections - Reflections took 7 ms to scan 1 urls, producing 10 keys and 10 values
20:13:12.598 [main] TRACE com.github.jcustenborder.kafka.connect.spooldir.TestDataUtils - Loading resource com/github/jcustenborder/kafka/connect/spooldir/json/FileModeFieldFieldsMatch.json
20:13:12.603 [main] ERROR com.github.jcustenborder.kafka.connect.spooldir.TestDataUtils - Exception thrown while loading /com/github/jcustenborder/kafka/connect/spooldir/json/FileModeFieldFieldsMatch.json
com.fasterxml.jackson.databind.JsonMappingException: field cannot be null. (through reference chain: com.github.jcustenborder.kafka.connect.spooldir.TestCase["expected"]->java.util.ArrayList[0])
	at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:388)
	at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:360)
	at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:308)
	at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:259)
	at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:26)
	at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:499)
	at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:108)
	at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:276)
	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:140)
	at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3798)
	at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2908)
	at com.github.jcustenborder.kafka.connect.spooldir.TestDataUtils.loadJsonResourceFiles(TestDataUtils.java:53)
	at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceTaskTest.loadTestCases(SpoolDirSourceTaskTest.java:162)
	at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceTaskTest.poll(SpoolDirJsonSourceTaskTest.java:49)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:389)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.descriptor.TestFactoryTestDescriptor.lambda$invokeTestMethod$1(TestFactoryTestDescriptor.java:70)
	at org.junit.jupiter.engine.execution.ThrowableCollector.execute(ThrowableCollector.java:40)
	at org.junit.jupiter.engine.descriptor.TestFactoryTestDescriptor.invokeTestMethod(TestFactoryTestDescriptor.java:68)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:110)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.lambda$execute$3(HierarchicalTestExecutor.java:83)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:77)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.lambda$null$2(HierarchicalTestExecutor.java:92)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.lambda$execute$3(HierarchicalTestExecutor.java:92)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:77)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.lambda$null$2(HierarchicalTestExecutor.java:92)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.lambda$execute$3(HierarchicalTestExecutor.java:92)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:77)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:51)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:43)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:170)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:154)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:90)
	at org.junit.platform.surefire.provider.JUnitPlatformProvider.invokeSingleClass(JUnitPlatformProvider.java:144)
	at org.junit.platform.surefire.provider.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:126)
	at org.junit.platform.surefire.provider.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:105)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
Caused by: org.apache.kafka.connect.errors.DataException: field cannot be null.
	at org.apache.kafka.connect.data.Struct.put(Struct.java:215)
	at com.github.jcustenborder.kafka.connect.utils.jackson.ValueHelper.value(ValueHelper.java:197)
	at com.github.jcustenborder.kafka.connect.utils.jackson.SourceRecordSerializationModule$Storage.key(SourceRecordSerializationModule.java:65)
	at com.github.jcustenborder.kafka.connect.utils.jackson.SourceRecordSerializationModule$Storage.build(SourceRecordSerializationModule.java:79)
	at com.github.jcustenborder.kafka.connect.utils.jackson.SourceRecordSerializationModule$Deserializer.deserialize(SourceRecordSerializationModule.java:120)
	at com.github.jcustenborder.kafka.connect.utils.jackson.SourceRecordSerializationModule$Deserializer.deserialize(SourceRecordSerializationModule.java:115)
	at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:287)
	... 64 common frames omitted
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.025 sec <<< FAILURE! - in com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceTaskTest
poll()  Time elapsed: 0.025 sec  <<< ERROR!
com.fasterxml.jackson.databind.JsonMappingException: field cannot be null. (through reference chain: com.github.jcustenborder.kafka.connect.spooldir.TestCase["expected"]->java.util.ArrayList[0])
	at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceTaskTest.poll(SpoolDirJsonSourceTaskTest.java:49)
Caused by: org.apache.kafka.connect.errors.DataException: field cannot be null.
	at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceTaskTest.poll(SpoolDirJsonSourceTaskTest.java:49)


Results :

Tests in error:
  SpoolDirCsvSourceTaskTest.poll:49->SpoolDirSourceTaskTest.loadTestCases:162 » JsonMapping
  SpoolDirJsonSourceTaskTest.poll:49->SpoolDirSourceTaskTest.loadTestCases:162 » JsonMapping

Tests run: 48, Failures: 0, Errors: 2, Skipped: 0

I wasn't sure what needed to be changed here. Let me know where it needs fixing and I can work on it.

SpoolDirSourceConnector.validate(...) method throws IllegalStateException On empty config

This issue is related to issue #22. We are considering using Landoop as a way to configure and manage Kafka connectors. The Landoop UI has a nifty feature where it uses the connector's validate method to iteratively tell you what is wrong with the current configuration, so you can fix it on the fly and in the end hopefully have a working connector.

However, the Landoop UI does not expect the validate method to throw an IllegalStateException. In the underlying framework, errors are reported by throwing a ConfigException.

We have actually done a bit of work (see https://github.com/TeletronicsDotAe/kafka-connect-spooldir ) on improving the validation in order to get informative error messages at configuration time, including adding the ability to specify dependent properties (such as: when property A is foo, property B must be present). While preparing the pull request I noticed that you have moved the ValidDirectoryWritable class out of this project into a different project. Would you consider changing your validator to throw ConfigExceptions instead of IllegalStateExceptions?
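
For reference, here is a minimal, hypothetical sketch of the kind of validator behaviour we are asking for (the class name and messages are examples, not the connector's actual code): problems are reported through ConfigException, which tooling built on Connector.validate(), such as the Landoop UI, can surface per field.

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

import java.io.File;

// Hypothetical sketch: report configuration problems via ConfigException so that
// tooling built on Connector.validate() can display them per field, instead of
// failing hard with an IllegalStateException.
public class WritableDirectoryValidator implements ConfigDef.Validator {
  @Override
  public void ensureValid(String name, Object value) {
    if (value == null || value.toString().isEmpty()) {
      throw new ConfigException(name, value, "A directory must be specified.");
    }
    File directory = new File(value.toString());
    if (!directory.isDirectory()) {
      throw new ConfigException(name, value, "'" + directory + "' must be an existing directory.");
    }
    if (!directory.canWrite()) {
      throw new ConfigException(name, value, "'" + directory + "' must be writable.");
    }
  }
}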

Error while generating CSV schema (Documentation and Code Bug)

Disclaimer: I am absolutely clueless about Maven and allied technologies, and hence a perfect test target for documentation :D
The instructions for the CSV schema generation example do not work out of the box.
The problems I faced are explained below.
The command mvn clean package works perfectly. No problems here.
The command export CLASSPATH="$(find target/kafka-connect-target/usr/share/java -type f -name '*.jar' | tr '\n' ':')" fails because the directory path under target is wrong.
For me the correct path was: target/kafka-connect-target/usr/share/kafka-connect/kafka-connect-spooldir
Lastly, the final command expects a number of directories to exist under /tmp and hence throws exceptions every time it is run. The following exceptions are thrown one after the other.
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value '/tmp/input' must be a directory...
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value '/tmp/spooldir/finished' must be a directory...
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value '/tmp/error' must be a directory...

I created the following directories in order to run the command successfully.
mkdir /tmp/input
mkdir /tmp/spooldir/
mkdir /tmp/spooldir/finished/
mkdir /tmp/error

I think the code assumes those directories are present. Either the documentation should be corrected or the dependency on those directories should be removed.

Exception in thread "main" java.lang.NoClassDefFoundError when I try to start Kafka Connect

Hello,

I downloaded kafka-connect-spooldir-1.0.31 (https://github.com/jcustenborder/kafka-connect-spooldir/releases/download/1.0.31/kafka-connect-spooldir-1.0.31.rpm) and installed it on CentOS 7 (rpm -i kafka-connect-spooldir-1.0.31.rpm).

When I start the connect-standalone.sh, I am getting the following:

[root@kafka01 kafka-connect-spooldir]# /opt/kafka_2.11-1.0.0/bin/connect-standalone.sh connect-avro-docker-arda.properties CSVExample.properties
[2018-08-20 19:02:11,438] INFO Kafka Connect standalone worker initializing ... (org.apache.kafka.connect.cli.ConnectStandalone:65)
[2018-08-20 19:02:11,459] INFO WorkerInfo values:
jvm.args = -Xmx256M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/opt/kafka_2.11-1.0.0/bin/../logs, -Dlog4j.configuration=file:/opt/kafka_2.11-1.0.0/bin/../config/connect-log4j.properties
jvm.spec = Oracle Corporation, OpenJDK 64-Bit Server VM, 1.8.0_161, 25.161-b14
jvm.classpath = :/opt/kafka_2.11-1.0.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka_2.11-1.0.0/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/commons-lang3-3.5.jar:/opt/kafka_2.11-1.0.0/bin/../libs/connect-api-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/connect-file-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/connect-json-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/guava-20.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka_2.11-1.0.0/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka_2.11-1.0.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javax.inject-1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka_2.11-1.0.0/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/log4j-1.2.17.jar:/opt/kafka_2.11-1.0.0/bin/../libs/lz4-java-1.4.jar:/opt/kafka_2.11-1.0.0/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka_2.11-1.0.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka_2.11-1.0.0/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka_2.11-1.0.0/bin/../libs/reflections-0.9.11.jar:/opt/kafka_2.11-1.0.0/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka_2.11-1.0.0/bin/../libs/scala-li
brary-2.11.11.jar:/opt/kafka_2.11-1.0.0/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka_2.11-1.0.0/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka_2.11-1.0.0/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka_2.11-1.0.0/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka_2.11-1.0.0/bin/../libs/zkclient-0.10.jar:/opt/kafka_2.11-1.0.0/bin/../libs/zookeeper-3.4.10.jar
os.spec = Linux, amd64, 3.10.0-693.11.6.el7.x86_64
os.vcpus = 2
(org.apache.kafka.connect.runtime.WorkerInfo:71)
[2018-08-20 19:02:11,460] INFO Scanning for plugin classes. This might take a moment ... (org.apache.kafka.connect.cli.ConnectStandalone:74)
[2018-08-20 19:02:11,482] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/annotations-2.0.1.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:11,667] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/annotations-2.0.1.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:11,669] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/commons-beanutils-1.9.3.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:11,744] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/commons-beanutils-1.9.3.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:11,744] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/commons-collections-3.2.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:11,891] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/commons-collections-3.2.2.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:11,893] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/commons-compress-1.16.1.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:11,971] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/commons-compress-1.16.1.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:11,972] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/commons-lang3-3.5.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,033] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/commons-lang3-3.5.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,034] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/commons-logging-1.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,042] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/commons-logging-1.2.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,043] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/connect-utils-0.3.114.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,066] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/connect-utils-0.3.114.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,067] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/extended-log-format-0.0.1.5.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,072] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/extended-log-format-0.0.1.5.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,072] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/freemarker-2.3.25-incubating.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,263] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/freemarker-2.3.25-incubating.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,265] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/guava-18.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,545] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/guava-18.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,546] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/jackson-annotations-2.8.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,557] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/jackson-annotations-2.8.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,558] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/jackson-core-2.8.5.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,589] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/jackson-core-2.8.5.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,590] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/jackson-databind-2.8.5.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,727] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/jackson-databind-2.8.5.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,734] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/javassist-3.19.0-GA.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
[2018-08-20 19:02:12,814] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/kafka-connect/kafka-connect-spooldir/javassist-3.19.0-GA.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:202)
[2018-08-20 19:02:12,815] INFO Loading plugin from: /usr/share/kafka-connect/kafka-connect-spooldir/kafka-connect-spooldir-1.0.31.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
Exception in thread "main" java.lang.NoClassDefFoundError: com/github/jcustenborder/kafka/connect/utils/VersionUtil
at com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSourceConnector.version(SpoolDirSourceConnector.java:51)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:279)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:201)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:193)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:153)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:75)
Caused by: java.lang.ClassNotFoundException: com.github.jcustenborder.kafka.connect.utils.VersionUtil
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 8 more

Are there any dependencies or settings that I might be missing?

Thanks in advance.

Appending double quotes " in the key field.

I'm using the "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnector" class for my connector and loading the sample JSON data below into a Kafka topic. The key for the topic is 'feature'. However, when this data is loaded into the Kafka topic, I see an additional " (double quote) character, but only when the feature key contains a value like crs_dep_time=XXX.

Input JSON data

{"feature":"Intercept Term","weight":-6.37018915368633}
{"feature":"actualDelay","weight":2468.1325931963374}
{"feature":"actualDelayNew","weight":0.8075805793246844}
{"feature":"airline=AA","weight":-42.80618686815858}
{"feature":"airline=AS","weight":3.1463937948486476}
{"feature":"airline=B6","weight":2318.7675588420047}
{"feature":"airline=EV","weight":7.641077768948304}
{"feature":"airline=MQ","weight":1.0702345777979114}
{"feature":"airline=OO","weight":-48.55791727714976}
{"feature":"airline=UA","weight":-5.048287038729775}
{"feature":"airline=US","weight":6.63515312969548}
{"feature":"crs_dep_time=0002","weight":-43.29792960288084}
{"feature":"crs_dep_time=0005","weight":2472.107786931284}
{"feature":"crs_dep_time=0010","weight":-148.55745377500796}
{"feature":"crs_dep_time=0015","weight":-48.55791727714976}
{"feature":"crs_dep_time=0020","weight":2421.351212593232}
{"feature":"crs_dep_time=0023","weight":-196.14641495743783}
{"feature":"crs_dep_time=0025","weight":-149.36503435433264}
{"feature":"crs_dep_time=0028","weight":3.3980613320072917}
{"feature":"crs_dep_time=0029","weight":4.805353115180916}
{"feature":"crs_dep_time=0030","weight":3.3980613320072917}
{"feature":"crs_dep_time=0035","weight":-0.7063020962466524}
{"feature":"crs_dep_time=0036","weight":-145.88158335410827}
{"feature":"crs_dep_time=0037","weight":2461.7624040426513}
{"feature":"crs_dep_time=0040","weight":-151.1415710283772}
{"feature":"crs_dep_time=0041","weight":5.045428312744525}
{"feature":"crs_dep_time=0045","weight":-1.7765366740445638}
{"feature":"crs_dep_time=0050","weight":-148.55745377500796}
{"feature":"crs_dep_time=0055","weight":2.198657060902049}
{"feature":"crs_dep_time=0100","weight":-53.15156975679153}
{"feature":"crs_dep_time=0103","weight":4.291031579549044}

I have no clue where the double quote added just in front of "crs_dep_time" comes from. I have highlighted the affected messages in bold for clarity. Other similar fields work fine, as expected.

The schema used for this key column is:

"key.schema":"{"name":"com.github.jcustenborder.kafka.connect.model.Key","type":"STRUCT","isOptional":false,"fieldSchemas":{"feature":{"type":"STRING","isOptional":false}}}"

ksql> print 'raw-flight-training-data' from beginning;
Format:AVRO
9/11/18 12:38:57 AM UTC, Intercept Term, {"feature": "Intercept Term", "weight": -38.68666597272306}
9/11/18 12:38:57 AM UTC, actualDelay, {"feature": "actualDelay", "weight": 0.42436083421438164}
9/11/18 12:38:57 AM UTC, actualDelayNew, {"feature": "actualDelayNew", "weight": -17.04699196715615}
9/11/18 12:38:57 AM UTC, airline=AA, {"feature": "airline=AA", "weight": -56.44558026364061}
9/11/18 12:38:57 AM UTC, airline=AS, {"feature": "airline=AS", "weight": -33.55130838674241}
9/11/18 12:38:57 AM UTC, airline=B6, {"feature": "airline=B6", "weight": -13.232523227415035}
9/11/18 12:38:57 AM UTC, airline=EV, {"feature": "airline=EV", "weight": -28.47293170806979}
9/11/18 12:38:57 AM UTC, airline=MQ, {"feature": "airline=MQ", "weight": -16.784277334507582}
9/11/18 12:38:57 AM UTC, airline=OO, {"feature": "airline=OO", "weight": -41.1891999921024}
9/11/18 12:38:57 AM UTC, airline=UA, {"feature": "airline=UA", "weight": -47.2902679258449}
9/11/18 12:38:57 AM UTC, airline=US, {"feature": "airline=US", "weight": -28.49017799034254}
**9/11/18 12:38:57 AM UTC, "crs_dep_time=0002, {"feature": "crs_dep_time=0002", "weight": -28.749903610401965}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0005, {"feature": "crs_dep_time=0005", "weight": -37.735102736939446}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0010, {"feature": "crs_dep_time=0010", "weight": -30.703876028785565}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0015, {"feature": "crs_dep_time=0015", "weight": -41.1891999921024}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0020, {"feature": "crs_dep_time=0020", "weight": -17.8617558582724}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0023, {"feature": "crs_dep_time=0023", "weight": -31.9430007541162}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0025, {"feature": "crs_dep_time=0025", "weight": -13.656884061629416}
9/11/18 12:38:57 AM UTC, "crs_dep_time=0028, {"feature": "crs_dep_time=0028", "weight": -25.370633005356673}**

Can this be used in a non-Docker environment?

I have compiled the files and want to run them on a standalone system where Kafka is installed. Please provide the steps in detail, including where to configure the default settings, etc.

Offset Management

Hi, just a quick question.
I've noticed that the offsets saved by Kafka Connect are never read; you simply start from the first file in the directory and then move it to another folder when you finish reading it.

What happens if the connector dies after the file was moved to the other directory, but before all the records were actually sent to Kafka?
Is there a nice way to guarantee that all the records were actually written to Kafka before moving the file?

Thanks,
Andrea
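
For what it's worth, here is one general pattern (a sketch only, not necessarily what this connector does today) for making sure every record read from a file has been acknowledged by Connect before the file is moved: count the outstanding records and react to SourceTask.commitRecord().

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative pattern only: track how many records from the current file are still
// un-acknowledged and only move the file once every record has been committed.
// The helper methods at the bottom are hypothetical.
public abstract class AckCountingFileTask extends SourceTask {
  private final AtomicLong outstanding = new AtomicLong();

  // Call this from poll() with the records produced from the current file.
  protected void recordsProduced(List<SourceRecord> records) {
    outstanding.addAndGet(records.size());
  }

  @Override
  public void commitRecord(SourceRecord record) throws InterruptedException {
    // Invoked by the framework once the record has been acknowledged by the producer.
    if (outstanding.decrementAndGet() == 0 && currentFileFullyRead()) {
      moveCurrentFileToFinished(); // hypothetical helper
    }
  }

  protected abstract boolean currentFileFullyRead();   // hypothetical helper
  protected abstract void moveCurrentFileToFinished(); // hypothetical helper
}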

When isOptional is true on an INT32 field, it causes a parse exception if the value is empty

Hi there,

I have configured my value schema with one field as optional (isOptional: true) and of type INT32. When the connector attempts to read my CSV, it blows up with an exception while parsing an empty '' into an Integer. It might be a gap in my understanding, but I was assuming that when a field is optional it may or may not be filled with a value.
"SomeIntegerValue": { "isOptional": true, "type": "INT32" }

org.apache.kafka.connect.errors.DataException: Could not parse '' to 'Integer'\n\tat

I am wondering if it would be possible to set up a default value for the field when it's empty.

Thanks!
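
As a side note, Connect's schema API does allow an optional field to carry a default value; below is a minimal sketch (the field name is just an example). Whether the connector actually applies that default when the CSV cell is empty is exactly the open question in this issue.

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class OptionalIntFieldExample {
  public static void main(String[] args) {
    // An optional INT32 field that carries a default value of 0.
    Schema valueSchema = SchemaBuilder.struct()
        .name("com.example.Value")
        .field("SomeIntegerValue",
            SchemaBuilder.int32().optional().defaultValue(0).build())
        .build();

    // Prints 0, the default value attached to the field's schema.
    System.out.println(valueSchema.field("SomeIntegerValue").schema().defaultValue());
  }
}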

Multiple task support (increasing tasks.max doesn't create additional tasks)

Not sure if this is a bug or if this feature actually isn't implemented. I see the configuration option tasks.max=1, which I've tried increasing; however, I still only seem to get a single task up and running. I have also tried creating multiple instances of this connector with the same value for input.path, but I always end up with a NullPointerException:

{"name":"spooldir-connector-1","connector":{"state":"RUNNING","worker_id":"localhost:28082"},"tasks":[{"state":"FAILED","trace":"java.lang.NullPointerException\n\tat io.confluent.kafka.connect.source.io.processing.csv.CSVRecordProcessor.lineNumber(CSVRecordProcessor.java:160)\n\tat io.confluent.kafka.connect.source.io.PollingDirectoryMonitor.poll(PollingDirectoryMonitor.java:231)\n\tat io.confluent.kafka.connect.source.SpoolDirectoryTask.poll(SpoolDirectoryTask.java:47)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:155)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n","id":0,"worker_id":"localhost:28082"}]}

File extension is 'PROCESSING' even after file is processed

Every time I push a file into the directory pointed to by 'input.path', it gets processed & gets moved to 'finished'. The file is renamed from 'test.log' to 'test.log.PROCESSING'. Why does it say 'PROCESSING' when processing is done?

Ideally, I am looking for a connector that will watch the 'log' directory used by log4j. As you know, log4j will start writing lines to a file in a directory. Once it's time to rotate, it will rename the log file from 'test.log' to 'test.log.1' & then will continue writing to test.log.

This connector works well for the files that are rotated but won't work for the main 'test.log' file, correct?

I mean, is there any way we can send a message to Kafka topic as soon as a new line gets written to the main 'test.log' file (and not push the file to the 'finished.path')?

Kafka version not compatible

The Kafka version available in our environment is lower than the Kafka version used to compile this source code. Can you please tell me if I can still use this code with some tweaking?

The error I am getting right now is: Header cannot be resolved to a type.

partition assignment strategy

Hi,
If I want to send messages to partitions in a round-robin fashion, how should I configure key.schema?

For example, I have 600,000 messages to send to a topic with 6 partitions, and I want each partition to receive a balanced number of messages, 100,000 per partition. However, with the example data you provided, if I use "id" as the key, all messages are sent to one partition and the other 5 partitions have zero messages.

Thank you a lot for your reply.
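
Not an authoritative answer, but one option that does not depend on key.schema at all is to plug a custom partitioner into the producer used by the Connect worker (for example via a producer.partitioner.class override, if your Connect version supports producer overrides). A rough, hypothetical sketch of such a partitioner:

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin partitioner: ignores the record key entirely and
// cycles through the topic's partitions, giving an even spread of messages
// regardless of how key.schema is configured.
public class SimpleRoundRobinPartitioner implements Partitioner {
  private final AtomicInteger counter = new AtomicInteger();

  @Override
  public int partition(String topic, Object key, byte[] keyBytes,
                       Object value, byte[] valueBytes, Cluster cluster) {
    int numPartitions = cluster.partitionsForTopic(topic).size();
    return Math.floorMod(counter.getAndIncrement(), numPartitions);
  }

  @Override
  public void close() {}

  @Override
  public void configure(Map<String, ?> configs) {}
}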
