
googlecloudplatform / pubsub

246 stars · 44 watchers · 145 forks · 2.36 MB

This repository contains open-source projects managed by the owners of Google Cloud Pub/Sub.

License: Apache License 2.0

Java 86.00% Python 4.61% Shell 1.43% JavaScript 4.19% Go 3.78%

pubsub's Introduction


This repository contains open-source projects managed by the owners of Google Cloud Pub/Sub.

Note: To build each of these projects, we recommend using Maven.

pubsub's People

Contributors

aechalf-g, alvarowolfx, amfisher-404, anguillanneuf, carl-mastrangelo, clmccart, davidtorres, davidtorresv, davidxia, denyska, dependabot[bot], dpcollins-google, elefeint, hannahrogers-google, jrheizelman, kamalaboulhosn, kir-titievsky-google, lahuang4, matt-kwong, mdietz94, mprokhorenko, niksajakovljevic, phyfrancis, pongad, rramkumar1, samarthsingal, schizhov, scwhittle, urshunkeler, yuriy-tolochkevych-exa


pubsub's Issues

Query related to Pre-running steps

Steps done as per
https://github.com/GoogleCloudPlatform/pubsub/tree/master/kafka-connector#pre-running-steps

  • I am able to create a service account and get the private key, and if I go to the "IAM" tab and Service Accounts, the new service account exists.

As per Step 2:
"Go to the "IAM" tab, find the service account you just created and click on the dropdown menu named "Role(s)". Under the "Pub/Sub" submenu, select "Pub/Sub Admin"

The second step does not map to the current workflow, so could someone please clarify what has to be done for Step 2?
I am not sure how to set Pub/Sub permissions for the service account, but my own account is set to "Pub/Sub Admin". Is that sufficient?
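
As an aside, one way around the changed console workflow is to grant the role from the command line. A minimal sketch using the standard gcloud IAM binding command (the project ID and service account name below are hypothetical):

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:kafka-connector@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.admin"

Note that the connector authenticates as the service account whose key file it is given, so the role needs to be bound to that service account; a role on your own user account does not carry over.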

Flic errors out when running in Kafka-only mode

$python run.py --project=google.com:kir-learns-cloud --broker=10.142.0.7
Result:
Invalid --client_type parameter given. Must be a comma deliminated sequence of client types. Allowed client types are 'gcloud_python', 'gcloud_java', 'vtk', and 'experimental'. (gcloud) was provided. Kafka is assumed if --broker is provided.

The code did not actually run; it just exited. Expected: the code runs against Kafka.

What to do with experimental client?

When merging the experimental client into google-cloud-java, I fixed a number of race conditions that often deadlock on Travis.

Should we port these fixes back to the experimental client? Or set the experimental client to be the same as the google-cloud-java client, then change it later as we want to experiment more? Something else?

GOOGLE_APPLICATION_CREDENTIALS failed to find key file

Hi all,

After having started the kafka-connector correctly, I wanted to connect to a Google Pub/Sub source.
I am connected as the root account and launch:

export GOOGLE_APPLICATION_CREDENTIALS=../../conf/cert-test.json
curl -X POST http://localhost:8086/connectors -H 'content-type: application/json' -d @source-connector-test

Caused by: java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Do you know what's missing in my conf?

Thanks
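
A hedged note on how Application Default Credentials are resolved: the GOOGLE_APPLICATION_CREDENTIALS variable must be visible to the JVM that runs the Kafka Connect worker, not just to the shell issuing the curl call, and a relative path like ../../conf/cert-test.json is resolved against that process's working directory. A sketch with an absolute, illustrative path:

export GOOGLE_APPLICATION_CREDENTIALS=/absolute/path/to/conf/cert-test.json
bin/connect-distributed.sh config/connect-distributed.properties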

Unknown configuration 'errors.deadletterqueue.topic.name'

Hello,

Please help solve a problem with configuring the Pub/Sub sink for Kafka Connect.
My configuration:
curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{ "name": "pubsub_test", "config": { "connector.class": "com.google.pubsub.kafka.sink.CloudPubSubSinkConnector", "tasks.max": "1", "topics": "kafka_test_topic", "cps.topic": "cps_test_topic", "cps.project": "cps_test_project" } }' http://localhost:8083/connectors

Then in the status, I have the following message:
{"name":"pubsub_test","connector":{"state":"RUNNING","worker_id":"connect:8083"},"tasks":[{"state":"FAILED","trace":"org.apache.kafka.common.config.ConfigException: Unknown configuration 'errors.deadletterqueue.topic.name'\n\tat org.apache.kafka.common.config.AbstractConfig.get(AbstractConfig.java:91)\n\tat org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig.get(ConnectorConfig.java:117)\n\tat org.apache.kafka.connect.runtime.ConnectorConfig.get(ConnectorConfig.java:162)\n\tat org.apache.kafka.common.config.AbstractConfig.getString(AbstractConfig.java:126)\n\tat org.apache.kafka.connect.runtime.Worker.sinkTaskReporters(Worker.java:531)\n\tat org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:508)\n\tat org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:451)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:873)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:111)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:888)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:884)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n","id":0,"worker_id":"connect:8083"}],"type":"sink"}

Flic does not compile on Cloud Shell

To reproduce:

  • Open Cloud Shell (a debian image)
  • $sudo apt-get install openjdk-8-jre
  • $export JAVA_HOME='/usr/lib/jvm/java-1.8.0-openjdk-amd64'
  • python run.py --project=google.com:kir-learns-cloud --broker=10.142.0.7 --client_type=vtk

Error message:
.....
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:03 min
[INFO] Finished at: 2017-01-09T17:20:03-05:00
[INFO] Final Memory: 27M/109M
[INFO] ------------------------------------------------------------------------
java -jar target/driver.jar --project google.com:kir-learns-cloud --cps_gcloud_java_publisher_count=1 --cps_gcloud_java_subscriber_count=1 --broker=10.142.0.7 --kafka_publisher_count=1 --kafka_subscriber_count=1 --message_size=1 --publish_batch_size=1 --request_rate=1 --max_outstanding_requests=10 --loadtest_duration=10m --burn_in_duration=2m --publish_batch_duration=1ms
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/google/pubsub/flic/Driver : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
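
Class file version 52.0 is Java 8, so the java binary that launched driver.jar is older than 8 even though JAVA_HOME points at an 8 runtime (Maven can honor JAVA_HOME while the java on PATH stays at 7). A sketch of aligning both on Debian, assuming the stock openjdk-8 package and its usual install path:

sudo apt-get install -y openjdk-8-jdk
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should now report 1.8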

Struct message body does not support Map or Struct types.

Using both JSON and Avro, I receive this DataException:

Struct message body does not support Map or Struct types.

But as I understand it, Struct and Map types should be converted to String.

throw new DataException("Struct message body does not support Map or Struct types.");

Here is an example of the JSON:

{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"mysql_instance.ticketmonster.customers.Key"},"payload":{"id":1001}}	{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"mysql_instance.ticketmonster.customers.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"mysql_instance.ticketmonster.customers.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"server_id"},{"type":"int64","optional":false,"field":"ts_sec"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"db"},{"type":"string","optional":true,"field":"table"}],"optional":false,"name":"io.debezium.connector.mysql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"mysql_instance.ticketmonster.customers.Envelope"},"payload":{"before":null,"after":{"id":1001,"first_name":"Sally","last_name":"Thomas","email":"[email protected]"},"source":{"version":"0.7.5","name":"mysql-instance","server_id":0,"ts_sec":0,"gtid":null,"file":"mysql-bin.000004","pos":6182,"row":0,"snapshot":true,"thread":null,"db":"ticketmonster","table":"customers"},"op":"c","ts_ms":1522334541441}}
2018-03-08 14:43:55,346 ERROR  ||  WorkerSinkTask{id=CPSConnector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted.   [org.apache.kafka.connect.runtime.WorkerSinkTask]
org.apache.kafka.connect.errors.DataException: Struct message body does not support Map or Struct types.
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.handleValue(CloudPubSubSinkTask.java:218)
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.put(CloudPubSubSinkTask.java:118)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2018-03-08 14:43:55,355 ERROR  ||  WorkerSinkTask{id=CPSConnector-0} Task threw an uncaught and unrecoverable exception   [org.apache.kafka.connect.runtime.WorkerTask]
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:517)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Struct message body does not support Map or Struct types.
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.handleValue(CloudPubSubSinkTask.java:218)
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.put(CloudPubSubSinkTask.java:118)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
	... 10 more
2018-03-08 14:43:55,355 ERROR  ||  WorkerSinkTask{id=CPSConnector-0} Task is being killed and will not recover until manually restarted   [org.apache.kafka.connect.runtime.WorkerTask]
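
One possible workaround, offered as an assumption rather than a confirmed fix: if the Pub/Sub side only needs a flattened payload, overriding the value converter in the connector (or worker) configuration hands the sink task plain strings instead of Structs, so handleValue never receives a nested Struct or Map:

value.converter=org.apache.kafka.connect.storage.StringConverter

Whether the stringified form is acceptable downstream depends on what the subscriber expects.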

Clean up warnings

The current code compiles with warnings. Modify the Maven configuration (pom.xml) to show the warnings (-Xlint:all), then clean up the warnings.
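
A minimal sketch of the pom.xml change, using the standard maven-compiler-plugin configuration (the plugin version stays whatever the project already pins):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <arg>-Xlint:all</arg>
    </compilerArgs>
  </configuration>
</plugin>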

Google Pub-Sub Gives Permissions Denied

Trying out the Java example of publishing to Google Pub/Sub, I keep getting Permission Denied. I am using a JSON key for a service account which has Admin access as well as Publisher access to Pub/Sub.
I have set the environment variable "GOOGLE_APPLICATION_CREDENTIALS" to point to the local JSON key file.
I am using Java 8 and version 0.30 of the jar.

com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: User not authorized to perform this action.

load tester misreports subscriber QPS

I recently ran load testing on google-cloud-java. The load test reports ~350MB/s publish and ~40MB/s subscribe on 8-core instances.

However, Stackdriver Monitoring shows that the subscriber is keeping up with the publisher. By the end of the load test, there are about 20k messages (200MB) left in the subscription. I assume that these messages were "already in the pipes" when the subscriber shut down. If the subscriber were 10x slower than the publisher, I'd expect to see many more messages left after a 10-minute run.

Decide on testing approach: mocking vs. not, and how to deal with unimplemented code.

From Urs: I ported the tests from the Nevado project for section 6 (steps 1 and 2 of issue #56).

What is working:

  • All tests ported
  • Tests compile
  • Tests run

What needs to be done:

  • Tests are not fully functional. Some of the Nevado tests are specific to their code, and these tests need to be adapted to our code (I noted this as a TODO in the source code).
  • The start and stop test methods need to be implemented (to set up and close a default connection). Should be pretty straightforward except for the next point.
  • Need to set up a Pub/Sub connection. Possibilities:
    • We can easily take the test environment from the integration test (but it depends on deprecated code).
    • We can use the Google Cloud Pub/Sub emulator, but I have not yet found an easy way to run it without installing the Google Cloud tool chain (so it will not work in Travis).
    • We can use our access to the official Google Cloud Pub/Sub server (but we should not share access on GitHub, thus no Travis).
    • We can mock the Pub/Sub connection.

For the PubSub connection, I would like to implement all possibilities and make it configurable. We can then decide whether for Travis we run the deprecated emulator or use mocking (or do both in sequence).

Initial discussion indicates that all are leaning towards mocking.

Ignore tests for unimplemented code

Add the @Ignore (org.junit.Ignore) annotation to tests for code that has not yet been implemented. The current tests might need to be split into smaller tests. Each test should be associated with an issue. The issue should only be closed once the associated test is no longer ignored and passes.
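
A minimal sketch of the pattern (the class and test names are hypothetical):

import org.junit.Ignore;
import org.junit.Test;

public class PubSubTopicSessionTest {

  // Skipped until the class under test exists; re-enable when the linked issue closes.
  @Ignore("PubSubTopicSession is not implemented yet; see the associated issue")
  @Test
  public void createsPublisherForTopic() {
    // test body goes here once the implementation lands
  }
}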

Kafka Connect: Sink connector eventually gets stuck in retry loop

I'm running this pubsub connector from master as of opening this ticket, within a Docker container based on confluentinc/cp-kafka-connect:3.3.0-1, which features Kafka 0.11.0.

Eventually, while piping messages from kafka to pubsub, the task will run into an HTTP/2 error and never be able to recover, repeating the error message below. This results in an increasing backlog, and a constantly repeated batch of messages.

The only thing that comes to mind with this is some sort of network address cache, causing the task to try to connect to a Pub/Sub node which has been shut down. I found that the Java runtime included in the Confluent container does not define a value for Java's networkaddress.cache.ttl setting, so it defaults to forever. I then changed the value to 30 seconds (by forking the container to change the file itself) and restarted my workers, but the problem came back after a few hours.
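
For what it's worth, the same TTL can be injected without forking the image, since the Kafka launcher scripts forward JVM flags through KAFKA_OPTS and sun.net.inetaddr.ttl is the legacy system-property spelling of that cache setting (a sketch, not a verified fix for this bug):

KAFKA_OPTS="-Dsun.net.inetaddr.ttl=30" bin/connect-distributed.sh config/connect-distributed.properties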

# java -version
openjdk version "1.8.0_144"
OpenJDK Runtime Environment (Zulu 8.23.0.3-linux64) (build 1.8.0_144-b01)
OpenJDK 64-Bit Server VM (Zulu 8.23.0.3-linux64) (build 25.144-b01, mixed mode)
[2017-09-13 05:17:57,552] WARN Commit of WorkerSinkTask{id=athena-staging-pubsub-1} offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-09-13 05:17:57,839] ERROR Commit of WorkerSinkTask{id=athena-staging-pubsub-1} offsets threw an unexpected exception: (org.apache.kafka.connect.runtime.WorkerSinkTask)
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing offsets. The underlying error was: The request timed out.
[2017-09-13 06:47:52,831] ERROR WorkerSinkTask{id=athena-staging-pubsub-1} Offset commit failed, rewinding to last committed offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
java.lang.RuntimeException: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP/2 error code: NO_ERROR
Received Goaway
session_timed_out
at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.flush(CloudPubSubSinkTask.java:293)
at org.apache.kafka.connect.sink.SinkTask.preCommit(SinkTask.java:117)
at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:305)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:164)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP/2 error code: NO_ERROR
Received Goaway
session_timed_out
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:500)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:459)
at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.flush(CloudPubSubSinkTask.java:290)
... 11 more
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP/2 error code: NO_ERROR
Received Goaway
session_timed_out
at io.grpc.Status.asRuntimeException(Status.java:545)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:417)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:458)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$500(ClientCallImpl.java:385)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$3.runInContext(ClientCallImpl.java:486)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
... 3 more

NoSuchMethodError using Kafka Connect 3.1.1

When running Kafka Connect 3.1.1 standalone following the Kafka Quickstart guide, the following exception occurs when trying to use a CloudPubSubSinkConnector configuration:

DEBUG Sending GET with input null to http://localhost:8081/schemas/ids/1 (io.confluent.kafka.schemaregistry.client.rest.RestService:118)
DEBUG Sending POST with input {"schema":"{\"type\":\"record\",\"name\":\"myrecord\",\"fields\":[{\"name\":\"f1\",\"type\":\"string\"}]}"} to http://localhost:8081/subjects/test-value (io.confluent.kafka.schemaregistry.client.rest.RestService:118)
DEBUG Flushing... (com.google.pubsub.kafka.sink.CloudPubSubSinkTask:260)
INFO WorkerSinkTask{id=CPSConnector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:262)
DEBUG Group connect-CPSConnector committed offset 0 for partition test-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:544)
DEBUG Finished WorkerSinkTask{id=CPSConnector-0} offset commit successfully in 9 ms (org.apache.kafka.connect.runtime.WorkerSinkTask:197)
ERROR Task CPSConnector-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
java.lang.NoSuchMethodError: org.apache.kafka.connect.sink.SinkRecord.<init>(Ljava/lang/String;ILorg/apache/kafka/connect/data/Schema;Ljava/lang/Object;Lorg/apache/kafka/connect/data/Schema;Ljava/lang/Object;JLjava/lang/Long;Lorg/apache/kafka/common/record/TimestampType;)V
	at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:359)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:239)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:172)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)

kafka -- CPS-source-connector dropping WorkerSourceTasks

I keep randomly dropping WorkerSourceTasks (starting with 10). They will not restart unless I tell the REST endpoint to start them again. I have investigated and tried different settings all day yesterday with no luck keeping them up.

  • Here is the output of a task failing after committing the offsets:
[2017-05-19 15:30:12,117] INFO Finished WorkerSourceTask{id=CPSConnector-4} commitOffsets successfully in 30 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-19 15:30:12,117] ERROR Task CPSConnector-4 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:141)
java.util.ConcurrentModificationException
	at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
	at java.util.HashMap$KeyIterator.next(HashMap.java:1461)
	at com.google.protobuf.AbstractMessageLite$Builder.checkForNullValues(AbstractMessageLite.java:379)
	at com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:366)
	at com.google.pubsub.v1.AcknowledgeRequest$Builder.addAllAckIds(AcknowledgeRequest.java:621)
	at com.google.pubsub.kafka.source.CloudPubSubSourceTask.ackMessages(CloudPubSubSourceTask.java:202)
	at com.google.pubsub.kafka.source.CloudPubSubSourceTask.poll(CloudPubSubSourceTask.java:108)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:162)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
[2017-05-19 15:30:12,173] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:142)
[2017-05-19 15:30:12,173] INFO Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:689)

Also, I cannot keep a watch on the tasks, because even though the state is 'FAILED', the header response is '200 OK':

HTTP/1.1 200 OK
Date: Fri, 19 May 2017 15:49:29 GMT
Content-Type: application/json
Content-Length: 1325
Server: Jetty(9.2.15.v20160210)

{
  "state": "FAILED",
  "trace": "java.util.ConcurrentModificationException\n\tat java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)\n\tat java.util.HashMap$KeyIterator.next(HashMap.java:1461)\n\tat com.google.protobuf.AbstractMessageLite$Builder.checkForNullValues(AbstractMessageLite.java:379)\n\tat com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:366)\n\tat com.google.pubsub.v1.AcknowledgeRequest$Builder.addAllAckIds(AcknowledgeRequest.java:621)\n\tat com.google.pubsub.kafka.source.CloudPubSubSourceTask.ackMessages(CloudPubSubSourceTask.java:202)\n\tat com.google.pubsub.kafka.source.CloudPubSubSourceTask.poll(CloudPubSubSourceTask.java:108)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:162)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:748)\n",
  "id": 4,
  "worker_id": "REDACTED:8083"
}
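
Since the 200 only reflects that the status request itself succeeded, monitoring has to parse the JSON body. A hedged shell sketch against the standard Connect REST endpoints (the connector name, task ID, and jq usage are illustrative):

state=$(curl -s localhost:8083/connectors/CPSConnector/tasks/4/status | jq -r .state)
if [ "$state" = "FAILED" ]; then
  # Connect exposes a per-task restart endpoint
  curl -X POST localhost:8083/connectors/CPSConnector/tasks/4/restart
fi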

Support for distributed kafka-connect

I was trying to work with the distributed connect supported by Kafka using ./bin/connect-distributed.sh config/connect-distributed.properties /kafka-connector/config/cps-source-connector.properties, but it seems like I cannot pull my logs from GCP with the distributed connector. I was able to do it with the standalone connector. Any clues why?
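
One likely explanation: connect-distributed.sh does not accept connector properties files as arguments (that is standalone-only); in distributed mode connectors are created through the REST API. A sketch of the equivalent call, reusing the property names from the standalone config shown later in these issues:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "CPSConnector",
  "config": {
    "connector.class": "com.google.pubsub.kafka.source.CloudPubSubSourceConnector",
    "tasks.max": "10",
    "cps.project": "<actual project>",
    "cps.subscription": "<actual subscription>",
    "kafka.topic": "test2"
  }
}'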

ALPN is not configured properly - error seen only for CloudPubSubSourceConnector

Steps done:

  • git clone
  • mvn package
  • Used the created jar in the kafka connect setup

Setup:

  • Using confluent platform docker image of kafka connect - FROM confluentinc/cp-kafka-connect:4.1.0
  • Intend to use CloudPubSubSourceConnector
  • Using the master branch of the pubsub connector to generate the jar.

The "ALPN is not configured" error showed up. I referenced the previous issues related to ALPN, but those were related to sink connectors, and the resolution for those was to update netty-tcnative-boringssl-static.

Any suggestions?

Error seen:
com.google.pubsub.kafka.source.CloudPubSubSourceConnector.verifySubscription(CloudPubSubSourceConnector.java:209)\n\tat com.google.pubsub.kafka.source.CloudPubSubSourceConnector.start(CloudPubSubSourceConnector.java:117)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)\n\tat org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:208)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:899)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:109)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:915)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:911)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.IllegalArgumentException: ALPN is not configured properly. See https://github.com/grpc/grpc-java/blob/master/SECURITY.md#troubleshooting for more information.\n\tat io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(GrpcSslContexts.java:166)\n\tat io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:136)\n\tat io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:124)\n\tat io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:94)\n\tat io.grpc.netty.NettyChannelBuilder$NettyTransportFactory$DefaultNettyTransportCreationParamsFilterFactory.<init>(NettyChannelBuilder.java:546)\n\tat io.grpc.netty.NettyChannelBuilder$NettyTransportFactory$DefaultNettyTransportCreationParamsFilterFactory.<init>(NettyChannelBuilder.java:539)\n\tat io.grpc.netty.NettyChannelBuilder$NettyTransportFactory.<init>(NettyChannelBuilder.java:477)\n\tat io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:325)\n\tat io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:362)\n\tat com.google.pubsub.kafka.common.ConnectorUtils.getChannel(ConnectorUtils.java:53)\n\tat com.google.pubsub.kafka.source.CloudPubSubSourceConnector.verifySubscription(CloudPubSubSourceConnector.java:200)\n\t... 13 more\nCaused by: java.lang.ClassNotFoundException: org/eclipse/jetty/alpn/ALPN\n\tat java.lang.Class.forName0(Native Method)\n\tat java.lang.Class.forName(Class.java:348)\n\tat io.grpc.netty.JettyTlsUtil.isJettyAlpnConfigured(JettyTlsUtil.java:64)\n\tat io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(GrpcSslContexts.java:153)\n\t... 23 more\n"
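
If the same remedy carries over from the sink-connector issues, the dependency update would look like this in the connector's pom (2.0.3.Final is simply the version the load-test build elsewhere in this document shades; treat it as illustrative):

<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>2.0.3.Final</version>
</dependency>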

stackdriver logs from pub/sub to kafka cps-source-connector

Kafka version 2.11-0.10.1.0 (have also tried 2.11-0.10.2.0 with updates to the deps).
I have hit a roadblock here; I have run through the docs too many times to count and still am coming up short. Hopefully someone can give me a pointer here so I can get this working.

Issue

When running the cps-source-connector I get this error:

ERROR Task CPSConnector-3 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Conversion error: null value for field that is required and has no default value
	at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:556)
	at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithEnvelope(JsonConverter.java:537)
	at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:291)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:182)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:160)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Steps to Reproduce

  1. Create pub/sub topic and subscription
  2. Create stackdriver export (sink) with a basic log filter, for example: mine is just gce_instance
  3. In GCE instance install/start kafka via quickstart guide
  4. Follow README instructions from this kafka pubsub repo to get the source connector setup and configured.
  5. create kafka topic
  6. run bin/connect-standalone.sh config/connect-standalone.properties config/cps-source-connector.properties to start the connector.
    The properties file looks like this:
name=CPSConnector
connector.class=com.google.pubsub.kafka.source.CloudPubSubSourceConnector
tasks.max=10
cps.project=<actual project>
cps.subscription=<actual subscription>
kafka.topic=test2
  7. Get error regarding null resource (see above)

Troubleshooting

  • Verified export creds: export GOOGLE_APPLICATION_CREDENTIALS=<path to google svc acct json>
  • Issued gcloud command locally to pull subscription data successfully: gcloud beta pubsub subscriptions pull <subscription name>
  • Combed through tons of code but made no progress (Java noob here)
  • Tons of permission checking in the Google project and cross-referencing the docs
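
One hedged reading of the stack trace: the failure happens inside JsonConverter while serializing the source record (a required schema field arriving as null), not inside the connector itself. If the schema envelope is not needed on the Kafka side, switching the worker's value converter sidesteps the schema check entirely; this is an assumption, not a confirmed fix, and it changes the on-topic format:

# in config/connect-standalone.properties
value.converter=org.apache.kafka.connect.storage.StringConverter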

BigQuery connector

I made a Dataflow pipeline that connects Pub/Sub to BigQuery. Any ideas where the right place to commit this would be?

grpc-all downgrade causes class not found on io.grpc.ServiceDescriptor

The kafka-connector pom specifies io.grpc:grpc-all:0.14.0, yet the other dependency, com.google.api.grpc:pubsub:0.0.9, requires io.grpc:grpc-core:0.15.0, since io.grpc.ServiceDescriptor does not exist until 0.15.0:

kafka-connector:master pom:

<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-all</artifactId>
  <version>0.14.0</version>
</dependency>
...
<dependency>
  <groupId>com.google.api.grpc</groupId>
  <artifactId>grpc-google-pubsub-v1</artifactId>
  <version>0.0.9</version>
</dependency>

whereas the pom for com.google.api.grpc:pubsub:0.0.9:

<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-all</artifactId>
  <version>0.15.0</version>
  <scope>compile</scope>
</dependency>

When trying to start a CloudPubSubGRPCPublisher, I receive:

Caught: java.lang.NoClassDefFoundError: io/grpc/ServiceDescriptor
java.lang.NoClassDefFoundError: io/grpc/ServiceDescriptor
    at trendGooglePubSubPitches.run(trendGooglePubSubPitches:12)
Caused by: java.lang.ClassNotFoundException: io.grpc.ServiceDescriptor
    ... 1 more

Finally, I confirm that grpc-core:0.14.0 does not have ServiceDescriptor whereas 0.15.0 does:

Ryan-Smalls-MacBook-Pro:kafka-connector rsmall$ jar tf ~/.m2/repository/io/grpc/grpc-core/0.14.0/grpc-core-0.14.0.jar | grep Service
io/grpc/ServerServiceDefinition$1.class
io/grpc/ServerServiceDefinition.class
io/grpc/BindableService.class
io/grpc/ServerServiceDefinition$Builder.class

Versus

Ryan-Smalls-MacBook-Pro:kafka-connector rsmall$ jar tf ~/Downloads/grpc-core-0.15.0.jar | grep Service
io/grpc/ServerServiceDefinition$1.class
io/grpc/ServiceDescriptor.class
io/grpc/ServerServiceDefinition.class
io/grpc/BindableService.class
io/grpc/ServerServiceDefinition$Builder.class

I downloaded that 0.15.0 jar from Maven Central to inspect it directly.
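
Given that, the straightforward fix sketch is to align the pom's direct dependency with what the pubsub artifact compiles against:

<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-all</artifactId>
  <version>0.15.0</version>
</dependency>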

Kafka PubSub Connector: Jetty ALPN/NPN has not been properly configured

I am using kafka_2.11-0.10.2.1 and the pubsub connector provided by Google in this repository. All I care to do is push data from a Kafka topic to a Pub/Sub one using a standalone connector. I followed all the steps as I should have:

  1. Produced the cps-kafka-connector.jar and placed it in kafka_2.11-0.10.2.1/libs
  2. Added the cps-sink-connector.properties file in kafka's config directory. The file looks like this:
name=CPSConnector
connector.class=com.google.pubsub.kafka.sink.CloudPubSubSinkConnector
tasks.max=10
topics=kafka_topic
cps.topic=pubsub_topic
cps.project=my_gcp_project_12345
  3. Created the kafka_topic and produced some messages.
  4. Created the pubsub_topic on my GCP account.
  5. I ran:
bin/connect-standalone.sh config/connect-standalone.properties config/cps-sink-connector.properties

but I am getting a series of errors which look to be caused by "Jetty ALPN/NPN has not been properly configured". I suppose this is not a configuration issue and has to do with the jar file itself, but maybe I got it wrong.

My understanding is that since no standard Java release has built-in support for ALPN today, we need to use the Jetty-ALPN (or Jetty-NPN if on Java < 8) bootclasspath extension for OpenJDK. To do this, I have to add an Xbootclasspath JVM option referencing the path to the Jetty alpn-boot jar.

java -Xbootclasspath/p:/path/to/jetty/alpn/extension.jar ...

I also need to use the release of the Jetty-ALPN jar specific to the version of Java used by the cps-kafka-connector.jar. Is this something I can change in the package configs before I build?
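
Two hedged notes: the alpn-boot version is keyed to the exact JRE that runs Kafka Connect (see the version table in grpc-java's SECURITY.md), not to the connector jar, so nothing needs to change before building. And the Kafka launcher scripts forward JVM flags through the KAFKA_OPTS environment variable, so the option can be injected at start time (the jar path below is a placeholder):

KAFKA_OPTS="-Xbootclasspath/p:/path/to/alpn-boot-<version>.jar" \
  bin/connect-standalone.sh config/connect-standalone.properties config/cps-sink-connector.properties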

Create a TopicConnectionFactory

The Publisher test suite (issue #56) is being implemented in the branch "jms_publisher_test_suite". The tests currently fail because there is no implementation of a TopicConnectionFactory.

kafka-source-connector worker tasks killing off over time

I am still seeing an issue with the worker tasks dying off over time. Here is a snippet from when one fails.

[2017-05-23 06:44:55,426] INFO Finished WorkerSourceTask{id=CPSConnector-2} commitOffsets successfully in 53 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 06:44:55,426] ERROR Task CPSConnector-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:141)
java.util.ConcurrentModificationException
	at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
	at java.util.HashMap$KeyIterator.next(HashMap.java:1461)
	at com.google.protobuf.AbstractMessageLite$Builder.checkForNullValues(AbstractMessageLite.java:379)
	at com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:366)
	at com.google.pubsub.v1.AcknowledgeRequest$Builder.addAllAckIds(AcknowledgeRequest.java:621)
	at com.google.pubsub.kafka.source.CloudPubSubSourceTask.ackMessages(CloudPubSubSourceTask.java:202)
	at com.google.pubsub.kafka.source.CloudPubSubSourceTask.poll(CloudPubSubSourceTask.java:108)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:162)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
[2017-05-23 06:44:55,427] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:142)
[2017-05-23 06:44:55,427] INFO Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:689)

Also, if I hit the REST endpoint I still see "state":"FAILED" but the status is HTTP/1.1 200 OK, so I would be unable to monitor and restart the tasks that fail. As you can see below, there are a few worker tasks that are not active, but the ones that are active are happy and healthy.

[2017-05-23 14:13:13,705] INFO Finished WorkerSourceTask{id=CPSConnector-0} commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:20,047] INFO Finished WorkerSourceTask{id=CPSConnector-2} commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:27,730] INFO Finished WorkerSourceTask{id=CPSConnector-4} commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:29,672] INFO Finished WorkerSourceTask{id=CPSConnector-5} commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:30,868] INFO Finished WorkerSourceTask{id=CPSConnector-6} commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:31,264] INFO Finished WorkerSourceTask{id=CPSConnector-7} commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:31,265] INFO Finished WorkerSourceTask{id=CPSConnector-9} commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:43,706] INFO Finished WorkerSourceTask{id=CPSConnector-0} commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)
[2017-05-23 14:13:50,048] INFO Finished WorkerSourceTask{id=CPSConnector-2} commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:371)

kubectl exec -it tnd-connector-2484114934-4d9cv /bin/bash
<v:/opt/kafka_2.11-0.10.2.0# curl -i localhost:8083/connectors/CPSConnector/tasks/1/status
HTTP/1.1 200 OK
Date: Tue, 23 May 2017 14:14:49 GMT
Content-Type: application/json
Content-Length: 1326
Server: Jetty(9.2.15.v20160210)

{"state":"FAILED","trace":"java.util.ConcurrentModificationException\n\tat 
java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)\n\tat 
java.util.HashMap$KeyIterator.next(HashMap.java:1461)\n\tat 
com.google.protobuf.AbstractMessageLite$Builder.checkForNullValues(AbstractMessageLite.java:379)\n\tat 
com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:366)\n\tat 
com.google.pubsub.v1.AcknowledgeRequest$Builder.addAllAckIds(AcknowledgeRequest.java:621)\n\tat 
com.google.pubsub.kafka.source.CloudPubSubSourceTask.ackMessages(CloudPubSubSourceTask.java:202)\n\tat 
com.google.pubsub.kafka.source.CloudPubSubSourceTask.poll(CloudPubSubSourceTask.java:108)\n\tat 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:162)\n\tat 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)\n\tat 
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)\n\tat 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat 
java.lang.Thread.run(Thread.java:748)\n","id":1,"worker_id":"REDACTED:8083"}root@tnd-connector-2484114934-4d9cv:/opt/kafka_2.11-0.10.2.0#

How to wait for an ack message to be delivered?

It seems that the AckReplyConsumer.ack() and nack() methods are asynchronous. Is there any mechanism for me to wait for the background messages to complete? I'm looking for an equivalent to Publisher.publishAllOutstanding(), but for acknowledgements.

Thanks.

Implement PubSubTopicSession class

The Publisher test suite (issue #56) is being implemented in the branch "jms_publisher_test_suite". The tests currently fail because the PubSubTopicSession class is not implemented.

com.google.api.gax.rpc.DeadlineExceededException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED

The connector throws the following error:

[2018-04-10 14:48:59,535] WARN WorkerSinkTask{id=cps-kafka-sink-0} Offset commit failed during close (org.apache.kafka.connect.runtime.WorkerSinkTask:348)
[2018-04-10 14:48:59,535] ERROR WorkerSinkTask{id=cps-kafka-sink-0} Commit of offsets threw an unexpected exception for sequence number 2: null (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.google.api.gax.rpc.DeadlineExceededException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9924109178ns
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.flush(CloudPubSubSinkTask.java:255)
	at org.apache.kafka.connect.sink.SinkTask.preCommit(SinkTask.java:117)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:345)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:547)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: com.google.api.gax.rpc.DeadlineExceededException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9924109178ns
	at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:500)
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:459)
	at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:76)
	at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
	at com.google.pubsub.kafka.sink.CloudPubSubSinkTask.flush(CloudPubSubSinkTask.java:253)
	... 11 more
Caused by: com.google.api.gax.rpc.DeadlineExceededException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9924109178ns
	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:51)
	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
	at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:95)
	at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:61)
	at com.google.common.util.concurrent.Futures$4.run(Futures.java:1123)
	at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:435)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:900)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:811)
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:675)
	at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:492)
	at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:467)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
	at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
	at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:391)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:475)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:557)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:478)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:590)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	... 3 more
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9924109178ns
	at io.grpc.Status.asRuntimeException(Status.java:526)
	... 19 more

After this error, Kafka and Connect shut down until it retries.
I'm using mostly default settings.

Implement PubSubTopicConnection class

The Publisher test suite (issue #56) is being implemented in the branch "jms_publisher_test_suite". The tests currently fail because the PubSubTopicConnection class is not implemented.

Implement PubSubMessageConsumer class

The Publisher test suite (issue #56) is being implemented in the branch "jms_publisher_test_suite". The tests currently fail because the PubSubMessageConsumer class is not implemented.

Failed to find any class that implements Connector

Hello,
Please help with configuring "CloudPubSubSinkConnector".

I have a very weird situation.

I have built the Pub/Sub connector from source on CentOS 7 with the following commands:

sudo yum install -y wget
sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
mvn --version

cd ~
sudo yum install -y git
git clone --recursive https://github.com/GoogleCloudPlatform/cloud-pubsub-kafka
cd cloud-pubsub-kafka/kafka-connector

mvn package

Then I copied everything from the 'target' folder to '/usr/share/java' (this folder is configured as plugin.path in /etc/kafka/connect-distributed.properties).

Restarted the confluent-kafka-connect service.

After it loaded, I checked the connector-plugins URL (http://localhost:8083/connector-plugins). On this page, I have information about my loaded Kafka Connect plugins:

[{"class":"com.google.pubsub.kafka.sink.CloudPubSubSinkConnector","type":"sink","version":"1.1.1-cp1"},{"class":"com.google.pubsub.kafka.source.CloudPubSubSourceConnector","type":"source","version":"1.1.1-cp1"},{"class":"io.confluent.connect.activemq.ActiveMQSourceConnector","type":"source","version":"4.1.1"},{"class":"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector","type":"sink","version":"4.1.1"},{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},{"class":"io.confluent.connect.ibm.mq.IbmMQSourceConnector","type":"source","version":"4.1.1"},{"class":"io.confluent.connect.jdbc.JdbcSinkConnector","type":"sink","version":"4.1.1"},{"class":"io.confluent.connect.jdbc.JdbcSourceConnector","type":"source","version":"4.1.1"},{"class":"io.confluent.connect.jms.JmsSourceConnector","type":"source","version":"4.1.1"},{"class":"io.confluent.connect.replicator.ReplicatorSourceConnector","type":"source","version":"4.1.1"},{"class":"io.confluent.connect.s3.S3SinkConnector","type":"sink","version":"4.1.1"},{"class":"io.confluent.connect.storage.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}]

But when I try to create a new connector with the following command:
curl -X POST \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{ "name": "pubsub_test", "config": { "connector.class": "com.google.pubsub.kafka.sink.CloudPubSubSinkConnector", "tasks.max": "1", "topics": "kafka_test_topic", "cps.topic": "pubsub_test_topic", "cps.project": "pubsub_test_project" } }' \
  http://localhost:8083/connectors

I see the following information:
{"error_code":500,"message":"Failed to find any class that implements Connector and which name matches com.google.pubsub.kafka.sink.CloudPubSubSinkConnector

Implement PubSubTopic class

The Publisher test suite (issue #56) is being implemented in the branch "jms_publisher_test_suite". The tests currently fail because the PubSubTopic class is not implemented.

pubsub loadtest fails with timeout using cloudshell

The load test fails when using Google Cloud Shell to attempt to run the test.

Steps to reproduce:

Using Cloud Shell:
  • Install the Go tool chain: sudo apt-get install golang
  • Invoke the command: python run.py --project my-project --client_types=gcloud_python

Expected:

Pub/Sub benchmarks

Attached Stacktrace

stanley_zheng@cloudreach-mars:~/pubsub/load-test-framework$ python run.py --project cloudreach-mars --client_types=gcloud_python --test=throughput
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:536)
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.google.pubsub.flic:flic:jar:1.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-compiler-plugin is missing. @ line 184, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
[INFO] ------------------------------------------------------------------------
[INFO] Detecting the operating system and CPU architecture
[INFO] ------------------------------------------------------------------------
[INFO] os.detected.name: linux
[INFO] os.detected.arch: x86_64
[INFO] os.detected.release: debian
[INFO] os.detected.release.version: 9
[INFO] os.detected.release.like.debian: true
[INFO] os.detected.classifier: linux-x86_64
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building CompatibilityTesting 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- protobuf-maven-plugin:0.5.0:compile (default) @ flic ---
[INFO] Compiling 1 proto file(s) to /home/stanley_zheng/pubsub/load-test-framework/target/generated-sources/protobuf/java
[INFO]
[INFO] --- protobuf-maven-plugin:0.5.0:compile-custom (default) @ flic ---
[INFO] Compiling 1 proto file(s) to /home/stanley_zheng/pubsub/load-test-framework/target/generated-sources/protobuf/grpc-java
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ flic ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 15 resources
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ flic ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 24 source files to /home/stanley_zheng/pubsub/load-test-framework/target/classes
[WARNING] /home/stanley_zheng/pubsub/load-test-framework/target/generated-sources/protobuf/grpc-java/com/google/pubsub/flic/common/LoadtestGrpc.java: Some input files use or override a deprecated API.
[WARNING] /home/stanley_zheng/pubsub/load-test-framework/target/generated-sources/protobuf/grpc-java/com/google/pubsub/flic/common/LoadtestGrpc.java: Recompile with -Xlint:deprecation for details.
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ flic ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /home/stanley_zheng/pubsub/load-test-framework/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ flic ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ flic ---
[INFO] Surefire report directory: /home/stanley_zheng/pubsub/load-test-framework/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.google.pubsub.flic.DriverTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
Running com.google.pubsub.flic.output.SheetsServiceTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.465 sec
Running com.google.pubsub.flic.common.LatencyDistributionTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.026 sec
Running com.google.pubsub.flic.controllers.ClientTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Tests run: 10, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ flic ---
[INFO] Building jar: /home/stanley_zheng/pubsub/load-test-framework/target/flic-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-shade-plugin:2.3:shade (driver) @ flic ---
[INFO] Including junit:junit:jar:4.12 in the shaded jar.
[INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
[INFO] Including org.apache.kafka:kafka_2.11:jar:0.10.0.0 in the shaded jar.
[INFO] Including com.yammer.metrics:metrics-core:jar:2.2.0 in the shaded jar.
[INFO] Including org.scala-lang:scala-library:jar:2.11.8 in the shaded jar.
[INFO] Including org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.0.4 in the shaded jar.
[INFO] Including org.apache.kafka:kafka-clients:jar:0.10.0.0 in the shaded jar.
[INFO] Including net.jpountz.lz4:lz4:jar:1.3.0 in the shaded jar.
[INFO] Including org.xerial.snappy:snappy-java:jar:1.1.2.4 in the shaded jar.
[INFO] Including net.sf.jopt-simple:jopt-simple:jar:4.9 in the shaded jar.
[INFO] Including org.apache.zookeeper:zookeeper:jar:3.4.6 in the shaded jar.
[INFO] Including jline:jline:jar:0.9.94 in the shaded jar.
[INFO] Including io.netty:netty:jar:3.7.0.Final in the shaded jar.
[INFO] Including com.beust:jcommander:jar:1.48 in the shaded jar.
[INFO] Including com.google.cloud:google-cloud-pubsub:jar:0.24.0-beta in the shaded jar.
[INFO] Including io.netty:netty-tcnative-boringssl-static:jar:2.0.3.Final in the shaded jar.
[INFO] Including com.google.cloud:google-cloud-core:jar:1.6.0 in the shaded jar.
[INFO] Including joda-time:joda-time:jar:2.9.2 in the shaded jar.
[INFO] Including org.json:json:jar:20160810 in the shaded jar.
[INFO] Including com.google.http-client:google-http-client:jar:1.22.0 in the shaded jar.
[INFO] Including com.google.api:api-common:jar:1.1.0 in the shaded jar.
[INFO] Including com.google.api:gax:jar:1.8.1 in the shaded jar.
[INFO] Including com.google.auto.value:auto-value:jar:1.2 in the shaded jar.
[INFO] Including org.threeten:threetenbp:jar:1.3.3 in the shaded jar.
[INFO] Including com.google.auth:google-auth-library-oauth2-http:jar:0.8.0 in the shaded jar.
[INFO] Including com.google.protobuf:protobuf-java-util:jar:3.3.1 in the shaded jar.
[INFO] Including com.google.code.gson:gson:jar:2.7 in the shaded jar.
[INFO] Including com.google.api.grpc:proto-google-common-protos:jar:0.1.19 in the shaded jar.
[INFO] Including com.google.api.grpc:proto-google-iam-v1:jar:0.1.19 in the shaded jar.
[INFO] Including com.google.cloud:google-cloud-core-grpc:jar:1.6.0 in the shaded jar.
[INFO] Including com.google.auth:google-auth-library-credentials:jar:0.8.0 in the shaded jar.
[INFO] Including com.google.protobuf:protobuf-java:jar:3.3.1 in the shaded jar.
[INFO] Including io.grpc:grpc-protobuf:jar:1.6.1 in the shaded jar.
[INFO] Including io.grpc:grpc-protobuf-lite:jar:1.6.1 in the shaded jar.
[INFO] Including io.grpc:grpc-context:jar:1.6.1 in the shaded jar.
[INFO] Including com.google.api:gax-grpc:jar:0.25.1 in the shaded jar.
[INFO] Including com.google.api.grpc:proto-google-cloud-pubsub-v1:jar:0.1.19 in the shaded jar.
[INFO] Including com.google.api.grpc:grpc-google-cloud-pubsub-v1:jar:0.1.19 in the shaded jar.
[INFO] Including io.grpc:grpc-netty:jar:1.6.1 in the shaded jar.
[INFO] Including io.grpc:grpc-core:jar:1.6.1 in the shaded jar.
[INFO] Including com.google.instrumentation:instrumentation-api:jar:0.4.3 in the shaded jar.
[INFO] Including io.opencensus:opencensus-api:jar:0.5.1 in the shaded jar.
[INFO] Including io.netty:netty-codec-http2:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-codec-http:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-codec:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-handler:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-buffer:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-common:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-handler-proxy:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-transport:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-resolver:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.netty:netty-codec-socks:jar:4.1.14.Final in the shaded jar.
[INFO] Including io.grpc:grpc-stub:jar:1.6.1 in the shaded jar.
[INFO] Including io.grpc:grpc-auth:jar:1.6.1 in the shaded jar.
[INFO] Including commons-io:commons-io:jar:2.4 in the shaded jar.
[INFO] Including org.apache.commons:commons-lang3:jar:3.4 in the shaded jar.
[INFO] Including com.google.guava:guava:jar:22.0 in the shaded jar.
[INFO] Including com.google.code.findbugs:jsr305:jar:1.3.9 in the shaded jar.
[INFO] Including com.google.errorprone:error_prone_annotations:jar:2.0.18 in the shaded jar.
[INFO] Including com.google.j2objc:j2objc-annotations:jar:1.1 in the shaded jar.
[INFO] Including org.codehaus.mojo:animal-sniffer-annotations:jar:1.14 in the shaded jar.
[INFO] Including com.google.apis:google-api-services-pubsub:jar:v1-rev7-1.21.0 in the shaded jar.
[INFO] Including org.slf4j:slf4j-api:jar:1.7.21 in the shaded jar.
[INFO] Including org.slf4j:slf4j-log4j12:jar:1.7.21 in the shaded jar.
[INFO] Including log4j:log4j:jar:1.2.17 in the shaded jar.
[INFO] Including com.google.apis:google-api-services-compute:jar:v1-rev125-1.22.0 in the shaded jar.
[INFO] Including com.google.apis:google-api-services-storage:jar:v1-rev85-1.22.0 in the shaded jar.
[INFO] Including com.google.apis:google-api-services-monitoring:jar:v3-rev9-1.22.0 in the shaded jar.
[INFO] Including com.google.api-client:google-api-client:jar:1.22.0 in the shaded jar.
[INFO] Including com.google.oauth-client:google-oauth-client:jar:1.22.0 in the shaded jar.
[INFO] Including com.google.http-client:google-http-client-jackson2:jar:1.22.0 in the shaded jar.
[INFO] Including com.fasterxml.jackson.core:jackson-core:jar:2.1.3 in the shaded jar.
[INFO] Including com.google.guava:guava-jdk5:jar:17.0 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpcore:jar:4.2.5 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpclient:jar:4.2.5 in the shaded jar.
[INFO] Including commons-logging:commons-logging:jar:1.1.1 in the shaded jar.
[INFO] Including commons-codec:commons-codec:jar:1.6 in the shaded jar.
[INFO] Including com.google.apis:google-api-services-sheets:jar:v4-rev108-1.22.0 in the shaded jar.
[INFO] Including com.google.oauth-client:google-oauth-client-java6:jar:1.11.0-beta in the shaded jar.
[INFO] Including com.google.oauth-client:google-oauth-client-jetty:jar:1.22.0 in the shaded jar.
[INFO] Including org.mortbay.jetty:jetty:jar:6.1.26 in the shaded jar.
[INFO] Including org.mortbay.jetty:jetty-util:jar:6.1.26 in the shaded jar.
[INFO] Including org.mortbay.jetty:servlet-api:jar:2.5-20081211 in the shaded jar.
[INFO] Including com.101tec:zkclient:jar:0.7 in the shaded jar.
[WARNING] guava-22.0.jar, guava-jdk5-17.0.jar define 1340 overlappping classes:
[WARNING]   - com.google.common.cache.LocalCache$KeyIterator
[WARNING]   - com.google.common.collect.ImmutableMapValues$1
[WARNING]   - com.google.common.collect.WellBehavedMap$EntrySet$1
[WARNING]   - com.google.common.io.LineProcessor
[WARNING]   - com.google.common.util.concurrent.AbstractService$5
[WARNING]   - com.google.common.io.BaseEncoding$StandardBaseEncoding$2
[WARNING]   - com.google.common.reflect.ImmutableTypeToInstanceMap
[WARNING]   - com.google.common.io.ByteProcessor
[WARNING]   - com.google.common.math.package-info
[WARNING]   - com.google.common.util.concurrent.SimpleTimeLimiter
[WARNING]   - 1330 more...
[WARNING] maven-shade-plugin has detected that some .class files
[WARNING] are present in two or more JARs. When this happens, only
[WARNING] one single version of the class is copied in the uberjar.
[WARNING] Usually this is not harmful and you can skeep these
[WARNING] warnings, otherwise try to manually exclude artifacts
[WARNING] based on mvn dependency:tree -Ddetail=true and the above
[WARNING] output
[WARNING] See http://docs.codehaus.org/display/MAVENUSER/Shade+Plugin
[INFO] Replacing /home/stanley_zheng/pubsub/load-test-framework/target/driver.jar with /home/stanley_zheng/pubsub/load-test-framework/target/flic-1.0-SNAPSHOT-shaded.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 34.498 s
[INFO] Finished at: 2018-05-28T15:40:48-04:00
[INFO] Final Memory: 30M/106M
[INFO] ------------------------------------------------------------------------
java -jar target/driver.jar --project cloudreach-mars --cps_gcloud_python_publisher_count=1 --cps_gcloud_python_subscriber_count=1 --message_size=10000 --publish_batch_size=10 --request_rate=1000000000 --max_outstanding_requests=1600 --loadtest_duration=10m --burn_in_duration=2m --publish_batch_duration=50ms --num_cores_test
INFO-Topic already exists, reusing.
INFO-File cps-gcloud-java-publisher_startup_script.sh is current, reusing.
INFO-File kafka-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-ruby-subscriber_startup_script.sh is current, reusing.
INFO-File cps-gcloud-python-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-node-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-dotnet-subscriber_startup_script.sh is current, reusing.
INFO-File cps-gcloud-ruby-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-go-subscriber_startup_script.sh is current, reusing.
INFO-File kafka-subscriber_startup_script.sh is current, reusing.
INFO-File cps-gcloud-python-subscriber_startup_script.sh is current, reusing.
INFO-File cps-gcloud-go-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-node-subscriber_startup_script.sh is current, reusing.
INFO-File cps-gcloud-dotnet-publisher_startup_script.sh is current, reusing.
INFO-File cps-gcloud-java-subscriber_startup_script.sh is current, reusing.
INFO-File cps.zip is current, reusing.
INFO-File driver.jar is out of date, uploading new version.
INFO-Pub/Sub actions completed.
INFO-Kafka actions completed.
INFO-File driver.jar created.
INFO-File uploads completed.
INFO-Instance group creation completed.
INFO-Starting instances.
INFO-Successfully started all instances.
INFO-Connecting to 35.202.29.100:5000
INFO-Connecting to 35.232.176.112:5000
  

ERROR-Client failed to start 11 times, shutting down.
ERROR-Shutting down:
io.grpc.StatusRuntimeException: UNAVAILABLE
        at io.grpc.Status.asRuntimeException(Status.java:526)
        at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:385)
        at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:422)
        at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:61)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:504)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:425)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:536)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /35.232.176.112:5000
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        ... 1 more
ERROR-Client failed to start 11 times, shutting down.
May 28, 2018 3:49:22 PM com.google.common.util.concurrent.AggregateFuture$RunningState handleException
SEVERE: Got more than one input Future failure. Logging failures after the first
io.grpc.StatusRuntimeException: UNAVAILABLE
        at io.grpc.Status.asRuntimeException(Status.java:526)
        at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:385)
        at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:422)
        at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:61)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:504)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:425)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:536)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /35.232.176.112:5000
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        ... 1 more

ERROR-An error occurred...
io.grpc.StatusRuntimeException: UNAVAILABLE
        at io.grpc.Status.asRuntimeException(Status.java:526)
        at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:385)
        at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:422)
        at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:61)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:504)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:425)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:536)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /35.202.29.100:5000
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        ... 1 more

With Pub/Sub, if I get a message with a duplicate ID, does acking it ack the previous duplicate?

I am trying to acknowledge receipt of messages at specific times, to reduce the chance of acknowledging a message that has been received but not yet persisted to disk.

If the subscriber gets a message with ID #1, I put it in a list to be acknowledged later. If I later receive a duplicate message, also with ID #1, is it OK to acknowledge it immediately, or does that acknowledge the previous message as well?
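
A minimal sketch of the deferred-ack pattern described above, assuming the standard Java client (com.google.cloud.pubsub.v1): each delivery hands the receiver its own AckReplyConsumer handle, so acking a redelivery completes only that delivery's handle, not the earlier one. DeferredAckReceiver and persistToDisk are hypothetical names, and the persistence step is a stand-in.

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class DeferredAckReceiver implements MessageReceiver {
  // Ack handles waiting on the persistence step, keyed by message ID.
  private final ConcurrentMap<String, AckReplyConsumer> pending = new ConcurrentHashMap<>();

  @Override
  public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
    if (pending.putIfAbsent(message.getMessageId(), consumer) != null) {
      // Redelivery of a message still being persisted: each delivery carries its
      // own ack handle, so acking this one does not invoke the earlier handle.
      consumer.ack();
      return;
    }
    persistToDisk(message);                        // hypothetical persistence step
    pending.remove(message.getMessageId()).ack();  // ack only after persisting
  }

  private void persistToDisk(PubsubMessage message) { /* write to durable storage */ }
}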

Service Account Authentication not working. Gives permission errors

I am writing the code below to connect to a Pub/Sub subscriber:

// Assumes the google-cloud-pubsub Java client (com.google.cloud.pubsub.v1.Subscriber,
// com.google.auth.oauth2.ServiceAccountCredentials, com.google.api.gax.core.*).
ServiceAccountCredentials serviceCreds =
    ServiceAccountCredentials.fromStream(new FileInputStream("*.json"));
CredentialsProvider creds = FixedCredentialsProvider.create(serviceCreds);
subscriber = Subscriber.newBuilder(subscription, new MessageReceiverExample())
    .setCredentialsProvider(creds).build();

I am getting "UnAuthenticatedException" when i am Trying to Listen via Subscriber. The code works if i use defaultApplicationCredentials(). But my requirement needs me to connect via the json file and not set Environment Variables.

GRPC channel: PubSub inbound message size limit

Hi everyone,

For various reasons, my source connector is trying to ingest an unusually large message.

INFO Error while retrieving records, treating as an empty poll. java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: io.grpc.netty.NettyClientTransport$3: Frame size 5435421 exceeds maximum: 4194304. (com.google.pubsub.kafka.source.CloudPubSubSourceTask)

The payload exceeds gRPC's default maximum inbound message size (about 4 MB, i.e. 4194304 bytes), so to let the connector ingest the message, the option .maxInboundMessageSize(eightMegabytes) needs to be added to the gRPC channel builder.
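
A minimal sketch of that option, assuming grpc-java's Netty transport; the target address and the eight-megabyte figure are illustrative, not connector configuration:

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

// Raise the inbound limit above gRPC's 4194304-byte default.
int eightMegabytes = 8 * 1024 * 1024;
ManagedChannel channel =
    NettyChannelBuilder.forTarget("pubsub.googleapis.com:443")
        .maxInboundMessageSize(eightMegabytes)
        .build();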

Would it be useful to make this configurable? I can contribute that, but I've been very busy lately, so it might take a while to land :)

Thanks in advance.
