
spring-integration-aws's Introduction


Spring Integration Extension for Amazon Web Services (AWS)

Introduction

Amazon Web Services (AWS)

Launched in 2006, Amazon Web Services (AWS) provides key infrastructure services for businesses through its cloud computing platform. Using cloud computing, businesses can adopt a new business model whereby they do not have to plan for and invest in procuring their own IT infrastructure: they can use the infrastructure and services provided by the cloud service provider and pay as they use the services. Visit AWS Products for more details about the various products offered by Amazon as part of its cloud computing services.

Spring Integration Extension for Amazon Web Services provides Spring Integration adapters for the various services provided by the AWS SDK for Java. Note that the Spring Integration AWS Extension is based on the Spring Cloud AWS project.

Spring Integration's extensions to AWS

The current project version is 3.0.x; it requires a minimum of Java 17, Spring Integration 6.1.x and Spring Cloud AWS 3.0.x. This version is also fully based on AWS Java SDK v2 and therefore has many breaking changes compared with the previous version; for example, XML configuration support has been removed entirely.

This guide briefly explains the various adapters available for Amazon Web Services, such as:

  • Amazon Simple Storage Service (S3)
  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Notification Service (SNS)
  • Amazon DynamoDB
  • Amazon Kinesis

Contributing

Pull requests are welcome. Please see the contributor guidelines for details. Additionally, if you are contributing, we recommend following the process for Spring Integration as outlined in the administrator guidelines.

Dependency Management

These dependencies are optional in the project:

  • io.awspring.cloud:spring-cloud-aws-sns - for SNS channel adapters
  • io.awspring.cloud:spring-cloud-aws-sqs - for SQS channel adapters
  • io.awspring.cloud:spring-cloud-aws-s3 - for S3 channel adapters
  • org.springframework.integration:spring-integration-file - for S3 channel adapters
  • org.springframework.integration:spring-integration-http - for SNS inbound channel adapter
  • software.amazon.awssdk:kinesis - for Kinesis channel adapters
  • software.amazon.kinesis:amazon-kinesis-client - for KCL-based inbound channel adapter
  • com.amazonaws:amazon-kinesis-producer - for KPL-based MessageHandler
  • software.amazon.awssdk:dynamodb - for DynamoDbMetadataStore and DynamoDbLockRegistry
  • software.amazon.awssdk:s3-transfer-manager - for S3MessageHandler
  • software.amazon.awssdk:aws-crt-client - for S3MessageHandler

Consider including the appropriate dependency in your project when you use a particular component from this project.

Adapters

Amazon Simple Storage Service (Amazon S3)

Introduction

The S3 Channel Adapters are based on the AmazonS3 template and TransferManager. See their specification and JavaDocs for more information.

Inbound Channel Adapter

The S3 Inbound Channel Adapter is represented by the S3InboundFileSynchronizingMessageSource and allows pulling S3 objects as files from an S3 bucket into a local directory for synchronization. This adapter is similar to the Inbound Channel Adapters in the FTP and SFTP Spring Integration modules. See the FTP/FTPS Adapters chapter for more information about common options and the SessionFactory, RemoteFileTemplate and FileListFilter abstractions.

The Java Configuration is:

@SpringBootApplication
public class MyConfiguration {

    // S3_BUCKET, LOCAL_FOLDER and PARSER (a SpelExpressionParser) are assumed constants
    @Autowired
    private S3Client amazonS3;

    @Bean
    public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
        S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(this.amazonS3);
        synchronizer.setDeleteRemoteFiles(true);
        synchronizer.setPreserveTimestamp(true);
        synchronizer.setRemoteDirectory(S3_BUCKET);
        synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.test$"));
        Expression expression = PARSER.parseExpression("#this.toUpperCase() + '.a'");
        synchronizer.setLocalFilenameGeneratorExpression(expression);
        return synchronizer;
    }

    @Bean
    @InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "100"))
    public S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource() {
        S3InboundFileSynchronizingMessageSource messageSource =
                new S3InboundFileSynchronizingMessageSource(s3InboundFileSynchronizer());
        messageSource.setAutoCreateLocalDirectory(true);
        messageSource.setLocalDirectory(LOCAL_FOLDER);
        messageSource.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return messageSource;
    }

    @Bean
    public PollableChannel s3FilesChannel() {
        return new QueueChannel();
    }
}

With this configuration, you receive messages with a java.io.File payload from the s3FilesChannel after periodic synchronization of the content from the Amazon S3 bucket into the local directory.

Streaming Inbound Channel Adapter

This adapter produces messages with payloads of type InputStream, allowing S3 objects to be fetched without writing to the local file system. Since the session remains open, the consuming application is responsible for closing the session when the file has been consumed. The session is provided in the closeableResource header (IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE). Standard framework components, such as the FileSplitter and StreamTransformer, will automatically close the session.

The following Spring Boot application provides an example of configuring the S3 inbound streaming adapter using Java configuration:

@SpringBootApplication
public class S3JavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(S3JavaApplication.class)
            .web(WebApplicationType.NONE)
            .run(args);
    }

    @Autowired
    private S3Client amazonS3;

    @Bean
    @InboundChannelAdapter(value = "s3Channel", poller = @Poller(fixedDelay = "100"))
    public MessageSource<InputStream> s3InboundStreamingMessageSource() {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
        messageSource.setRemoteDirectory(S3_BUCKET);
        messageSource.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                                   "streaming"));
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "s3Channel", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer();
    }

    @Bean
    public S3RemoteFileTemplate template() {
        return new S3RemoteFileTemplate(new S3SessionFactory(amazonS3));
    }

    @Bean
    public PollableChannel s3Channel() {
        return new QueueChannel();
    }
}

NOTE: Unlike the non-streaming inbound channel adapter, this adapter does not prevent duplicates by default. If you do not delete the remote file and wish to prevent the file being processed again, you can configure an S3PersistentAcceptOnceFileListFilter in the filter attribute. If you don't actually want to persist the state, an in-memory SimpleMetadataStore can be used with the filter. If you wish to use a filename pattern (or regex) as well, use a CompositeFileListFilter.
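
For example, a hedged sketch of such a composite filter that combines a filename pattern with persistent duplicate prevention (the pattern, metadata store and prefix are illustrative assumptions):

@Bean
public FileListFilter<S3Object> s3FileListFilter() {
    CompositeFileListFilter<S3Object> filter = new CompositeFileListFilter<>();
    // match only CSV objects; the pattern is an assumption
    filter.addFilter(new S3SimplePatternFileListFilter("*.csv"));
    // remember already-processed keys; swap SimpleMetadataStore for a persistent store in production
    filter.addFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "streaming"));
    return filter;
}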

Outbound Channel Adapter

The S3 Outbound Channel Adapter is represented by the S3MessageHandler and allows performing upload, download and copy operations (see the S3MessageHandler.Command enum) on the provided S3 bucket.

The Java Configuration is:

@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private S3AsyncClient amazonS3;

    @Bean
    @ServiceActivator(inputChannel = "s3UploadChannel")
    public MessageHandler s3MessageHandler() {
        return new S3MessageHandler(this.amazonS3, "my-bucket");
    }

}

With this configuration, you can send a message with a java.io.File payload, and the transferManager.upload() operation is performed, where the file name is used as the S3 object key.

See more information in the S3MessageHandler JavaDocs.

NOTE: The AWS SDK recommends using the S3CrtAsyncClient for the S3TransferManager; therefore, an S3AsyncClient.crtBuilder() should be used to satisfy the respective upload and download requirements.
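
A minimal sketch of such a client bean, assuming default credentials and an illustrative region:

@Bean
public S3AsyncClient s3AsyncClient() {
    // the CRT-based client backs the S3TransferManager used by the S3MessageHandler
    return S3AsyncClient.crtBuilder()
            .region(Region.US_EAST_1) // the region is an assumption
            .build();
}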

Outbound Gateway

The S3 Outbound Gateway is represented by the same S3MessageHandler with the produceReply = true constructor argument for Java Configuration.

The "request-reply" nature of this gateway is async and the Transfer result from the TransferManager operation is sent to the outputChannel, assuming the transfer progress observation in the downstream flow.

A TransferListener can be supplied via the AwsHeaders.TRANSFER_LISTENER header of the request message to track the transfer progress.

See more information in the S3MessageHandler JavaDocs.
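
A hedged configuration sketch for the gateway variant, assuming the constructor with a produceReply flag and an illustrative reply channel:

@Bean
@ServiceActivator(inputChannel = "s3RequestChannel")
public MessageHandler s3OutboundGateway(S3AsyncClient amazonS3) {
    S3MessageHandler gateway = new S3MessageHandler(amazonS3, "my-bucket", true); // produceReply = true
    gateway.setOutputChannel(transferResultChannel()); // transferResultChannel() is an assumed bean
    return gateway;
}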

Simple Email Service (SES)

There is no adapter for SES, since Spring Cloud AWS provides implementations of org.springframework.mail.MailSender - SimpleEmailServiceMailSender and SimpleEmailServiceJavaMailSender - which can be injected into a MailSendingMessageHandler.
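
For illustration, a minimal sketch wiring the Spring Cloud AWS mail sender into the standard Spring Integration mail handler (the channel name and SesClient bean are assumptions):

@Bean
@ServiceActivator(inputChannel = "sendEmailChannel")
public MessageHandler sesMessageHandler(SesClient amazonSes) {
    // SimpleEmailServiceMailSender comes from Spring Cloud AWS; the handler from spring-integration-mail
    return new MailSendingMessageHandler(new SimpleEmailServiceMailSender(amazonSes));
}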

Amazon Simple Queue Service (SQS)

The SQS adapters are fully based on the Spring Cloud AWS foundation, so for more information about the background components and core configuration, refer to that project's documentation.

Outbound Channel Adapter

The SQS Outbound Channel Adapter is represented by the SqsMessageHandler implementation and allows sending messages to an SQS queue with the provided SqsAsyncClient. The SQS queue can be configured explicitly on the adapter (using an org.springframework.integration.expression.ValueExpression) or as a SpEL Expression, which is evaluated against the request message as the root object of the evaluation context. In addition, the queue can be extracted from the message headers under AwsHeaders.QUEUE.

The Java Configuration is pretty simple:

@SpringBootApplication
public class MyConfiguration {

    @Bean
    @ServiceActivator(inputChannel = "sqsSendChannel")
    public MessageHandler sqsMessageHandler(SqsAsyncClient amazonSqs) {
        return new SqsMessageHandler(amazonSqs);
    }

}

Starting with version 2.0, the SqsMessageHandler can be configured with a HeaderMapper to map message headers to SQS message attributes. See the SqsHeaderMapper implementation for more information, and consult Amazon SQS Message Attributes for value types and restrictions.
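
For example, a sketch that attaches the SqsHeaderMapper to the handler (the queue name is an assumption):

@Bean
@ServiceActivator(inputChannel = "sqsSendChannel")
public MessageHandler sqsMessageHandlerWithHeaderMapper(SqsAsyncClient amazonSqs) {
    SqsMessageHandler handler = new SqsMessageHandler(amazonSqs);
    handler.setQueue("myQueue"); // the queue name is an assumption
    handler.setHeaderMapper(new SqsHeaderMapper());
    return handler;
}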

Inbound Channel Adapter

The SQS Inbound Channel Adapter is a message-driven implementation of the MessageProducer and is represented by the SqsMessageDrivenChannelAdapter. This channel adapter is based on the io.awspring.cloud.sqs.listener.SqsMessageListenerContainer to receive messages from the provided queues asynchronously and send an enhanced Spring Integration Message to the provided MessageChannel.

The Java Configuration is pretty simple:

@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private SqsAsyncClient amazonSqs;

    @Bean
    public PollableChannel inputChannel() {
        return new QueueChannel();
    }

    @Bean
    public MessageProducer sqsMessageDrivenChannelAdapter() {
        SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(this.amazonSqs, "myQueue");
        adapter.setOutputChannel(inputChannel());
        return adapter;
    }
}

The target listener container can be configured via the SqsMessageDrivenChannelAdapter.setSqsContainerOptions(SqsContainerOptions) option.
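
A hedged tuning sketch, assuming illustrative values for the Spring Cloud AWS SqsContainerOptions:

@Bean
public MessageProducer tunedSqsMessageDrivenChannelAdapter(SqsAsyncClient amazonSqs) {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(amazonSqs, "myQueue");
    adapter.setSqsContainerOptions(
            SqsContainerOptions.builder()
                    .maxConcurrentMessages(10)          // illustrative value
                    .pollTimeout(Duration.ofSeconds(5)) // illustrative value
                    .build());
    adapter.setOutputChannel(inputChannel());
    return adapter;
}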

Amazon Simple Notification Service (SNS)

Amazon SNS is a publish-subscribe messaging system that allows clients to publish notifications to a particular topic. Other interested clients may subscribe using different protocols, such as HTTP/HTTPS, e-mail, or an Amazon SQS queue, to receive the messages. In addition, mobile devices can be registered as subscribers from the AWS Management Console.

Unfortunately, Spring Cloud AWS doesn't provide flexible components that can be used from channel adapter implementations; the Amazon SNS API, on the other hand, is fairly simple. Hence, the Spring Integration AWS SNS support is straightforward and simply provides the channel adapter foundation for Spring Integration applications.

Since e-mail, SMS and mobile device subscription/unsubscription confirmation is outside the scope of a Spring Integration application and can be done only from the AWS Management Console, we provide only an HTTP/HTTPS SNS endpoint in the form of the SnsInboundChannelAdapter. The SQS-to-SNS subscription can also be done through account configuration: https://docs.aws.amazon.com/sns/latest/dg/subscribe-sqs-queue-to-sns-topic.html.

Inbound Channel Adapter

The SnsInboundChannelAdapter is an extension of the HttpRequestHandlingMessagingGateway and must be a part of a Spring MVC application. Its URL must be registered in the AWS Management Console to add this endpoint as a subscriber to the SNS topic. However, before receiving any notifications, this HTTP endpoint must confirm the subscription.

See SnsInboundChannelAdapter JavaDocs for more information.

An important option of this adapter to consider is handleNotificationStatus. This boolean flag indicates whether the adapter should send a SubscriptionConfirmation/UnsubscribeConfirmation message to the output-channel. If it does, the AwsHeaders.NOTIFICATION_STATUS message header is present in the message with a NotificationStatus object, which can be used in the downstream flow to confirm the subscription, or to "re-confirm" it in case of an UnsubscribeConfirmation message.

In addition, the AwsHeaders#SNS_MESSAGE_TYPE message header is provided to simplify routing in the downstream flow.

The Java Configuration is pretty simple:

@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private SnsClient amazonSns;

    @Bean
    public PollableChannel inputChannel() {
        return new QueueChannel();
    }

    @Bean
    public HttpRequestHandler snsInboundChannelAdapter() {
        SnsInboundChannelAdapter adapter = new SnsInboundChannelAdapter(this.amazonSns, "/mySampleTopic");
        adapter.setRequestChannel(inputChannel());
        adapter.setHandleNotificationStatus(true);
        return adapter;
    }
}

Note: by default, the message payload is a Map converted from the received topic JSON message. For convenience, a payload-expression is provided, with the Message as the root object of the evaluation context. Hence, even some HTTP headers, populated by the DefaultHttpHeaderMapper, are available for the evaluation.
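
For instance, a hedged sketch that extracts only the Message field of the notification JSON, applied to the adapter from the configuration above (bracket access is used because the default payload is a Map):

adapter.setPayloadExpression(
        new SpelExpressionParser().parseExpression("payload['Message']"));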

Outbound Channel Adapter

The SnsMessageHandler is a simple one-way Outbound Channel Adapter that sends a topic notification using the SnsAsyncClient service.

This Channel Adapter (MessageHandler) accepts these options:

  • topic-arn (topic-arn-expression) - the SNS topic to send the notification to;
  • subject (subject-expression) - the SNS notification subject;
  • body-expression - the SpEL expression to evaluate the message property for the software.amazon.awssdk.services.sns.model.PublishRequest;
  • resource-id-resolver - a ResourceIdResolver bean reference to resolve logical topic names to physical resource ids.

See SnsMessageHandler JavaDocs for more information.

The Java Config looks like:

@Bean
public MessageHandler snsMessageHandler(SnsAsyncClient amazonSns) {
    SpelExpressionParser spelExpressionParser = new SpelExpressionParser();
    SnsMessageHandler handler = new SnsMessageHandler(amazonSns);
    handler.setTopicArn("arn:aws:sns:eu-west:123456789012:test");
    String bodyExpression = "T(SnsBodyBuilder).withDefault(payload).forProtocols(payload.substring(0, 140), 'sms')";
    handler.setBodyExpression(spelExpressionParser.parseExpression(bodyExpression));

    // message-group ID and deduplication ID are used for FIFO topics
    handler.setMessageGroupId("foo-messages");
    String deduplicationExpression = "headers.id";
    handler.setMessageDeduplicationIdExpression(spelExpressionParser.parseExpression(deduplicationExpression));
    return handler;
}

NOTE: the bodyExpression can be evaluated to an org.springframework.integration.aws.support.SnsBodyBuilder, allowing the configuration of a json messageStructure for the PublishRequest and the provision of separate messages for different protocols. The same SnsBodyBuilder rule is applied to the raw payload if the bodyExpression hasn't been configured.

NOTE: if the payload of the requestMessage is already a software.amazon.awssdk.services.sns.model.PublishRequest, the SnsMessageHandler doesn't do anything with it, and it is sent as-is.

Starting with version 2.0, the SnsMessageHandler can be configured with a HeaderMapper to map message headers to SNS message attributes. See the SnsHeaderMapper implementation for more information, and consult Amazon SNS Message Attributes for value types and restrictions.

Starting with version 2.5.3, the SnsMessageHandler supports sending to SNS FIFO topics using the messageGroupId/messageGroupIdExpression and messageDeduplicationIdExpression properties.

Metadata Store for Amazon DynamoDB

The DynamoDbMetadataStore, a ConcurrentMetadataStore implementation, is provided to keep the metadata for Spring Integration components in a distributed Amazon DynamoDB store. The implementation is based on a simple table with metadataKey and metadataValue attributes, both of string type, where metadataKey is the partition key of the table. By default, the SpringIntegrationMetadataStore table is used, and it is created during DynamoDbMetadataStore initialization if it doesn't exist yet. The DynamoDbMetadataStore can be used as a cloud-based checkpointStore for the KinesisMessageDrivenChannelAdapter.

Starting with version 2.0, the DynamoDbMetadataStore can be configured with the timeToLive option to enable the DynamoDB TTL feature. An expireAt attribute is added to each item, with a value based on the sum of the current time and the provided timeToLive in seconds. If the provided timeToLive value is non-positive, the TTL functionality is disabled on the table.
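
A minimal configuration sketch, assuming a DynamoDbAsyncClient bean and illustrative table name and TTL values:

@Bean
public ConcurrentMetadataStore dynamoDbMetadataStore(DynamoDbAsyncClient dynamoDb) {
    DynamoDbMetadataStore metadataStore = new DynamoDbMetadataStore(dynamoDb, "MyMetadataTable");
    metadataStore.setTimeToLive(24 * 60 * 60); // expire items after one day; the value is an assumption
    return metadataStore;
}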

Amazon Kinesis

Amazon Kinesis is a platform for streaming data on AWS, making it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.

Inbound Channel Adapter

The KinesisMessageDrivenChannelAdapter is an extension of the MessageProducerSupport - an event-driven channel adapter.

See the KinesisMessageDrivenChannelAdapter JavaDocs and its setters for more information on how to use and configure it in an application for Kinesis stream ingestion.

The Java Configuration is pretty simple:

@SpringBootApplication
public class MyConfiguration {

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisInboundChannelAdapter(KinesisAsyncClient amazonKinesis) {
        KinesisMessageDrivenChannelAdapter adapter =
            new KinesisMessageDrivenChannelAdapter(amazonKinesis, "MY_STREAM");
        adapter.setOutputChannel(kinesisReceiveChannel()); // kinesisReceiveChannel() is an assumed channel bean
        return adapter;
    }
}

This channel adapter can be configured with the DynamoDbMetadataStore mentioned above to track sequence checkpoints for shards in a cloud environment where several instances of the Kinesis application are running. By default, this adapter uses a DeserializingConverter to convert the byte[] from the Record data. The converter can be specified as null, meaning no conversion, in which case the target Message is sent with a byte[] payload.

Additional headers, such as AwsHeaders.RECEIVED_STREAM, AwsHeaders.SHARD, AwsHeaders.RECEIVED_PARTITION_KEY and AwsHeaders.RECEIVED_SEQUENCE_NUMBER, are populated on the message for downstream logic. When CheckpointMode.manual is used, a Checkpointer instance is populated in the AwsHeaders.CHECKPOINTER header for manual acknowledgment in the downstream logic.
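
A hedged sketch of manual acknowledgment in a downstream endpoint, assuming CheckpointMode.manual has been set on the adapter:

@ServiceActivator(inputChannel = "kinesisReceiveChannel")
public void handleRecord(Message<?> message) {
    // ... process the record payload ...
    Checkpointer checkpointer =
            message.getHeaders().get(AwsHeaders.CHECKPOINTER, Checkpointer.class);
    checkpointer.checkpoint(); // acknowledge only after successful processing
}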

The KinesisMessageDrivenChannelAdapter can be configured with the ListenerMode record or batch to process records one by one or to send the entire just-polled batch of records. If the Converter is configured to null, the entire List<Record> is sent as the payload. Otherwise, a list of converted Record.getData().array() values is wrapped in the payload of the message to send. In this case, the AwsHeaders.RECEIVED_PARTITION_KEY and AwsHeaders.RECEIVED_SEQUENCE_NUMBER headers contain List<String> values of the partition keys and sequence numbers of the converted records, respectively.

The consumer group is included in the metadata store key. When records are consumed, they are filtered against the last stored checkpoint under the key in the form [CONSUMER_GROUP]:[STREAM]:[SHARD_ID].
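
Putting these options together, a hedged sketch that assigns a consumer group and the DynamoDB-backed checkpoint store to the adapter (group, stream and channel names are assumptions):

@Bean
public KinesisMessageDrivenChannelAdapter groupedKinesisInboundChannelAdapter(
        KinesisAsyncClient amazonKinesis, ConcurrentMetadataStore dynamoDbMetadataStore) {
    KinesisMessageDrivenChannelAdapter adapter =
            new KinesisMessageDrivenChannelAdapter(amazonKinesis, "MY_STREAM");
    adapter.setConsumerGroup("my-consumer-group");
    // checkpoints are stored under [CONSUMER_GROUP]:[STREAM]:[SHARD_ID]
    adapter.setCheckpointStore(dynamoDbMetadataStore);
    adapter.setOutputChannel(kinesisReceiveChannel()); // assumed channel bean
    return adapter;
}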

Starting with version 2.0, the KinesisMessageDrivenChannelAdapter can be configured with an InboundMessageMapper to extract message headers embedded in the record data (if any). See the EmbeddedJsonHeadersMessageMapper implementation for more information. When an InboundMessageMapper is used together with ListenerMode.batch, each Record is converted to a Message with the extracted embedded headers (if any) and, if a converter is present, the converted byte[] payload. In this case, the AwsHeaders.RECEIVED_PARTITION_KEY and AwsHeaders.RECEIVED_SEQUENCE_NUMBER headers are populated on the particular message for a record. These messages are then wrapped into a list payload of one outbound message.

Starting with version 2.0, the KinesisMessageDrivenChannelAdapter can be configured with a LockRegistry for leader selection over the provided shards, or over those derived from the provided streams. The KinesisMessageDrivenChannelAdapter iterates over its shards and tries to acquire a distributed lock for each shard in its consumer group. If a LockRegistry is not provided, no exclusive locking happens and all the shards are consumed by this KinesisMessageDrivenChannelAdapter. See also the DynamoDbLockRegistry for more information.

The KinesisMessageDrivenChannelAdapter can be configured with a Function<List<Shard>, List<Shard>> shardListFilter to filter the available, open, non-exhausted shards. This filter Function will be called each time the shard list is refreshed.

For example, users may want to fully read any parent shards before starting to read their child shards. This could be achieved as follows:

    openShards -> {
        Set<String> openShardIds = openShards.stream().map(Shard::shardId).collect(Collectors.toSet());
        // only return open shards which have no parent available for reading
        return openShards.stream()
                .filter(shard -> !openShardIds.contains(shard.parentShardId())
                        && !openShardIds.contains(shard.adjacentParentShardId()))
                .collect(Collectors.toList());
    }

Starting with version 3.0, any exception thrown from record processing may lead to the shard iterator being rewound to the latest check-pointed sequence, or to the first one in the currently failed batch. This ensures at-least-once delivery for possibly failed records. If the latest checkpoint is equal to the highest sequence in the batch, the shard consumer continues with the next iterator.

Also, the KclMessageDrivenChannelAdapter is provided for consuming streams with the Kinesis Client Library (KCL). See its JavaDocs for more information.

Outbound Channel Adapter

The KinesisMessageHandler is an AbstractMessageHandler for putting records into a Kinesis stream. The stream, partition key (or explicit hash key) and sequence number can be determined against the request message by evaluating the provided expressions, or can be specified statically. They can also be provided via the AwsHeaders.STREAM, AwsHeaders.PARTITION_KEY and AwsHeaders.SEQUENCE_NUMBER message headers, respectively.

The KinesisMessageHandler can be configured with an outputChannel for sending a Message on a successful put operation. The payload is the original request, and the additional AwsHeaders.SHARD and AwsHeaders.SEQUENCE_NUMBER headers are populated from the PutRecordResult. If the request payload is a PutRecordsRequest, the full PutRecordsResult is populated in the AwsHeaders.SERVICE_RESULT header instead.

When an asynchronous failure happens on the put operation, an ErrorMessage is sent to the channel from the errorChannel header, or to the global error channel. The payload is an AwsRequestFailureException.

The payload of the request message can be:

  • a PutRecordsRequest, to perform KinesisAsyncClient.putRecords;
  • a PutRecordRequest, to perform KinesisAsyncClient.putRecord;
  • a ByteBuffer, to represent the data of a PutRecordRequest;
  • a byte[], which is wrapped into a ByteBuffer;
  • any other type, which is converted to a byte[] by the provided Converter; the SerializingConverter is used by default.

The Java Configuration for the message handler:

@Bean
@ServiceActivator(inputChannel = "kinesisSendChannel")
public MessageHandler kinesisMessageHandler(KinesisAsyncClient amazonKinesis,
                                            MessageChannel channel) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setPartitionKey("1");
    kinesisMessageHandler.setOutputChannel(channel);
    return kinesisMessageHandler;
}

Starting with version 2.0, the KinesisMessageHandler can be configured with an OutboundMessageMapper to embed message headers into the record data alongside the payload. See the EmbeddedJsonHeadersMessageMapper implementation for more information.
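
For example, a hedged sketch that embeds matching headers into the record data (the stream, partition key and header pattern are assumptions):

@Bean
@ServiceActivator(inputChannel = "kinesisSendChannel")
public MessageHandler kinesisMessageHandlerWithEmbeddedHeaders(KinesisAsyncClient amazonKinesis) {
    KinesisMessageHandler handler = new KinesisMessageHandler(amazonKinesis);
    handler.setStream("MY_STREAM");
    handler.setPartitionKey("1");
    // embed headers matching the pattern into the record body as JSON
    handler.setEmbeddedHeadersMapper(new EmbeddedJsonHeadersMessageMapper("myHeader*"));
    return handler;
}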

Also, the KplMessageHandler is provided for producing records to Kinesis streams via the Kinesis Producer Library (KPL).

Lock Registry for Amazon DynamoDB

Starting with version 2.0, the DynamoDbLockRegistry implementation is available. Certain components (for example aggregator and resequencer) use a lock obtained from a LockRegistry instance to ensure that only one thread is manipulating a group at a time. The DefaultLockRegistry performs this function within a single component; you can now configure an external lock registry on these components. When used with a shared MessageGroupStore, the DynamoDbLockRegistry can be used to provide this functionality across multiple application instances, such that only one instance can manipulate the group at a time.
This implementation can also be used for distributed leader election via a LockRegistryLeaderInitiator.
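
A hedged configuration sketch, assuming the version 3.0 arrangement where the registry is backed by a DynamoDbLockRepository:

@Bean
public DynamoDbLockRepository dynamoDbLockRepository(DynamoDbAsyncClient dynamoDB) {
    return new DynamoDbLockRepository(dynamoDB);
}

@Bean
public DynamoDbLockRegistry dynamoDbLockRegistry(DynamoDbLockRepository lockRepository) {
    return new DynamoDbLockRegistry(lockRepository);
}

@Bean
public LockRegistryLeaderInitiator leaderInitiator(DynamoDbLockRegistry lockRegistry) {
    // starts leader election; only the current lock holder acts as leader
    return new LockRegistryLeaderInitiator(lockRegistry);
}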

Testing

The tests in the project are performed via Testcontainers and the LocalStack image. See the LocalstackContainerTest interface JavaDocs for more information.
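
Outside this project's test harness, a LocalStack container can be wired up with Testcontainers along these lines (a minimal sketch; the selected services and client wiring are assumptions):

static LocalStackContainer localStack =
        new LocalStackContainer(DockerImageName.parse("localstack/localstack"))
                .withServices(LocalStackContainer.Service.S3, LocalStackContainer.Service.SQS);

static S3Client s3Client() {
    return S3Client.builder()
            .endpointOverride(localStack.getEndpoint())
            .region(Region.of(localStack.getRegion()))
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(localStack.getAccessKey(), localStack.getSecretKey())))
            .build();
}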


spring-integration-aws's Issues

SimpleJsonSerializer throws NPE

Caused by: org.springframework.messaging.MessageHandlingException: nested exception is java.lang.NullPointerException
	at org.springframework.integration.handler.LambdaMessageProcessor.processMessage(LambdaMessageProcessor.java:131)
	at org.springframework.integration.transformer.AbstractMessageProcessingTransformer.transform(AbstractMessageProcessingTransformer.java:90)
	at org.springframework.integration.transformer.MessageTransformingHandler.handleRequestMessage(MessageTransformingHandler.java:89)
	... 103 common frames omitted
Caused by: java.lang.NullPointerException: null
	at org.springframework.integration.json.SimpleJsonSerializer.toElement(SimpleJsonSerializer.java:92)
	at org.springframework.integration.json.SimpleJsonSerializer.toJson(SimpleJsonSerializer.java:74)
	at org.springframework.integration.file.remote.AbstractFileInfo.toJson(AbstractFileInfo.java:60)
	at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.doReceive(AbstractRemoteFileStreamingMessageSource.java:164)
	at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:141)

The workaround is to set S3StreamingMessageSource.setFileInfoJson(false) but this shouldn't be happening. I've verified that Jackson can serialize it just fine, so the problem is with SimpleJsonSerializer. Can we just use Jackson since it's ubiquitous in Spring, instead of inventing a new JSON serializer?

DynamoDbMetaDataStore defaults read and write capacity to a high value

The cost for 10000 read and write capacity units is very high on AWS.

For someone testing this class, it would be more reasonable to set a very low default capacity of 1 and leave it to the user to scale the capacity based on need.

Would it also be reasonable to add default auto scaling to the provisioned table, or at least allow that to be set?

Infinite ShardConsumers created on KinesisMessageDrivenChannelAdapter startup

I see an infinite (it would seem) number of ShardConsumers created on startup:

INFO  | 2017-03-31 13:51:00 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-13] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@6e94a58c] has been started.
INFO  | 2017-03-31 13:51:01 | [kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer (KinesisMessageDrivenChannelAdapter.java:726) - Stopping the [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='app-private', shard='shardId-000000000061', reset=false}, state=STOP}] because the shard has been CLOSED and exhausted.
INFO  | 2017-03-31 13:51:01 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-13] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@fcbb7c4] has been started.
INFO  | 2017-03-31 13:51:03 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-4] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@78906e1a] has been started.
INFO  | 2017-03-31 13:51:03 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-8] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@3fd30028] has been started.
INFO  | 2017-03-31 13:51:04 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-2] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@195396b6] has been started.
INFO  | 2017-03-31 13:51:05 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-8] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@3354c4d5] has been started.
INFO  | 2017-03-31 13:51:05 | [kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer (KinesisMessageDrivenChannelAdapter.java:726) - Stopping the [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='app-private', shard='shardId-000000000061', reset=false}, state=STOP}] because the shard has been CLOSED and exhausted.
INFO  | 2017-03-31 13:51:05 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-2] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@59f3a31] has been started.
INFO  | 2017-03-31 13:51:06 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@12a4c0a0] has been started.
INFO  | 2017-03-31 13:51:06 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-13] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@7eb3dfa2] has been started.
INFO  | 2017-03-31 13:51:07 | [kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer (KinesisMessageDrivenChannelAdapter.java:726) - Stopping the [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='app-private', shard='shardId-000000000061', reset=false}, state=STOP}] because the shard has been CLOSED and exhausted.
INFO  | 2017-03-31 13:51:07 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-2] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@54c97ea9] has been started.
INFO  | 2017-03-31 13:51:08 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-4] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@2be0450e] has been started.
INFO  | 2017-03-31 13:51:09 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-5] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@50877f20] has been started.
INFO  | 2017-03-31 13:51:10 | [kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer (KinesisMessageDrivenChannelAdapter.java:726) - Stopping the [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='app-private', shard='shardId-000000000061', reset=false}, state=STOP}] because the shard has been CLOSED and exhausted.
INFO  | 2017-03-31 13:51:10 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@3b10db69] has been started.
INFO  | 2017-03-31 13:51:12 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@4495270c] has been started.
INFO  | 2017-03-31 13:51:13 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-2] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@52f01244] has been started.
INFO  | 2017-03-31 13:51:14 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:683) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@56137b70] has been started.
.
.
.

.etc.

Here's my bean definition:

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisMessageDrivenChannelAdapter(AmazonKinesis amazonKinesis) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(amazonKinesis, appSettings.getKinesis().getStreamName());
        adapter.setOutputChannel(kinesisChannel());
        adapter.setCheckpointStore(checkpointStore());
        adapter.setCheckpointMode(CheckpointMode.batch);
        adapter.setListenerMode(ListenerMode.batch);
        adapter.setStartTimeout(10000);
        adapter.setConsumerGroup(appSettings.getKinesis().getConsumerGroup());
        return adapter;
    }

Currently running with 8 open and 4 closed shards.

Runtime error in spring-cloud-stream-samples Kinesis-produce-consume

Hi,

I've started encountering a runtime issue in with the latest SNAPSHOT of spring-integration-aws

Caused by: java.lang.NoSuchMethodError: org.springframework.integration.aws.metadata.DynamoDbMetaDataStore.setReadCapacity(Ljava/lang/Long;)V
        at org.springframework.cloud.stream.binder.kinesis.config.KinesisBinderConfiguration.kinesisCheckpointStore(KinesisBinderConfiguration.java:94)
        at org.springframework.cloud.stream.binder.kinesis.config.KinesisBinderConfiguration$$EnhancerBySpringCGLIB$$c8f0714c.CGLIB$kinesisCheckpointStore$3(<generated>)
        at org.springframework.cloud.stream.binder.kinesis.config.KinesisBinderConfiguration$$EnhancerBySpringCGLIB$$c8f0714c$$FastClassBySpringCGLIB$$6d53d50d.invoke(<generated>)
        at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)

I wonder if this may be due to a change in method signature in commit 43754e1

-	public void setReadCapacity(Long readCapacity) {
 +	public void setReadCapacity(long readCapacity) {

In spring-cloud-stream-binder-aws-kinesis the method is invoked as

kinesisCheckpointStore.setReadCapacity(this.configurationProperties.getCheckpoint().getReadCapacity());

Where getReadCapacity returns a Long

Could this be causing the problem?

Thanks,
Peter.

Milestones

Hi!

Can you please add an approximate milestone for this project? (And also for the kinesis binder project?)

Configure SQS integration via spring-integration java DSL

I have a scenario where I have to implement some kind of dispatcher component that will consume messages from one SQS queue, split them, process them and send them to other SQS queues.
The latest version of spring-integration has a java DSL to create integration flows, and I think it's a good match for my use case.
But I'm wondering if spring-integration-aws is adapted for that?

An IntegrationFlow requires a MessageSource and a poller to start consuming messages from a queue, but I see that spring-integration-aws provides a MessageProducer.
So does that mean I have to implement my own MessageSource with the sqs sdk under the hood?
Also, it looks like I have to implement my own redrive strategy, etc.

So are there any benefits in using spring-integration-aws to integrate SQS with an IntegrationFlow?
If yes, are there any good examples of how to use it?

Add request/reply functionality for the SqsMessageHandler

The AmazonSQSAsync API is like:

java.util.concurrent.Future<SendMessageResult> sendMessageAsync(SendMessageRequest sendMessageRequest);

The result can be useful for target applications.

One option is to introduce an SQS Outbound Gateway (the same SqsMessageHandler can just extend AbstractReplyProducingMessageHandler).
The other option is to allow injecting an AsyncHandler<? extends AmazonWebServiceRequest, ?>.

This would lead to dropping the Spring Cloud AWS foundation from the SqsMessageHandler implementation in favor of more organic integration with the AWS SDK directly.

See spring-attic/spring-cloud-aws#173 for some related info.

Support US_East_2 Region

The awsSdkVersion needs to be updated to a minimum of 1.11.44 in order to support the new us-east-2 region

Implement Kinesis checkpointing in DynamoDB

It would be helpful to have a Kinesis checkpointer that stores data in DynamoDB. It would be especially helpful if the implementation were to store data in the same format as the KCL, to allow seamless switching between implementations. Here's some sample data:

"leaseKey (S)","checkpoint (S)","checkpointSubSequenceNumber (N)","leaseCounter (N)","leaseOwner (S)","ownerSwitchesSinceCheckpoint (N)","parentShardId (SS)"
"shardId-000000000109","49571904859951697672938815047093113543517771024281110226","0","71393","10.38.86.80","0","{ ""shardId-000000000084"" } "
"shardId-000000000119","49571948804417539695075278996921415033704050165416986482","0","70195","10.38.86.80","0","{ ""shardId-000000000089"" } "
"shardId-000000000120","49571948804439840440273809622512234462515941495485237122","0","76384","10.38.87.213","0","{ ""shardId-000000000089"" } "
"shardId-000000000105","49571904855937563537202789633031602379567003501741475474","0","69043","10.38.85.92","0","{ ""shardId-000000000082"" } "
"shardId-000000000118","49571948804395238949876748376231580877609874030822688610","0","72186","10.38.87.29","0","{ ""shardId-000000000088"" } "
"shardId-000000000106","49571904855959864282401833504447526037072754616811128482","0","68828","10.38.86.80","0","{ ""shardId-000000000082"" } "
"shardId-000000000092","49571931792806982135555692661727595663705227010638874050","0","70120","10.38.87.213","0","{ ""shardId-000000000075"" } "
"shardId-000000000104","49571904854488015099298299601302960587814708012751259266","0","70528","10.38.86.80","0","{ ""shardId-000000000081"" } "

Note that the parent shards are referenced in the case of a re-sharding.

Each consumer group results in its own table.

Cannot set auto start up on SqsMessageDrivenChannelAdapter

Can we add the ability to set the auto startup value on the SqsMessageDrivenChannelAdapter and, in turn, within the SimpleMessageListenerContainerFactory?

This is a holdup for my rollout. I have a branch with this fix if you would like a pull request.

SqsMessageHandler no longer transforms Message headers

Hi,
in the 2.0.0.M1 version, the SqsMessageHandler no longer transforms org.springframework.messaging.Message headers into SendMessageRequest headers. In that case, any header I set in a previous integration flow is not transferred into the SQS message headers. E.g., when I use MessageHeaders.CONTENT_TYPE, or Spring Cloud Sleuth headers, they are not propagated.

Configuring sqs-message-driven-channel-adapter with FIFO queue results in AmazonServiceException

Defining my bean as follows:

<int-aws:sqs-message-driven-channel-adapter
	id="sqs-message-driven-channel-adapter-foo" sqs="fooSqs"
	queues="foo.fifo" max-number-of-messages="5"
	visibility-timeout="200" wait-time-out="10" send-timeout="2000"
	channel="foo-sqs" />

Defining fooSqs as follows:

@Bean(name = "fooSqs")
public AmazonSQSAsync amazonSQSAsyncClient() {

	return AmazonSQSAsyncClientBuilder.standard().withClientConfiguration(clientConfiguration())
			.withCredentials(new ProfileCredentialsProvider("PROFILE_FOO"))
			.withRegion(Regions.US_EAST_1).build();
}

Upon startup of my application, the following exception is thrown:

[INFO] [talledLocalContainer] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sqs-message-driven-channel-adapter-foo': Invocation of init method failed; nested exception is org.springframework.beans.factory.BeanCreationException: Cannot instantiate 'SimpleMessageListenerContainer'; nested exception is com.amazonaws.AmazonServiceException: All requests to this queue must use HTTPS and SigV4. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidSecurity; Request ID: a00e613f-243b-5e48-8939-30e9d1ca5485)
[INFO] [talledLocalContainer] 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1578)
[INFO] [talledLocalContainer] 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545)
[INFO] [talledLocalContainer] 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
...

I have previously observed this working with Standard queues, and in the log I have noticed that all requests to the queue are sent over HTTP.

Should spring-integration-aws depend on spring-cloud-aws?

I'm doing some integration testing for GH-41 and when I try to deploy my work as a servlet, I see:

Caused by: java.lang.ClassNotFoundException: org.springframework.cloud.aws.core.env.ResourceIdResolver

This is a spring-cloud-aws-core class. There is no transitive dependency today from spring-integration-aws to this project. Prior to working off the HEAD of spring-integration-aws, I had a project dependency directly on aws-sdk and everything worked fine. I manually added the spring-cloud-aws-core dependency to fix the above problem, which required me to remove my aws-sdk dependency (version conflicts) and add spring-cloud-aws-messaging (another missing class).

Should I be adding spring-cloud-aws-* dependencies to my project to resolve missing classes, or should the spring-integration-aws project include as gradle dependencies? Or am I missing some other piece of setup that would resolve this cleanly?

Unmanaged beans w/ AmazonS3MessageHandler

Hi @garyrussell and @artembilan

While using AmazonS3MessageHandler in a Spring XD module, I'm running into exceptions.

In AmazonS3MessageHandler.handleMessageInternal(), the ExpressionEvaluatingMessageProcessor.processMessage() called here expects a BeanFactory. Because this is not a managed bean (see creation here), you get a RuntimeException complaining of "No beanfactory".

The same can also occur with AmazonS3MessageHandler.setFileNameGenerator(); however, setters exist to provide a managed bean.

I was thinking instead of creating the ExpressionEvaluatingMessageProcessor, it could be passed in.

I'd be happy to issue a PR. What do you think?

I also have a sample XD module that exhibits the behavior if you'd like me to share.

Thanks!

Kinesis consumer thread exits after JSON deserialization exception?

Hi,

I've started a new app using spring-cloud-stream-binder-aws-kinesis. One issue I'm running into if I try to receive JSON objects is that parsing errors result in a runtime exception that causes the consumer thread to die. I'm using spring.cloud.stream.bindings.input.consumer.concurrency=5, and can see five consumer threads logging every second:

2018-03-01 13:58:23.572 DEBUG 5968 --- [-kinesis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : No records for [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=AFTER_SEQUENCE_NUMBER, sequenceNumber='49582162487825732185088604198809382927677166286909997106', timestamp=null, stream='TestStream', shard='shardId-000000000003', reset=false}, state=CONSUME}] on sequenceNumber [null]. Suspend consuming for [1000] milliseconds.

But after sending in a bad message, the consumer fails and never runs again. The other 4 continue to run, but this particular shard ends up with no consumers reading from it. Bad juju.

Example stacktrace:

2018-03-01 13:58:38.713 ERROR 5968 --- [-kinesis-consumer-1] o.s.integration.handler.LoggingHandler   : org.springframework.messaging.converter.MessageConversionException: Could not read JSON: Unexpected character ('r' (code 114)): was expecting double-quote to start field name
 at [Source: (byte[])"{ref: "XYZ123"}"; line: 1, column: 3]; nested exception is com.fasterxml.jackson.core.JsonParseException: Unexpected character ('r' (code 114)): was expecting double-quote to start field name
 at [Source: (byte[])"{reloc: "XYZ123"}"; line: 1, column: 3], failedMessage=GenericMessage [payload=byte[17], headers={aws_shard=shardId-000000000003, id=03d0e8c4-e9ee-d1a2-8188-08c13baadacf, contentType=application/json, aws_receivedStream=TestStream, aws_receivedPartitionKey=1, aws_receivedSequenceNumber=49582162487825732185088604198810591853496823934477140018, timestamp=1519876718698}]
	at org.springframework.messaging.converter.MappingJackson2MessageConverter.convertFromInternal(MappingJackson2MessageConverter.java:235)
	at org.springframework.cloud.stream.converter.ApplicationJsonMessageMarshallingConverter.convertFromInternal(ApplicationJsonMessageMarshallingConverter.java:86)
	at org.springframework.messaging.converter.AbstractMessageConverter.fromMessage(AbstractMessageConverter.java:181)
	at org.springframework.messaging.converter.CompositeMessageConverter.fromMessage(CompositeMessageConverter.java:70)
	at org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver.resolveArgument(PayloadArgumentResolver.java:137)
	at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:116)
	at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:137)
	at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:109)
	at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
	at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
	at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:141)
	at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
	at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132)
	at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105)
	at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:438)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:388)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:181)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:160)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
	at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:108)
	at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:197)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter.access$3600(KinesisMessageDrivenChannelAdapter.java:86)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.performSend(KinesisMessageDrivenChannelAdapter.java:957)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.processRecords(KinesisMessageDrivenChannelAdapter.java:928)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.lambda$processTask$1(KinesisMessageDrivenChannelAdapter.java:835)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ConsumerInvoker.run(KinesisMessageDrivenChannelAdapter.java:1018)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)

I will probably work around this by receiving the message as a string and doing the JSON conversion manually, but it would be nice if the direct translation worked and handled errors without killing the consumer.

This is in spring-integration-aws-2.0.0.M1. I haven't tried other versions.

InvalidDigest

When uploading to S3, I seem to always get an "InvalidDigest: The Content-MD5 you specified was invalid" exception back. The one being passed to Amazon seems to be the same as the one I get running md5sum on the temporary file written from my byte array. It seems like the issue is that it's using the wrong format for the MD5, though. Instead of hex, like the command-line md5sum application, it needs to be base64 encoded, more like the output of 'openssl dgst -binary myfile | base64'.

S3InboundFileSynchronizer failing when processing through S3 directories

I'm trying to use the S3InboundFileSynchronizer to synchronize an S3Bucket to a local directory. The bucket is organised with sub-directories such as:

bucket ->
            2016 ->
                       08 ->
                          daily-report-20160801.csv
                          daily-report-20160802.csv

etc...

Using this configuration:

    public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
        S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3());
        synchronizer.setDeleteRemoteFiles(true);
        synchronizer.setPreserveTimestamp(true);
        synchronizer.setRemoteDirectory("REDACTED");
        synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.csv$"));
        Expression expression = PARSER.parseExpression("#this.substring(#this.lastIndexOf('/')+1)");
        synchronizer.setLocalFilenameGeneratorExpression(expression);
        return synchronizer;
    }

I'm able to get as far as connecting to the bucket and listing its contents. When it comes time to read from the bucket the following exception is thrown:

org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is java.lang.IllegalStateException: 'path' must in pattern [BUCKET/KEY].
    at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:266)

Reviewing the code, it seems that it'd be impossible to ever synchronize an S3 bucket with sub-directories:

     private String[] splitPathToBucketAndKey(String path) {
        Assert.hasText(path, "'path' must not be empty String.");
        String[] bucketKey = path.split("/");
        Assert.state(bucketKey.length == 2, "'path' must in pattern [BUCKET/KEY].");
        Assert.state(bucketKey[0].length() >= 3, "S3 bucket name must be at least 3 characters long.");
        bucketKey[0] = resolveBucket(bucketKey[0]);
        return bucketKey;
    }

Is there some configuration I'm missing or is this a bug?

Consumer checks for new incoming messages every 9± seconds

I synchronously produced elements from the KPL, and consumed them concurrently from the KCL and Spring Kinesis in different projects. It seems that the KCL receives an element (single) with a delay of 1-3 seconds at most, but Spring Kinesis receives elements (many) at a fixed delay of 9 seconds.

My KPL & KCL and Spring Kinesis projects will be provided by mail due to security restrictions of my organization.

Spring kinesis pom:

    <parent>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-build</artifactId>
        <version>2.0.0.BUILD-SNAPSHOT</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream-binder-kinesis</artifactId>
            <version>1.0.0.BUILD-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream</artifactId>
            <version>2.0.0.BUILD-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </dependency>
    </dependencies>
<dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
<repositories>
        <repository>
            <id>spring-milestones</id>
            <name>Spring Milestones</name>
            <url>https://repo.spring.io/milestone</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>

Spring Kinesis application properties:

spring.cloud.stream.bindings.input-channel-from-kinesis.destination=stava1
spring.cloud.stream.bindings.input-channel-from-kinesis.content-type=application/json
spring.cloud.stream.kinesis.bindings.input-channel-from-kinesis.consumer.idleBetweenPolls=250
spring.cloud.stream.kinesis.bindings.input-channel-from-kinesis.consumer.checkpoint-mode=manual
spring.cloud.stream.kinesis.bindings.input-channel-from-kinesis.consumer.consumer-backoff=250
spring.cloud.stream.kinesis.bindings.input-channel-from-kinesis.consumer.records-limit=1000
spring.cloud.stream.bindings.input-channel-from-kinesis.consumer.concurrency=10
spring.cloud.stream.bindings.input-channel-from-kinesis.consumer.back-off-initial-interval=250
spring.cloud.stream.bindings.input-channel-from-kinesis.consumer.back-off-max-interval=250
spring.cloud.stream.bindings.input-channel-from-kinesis.consumer.back-off-multiplier=1.0

spring-cloud-stream-kinesis-consumer project output:

current time: 12:20:52.335, time from last batch: 68681, messsages until now: 0 - kinesis consumer received data: 0, Data:  Time:12:20:45.891
current time: 12:21:01.727, time from last batch: 9392, messsages until now: 1 - kinesis consumer received data: 1, Data:  Time:12:20:49.159
current time: 12:21:01.727, time from last batch: 16, messsages until now: 2 - kinesis consumer received data: 2, Data:  Time:12:20:50.492
current time: 12:21:01.743, time from last batch: 0, messsages until now: 3 - kinesis consumer received data: 3, Data:  Time:12:20:51.881
current time: 12:21:01.743, time from last batch: 0, messsages until now: 4 - kinesis consumer received data: 4, Data:  Time:12:20:53.224
current time: 12:21:01.743, time from last batch: 0, messsages until now: 5 - kinesis consumer received data: 5, Data:  Time:12:20:54.566
current time: 12:21:01.743, time from last batch: 0, messsages until now: 6 - kinesis consumer received data: 6, Data:  Time:12:20:55.889
current time: 12:21:10.764, time from last batch: 9021, messsages until now: 7 - kinesis consumer received data: 7, Data:  Time:12:20:57.241
current time: 12:21:10.779, time from last batch: 15, messsages until now: 8 - kinesis consumer received data: 8, Data:  Time:12:20:58.579
current time: 12:21:10.779, time from last batch: 0, messsages until now: 9 - kinesis consumer received data: 9, Data:  Time:12:20:59.979
current time: 12:21:10.779, time from last batch: 0, messsages until now: 10 - kinesis consumer received data: 10, Data:  Time:12:21:01.298
current time: 12:21:10.779, time from last batch: 0, messsages until now: 11 - kinesis consumer received data: 11, Data:  Time:12:21:02.682
current time: 12:21:10.779, time from last batch: 0, messsages until now: 12 - kinesis consumer received data: 12, Data:  Time:12:21:04.025
current time: 12:21:10.779, time from last batch: 0, messsages until now: 13 - kinesis consumer received data: 13, Data:  Time:12:21:05.400
current time: 12:21:19.762, time from last batch: 8983, messsages until now: 14 - kinesis consumer received data: 14, Data:  Time:12:21:06.716
current time: 12:21:19.762, time from last batch: 0, messsages until now: 15 - kinesis consumer received data: 15, Data:  Time:12:21:08.069
current time: 12:21:19.762, time from last batch: 0, messsages until now: 16 - kinesis consumer received data: 16, Data:  Time:12:21:09.377
current time: 12:21:19.762, time from last batch: 0, messsages until now: 17 - kinesis consumer received data: 17, Data:  Time:12:21:10.732
current time: 12:21:19.762, time from last batch: 0, messsages until now: 18 - kinesis consumer received data: 18, Data:  Time:12:21:12.075
current time: 12:21:19.762, time from last batch: 0, messsages until now: 19 - kinesis consumer received data: 19, Data:  Time:12:21:13.432
current time: 12:21:19.762, time from last batch: 0, messsages until now: 20 - kinesis consumer received data: 20, Data:  Time:12:21:14.783
current time: 12:21:28.679, time from last batch: 8917, messsages until now: 21 - kinesis consumer received data: 21, Data:  Time:12:21:16.106
current time: 12:21:28.680, time from last batch: 1, messsages until now: 22 - kinesis consumer received data: 22, Data:  Time:12:21:17.479
current time: 12:21:28.680, time from last batch: 0, messsages until now: 23 - kinesis consumer received data: 23, Data:  Time:12:21:18.796
current time: 12:21:28.680, time from last batch: 0, messsages until now: 24 - kinesis consumer received data: 24, Data:  Time:12:21:20.111
current time: 12:21:28.681, time from last batch: 1, messsages until now: 25 - kinesis consumer received data: 25, Data:  Time:12:21:21.467
current time: 12:21:28.681, time from last batch: 0, messsages until now: 26 - kinesis consumer received data: 26, Data:  Time:12:21:22.776
current time: 12:21:28.682, time from last batch: 1, messsages until now: 27 - kinesis consumer received data: 27, Data:  Time:12:21:24.114

kcl (consumer) project output:

current time: 12:20:49.943, time from last batch: 48747, messsages until now: 0 - kinesis consumer received data: 0, Data:  Time:12:20:45.891
current time: 12:20:49.943, time from last batch: 0, messsages until now: 1 - kinesis consumer received data: 1, Data:  Time:12:20:49.159
current time: 12:20:51.000, time from last batch: 1057, messsages until now: 2 - kinesis consumer received data: 2, Data:  Time:12:20:50.492
current time: 12:20:54.031, time from last batch: 3031, messsages until now: 3 - kinesis consumer received data: 3, Data:  Time:12:20:51.881
current time: 12:20:54.031, time from last batch: 0, messsages until now: 4 - kinesis consumer received data: 4, Data:  Time:12:20:53.224
current time: 12:20:55.042, time from last batch: 1011, messsages until now: 5 - kinesis consumer received data: 5, Data:  Time:12:20:54.566
current time: 12:20:58.041, time from last batch: 2999, messsages until now: 6 - kinesis consumer received data: 6, Data:  Time:12:20:55.889
current time: 12:20:58.041, time from last batch: 0, messsages until now: 7 - kinesis consumer received data: 7, Data:  Time:12:20:57.241
current time: 12:20:59.053, time from last batch: 1012, messsages until now: 8 - kinesis consumer received data: 8, Data:  Time:12:20:58.579
current time: 12:21:02.067, time from last batch: 3014, messsages until now: 9 - kinesis consumer received data: 9, Data:  Time:12:20:59.979
current time: 12:21:02.067, time from last batch: 0, messsages until now: 10 - kinesis consumer received data: 10, Data:  Time:12:21:01.298
current time: 12:21:03.094, time from last batch: 1027, messsages until now: 11 - kinesis consumer received data: 11, Data:  Time:12:21:02.682
current time: 12:21:06.254, time from last batch: 3160, messsages until now: 12 - kinesis consumer received data: 12, Data:  Time:12:21:04.025
current time: 12:21:06.254, time from last batch: 0, messsages until now: 13 - kinesis consumer received data: 13, Data:  Time:12:21:05.400
current time: 12:21:07.266, time from last batch: 1012, messsages until now: 14 - kinesis consumer received data: 14, Data:  Time:12:21:06.716
current time: 12:21:10.274, time from last batch: 3008, messsages until now: 15 - kinesis consumer received data: 15, Data:  Time:12:21:08.069
current time: 12:21:10.274, time from last batch: 0, messsages until now: 16 - kinesis consumer received data: 16, Data:  Time:12:21:09.377
current time: 12:21:11.267, time from last batch: 993, messsages until now: 17 - kinesis consumer received data: 17, Data:  Time:12:21:10.732
current time: 12:21:14.373, time from last batch: 3106, messsages until now: 18 - kinesis consumer received data: 18, Data:  Time:12:21:12.075
current time: 12:21:14.373, time from last batch: 0, messsages until now: 19 - kinesis consumer received data: 19, Data:  Time:12:21:13.432
current time: 12:21:15.317, time from last batch: 944, messsages until now: 20 - kinesis consumer received data: 20, Data:  Time:12:21:14.783
current time: 12:21:18.480, time from last batch: 3163, messsages until now: 21 - kinesis consumer received data: 21, Data:  Time:12:21:16.106
current time: 12:21:18.480, time from last batch: 0, messsages until now: 22 - kinesis consumer received data: 22, Data:  Time:12:21:17.479
current time: 12:21:19.480, time from last batch: 1000, messsages until now: 23 - kinesis consumer received data: 23, Data:  Time:12:21:18.796
current time: 12:21:20.468, time from last batch: 988, messsages until now: 24 - kinesis consumer received data: 24, Data:  Time:12:21:20.111
current time: 12:21:23.501, time from last batch: 3033, messsages until now: 25 - kinesis consumer received data: 25, Data:  Time:12:21:21.467
current time: 12:21:23.501, time from last batch: 0, messsages until now: 26 - kinesis consumer received data: 26, Data:  Time:12:21:22.776
current time: 12:21:24.537, time from last batch: 1036, messsages until now: 27 - kinesis consumer received data: 27, Data:  Time:12:21:24.114
current time: 12:21:27.536, time from last batch: 2999, messsages until now: 28 - kinesis consumer received data: 28, Data:  Time:12:21:25.437
current time: 12:21:27.536, time from last batch: 0, messsages until now: 29 - kinesis consumer received data: 29, Data:  Time:12:21:26.787
current time: 12:21:28.566, time from last batch: 1030, messsages until now: 30 - kinesis consumer received data: 30, Data:  Time:12:21:28.114
current time: 12:21:31.732, time from last batch: 3166, messsages until now: 31 - kinesis consumer received data: 31, Data:  Time:12:21:29.491
current time: 12:21:31.732, time from last batch: 0, messsages until now: 32 - kinesis consumer received data: 32, Data:  Time:12:21:30.835
current time: 12:21:32.738, time from last batch: 1006, messsages until now: 33 - kinesis consumer received data: 33, Data:  Time:12:21:32.180
current time: 12:21:35.749, time from last batch: 3011, messsages until now: 34 - kinesis consumer received data: 34, Data:  Time:12:21:33.529
current time: 12:21:35.749, time from last batch: 0, messsages until now: 35 - kinesis consumer received data: 35, Data:  Time:12:21:34.859
current time: 12:21:36.760, time from last batch: 1011, messsages until now: 36 - kinesis consumer received data: 36, Data:  Time:12:21:36.201

kpl (producer) project output: (No failures - all succeed after 1 attempt)

Record: 1 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 2 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 3 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 4 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 5 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 6 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 7 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 8 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 9 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 10 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 11 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 12 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 13 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 14 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 15 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 16 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 17 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 18 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 19 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 20 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 21 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 22 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 23 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 24 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 25 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 26 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 27 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 28 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 29 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 30 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 31 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 32 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 33 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 34 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 35 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 36 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 37 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 38 was put success, Put record into shard shardId-000000000000 attempts: 1
Record: 39 was put success, Put record into shard shardId-000000000000 attempts: 1

Incorrect Content-Type in SnsInboundChannelAdapter

By default, MappingJackson2HttpMessageConverter's only supported media type is application/json, but SNS notifications arrive as text/plain (even though their body is perfectly valid JSON).
See this example: http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html#SendMessageToHttp.subscribe

Could we add text/plain to the message converter configuration in SnsInboundChannelAdapter's constructor?

    this.jackson2HttpMessageConverter.setSupportedMediaTypes(
            ImmutableList.of(
                    new MediaType("application", "json", MappingJackson2HttpMessageConverter.DEFAULT_CHARSET),
                    new MediaType("text", "plain", MappingJackson2HttpMessageConverter.DEFAULT_CHARSET)));

otherwise it throws:

org.springframework.messaging.MessagingException: Could not convert request: no suitable HttpMessageConverter found for expected type [java.util.HashMap] and content type [text/plain;charset=UTF-8]

Would like to ask SqsMessageDrivenChannelAdapters which queues they manage

When our app starts or leaves service, we control SQS queue consumption by iterating over a list of SqsMessageDrivenChannelAdapter beans and calling start() or stop() respectively. This works really well because everyone gets this functionality for free, and no queues are ever left behind.

However, when we want to control a specific queue, we have to maintain mappings between the SqsMessageDrivenChannelAdapter bean and the queue it was created to manage, as there is no way to ask a SqsMessageDrivenChannelAdapter bean what queues it is responsible for managing.

I propose that we add a getter to SqsMessageDrivenChannelAdapter to allow this functionality:

    public String[] getQueues() {
        return Arrays.copyOf(this.queues, this.queues.length);
    }
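
With such an accessor, callers could locate the adapter responsible for a given queue without maintaining an external mapping. A hypothetical usage sketch (the adapter collection and queue name are placeholders):

    // stop only the adapter(s) managing the given queue
    public void stopAdapterFor(String queueName, Collection<SqsMessageDrivenChannelAdapter> adapters) {
        adapters.stream()
                .filter(adapter -> Arrays.asList(adapter.getQueues()).contains(queueName))
                .forEach(SqsMessageDrivenChannelAdapter::stop);
    }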

Alternatively, a generic "what do you manage?" may be a question that belongs higher in the object hierarchy than just SqsMessageDrivenChannelAdapter, as @twicksell was considering.

S3MessageHandler upload fails for File message.

I have configured an S3MessageHandler from spring-integration-aws to upload a File object to S3.

The upload fails with the following trace:

Caused by: com.amazonaws.AmazonClientException: Data read has a different length than the expected: dataLength=0; expectedLength=26; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
at com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:152)
...

Looking at the source code for S3MessageHandler, I'm not sure how uploading a File would ever succeed. When I trace its execution, the S3MessageHandler.upload() method does the following:

  • Creates a FileInputStream for the File.
  • Computes the MD5 hash for the file contents, using the input stream.
  • Resets the stream if it can be reset (not possible for FileInputStream).
  • Sets up the S3 transfer using the input stream. This fails because the stream is already at EOF, so the number of transferable bytes doesn't match the Content-Length header.
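
A minimal standalone illustration of the failure mode, using Spring's DigestUtils (the file path is a placeholder):

    try (InputStream in = new FileInputStream("/tmp/payload.txt")) {
        DigestUtils.md5Digest(in); // reads the stream to EOF to compute the hash
        in.reset();                // throws IOException: FileInputStream does not support mark/reset
    }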

Spring-cloud-sleuth support question

Hi,

I'm considering using spring-integration-aws for a project which sends SQS messages. I've also got spring-cloud-sleuth integrated in my app, and noticed that the headers injected into each message can only be mapped to SQS attributes for a limited set of types; unfortunately, Boolean and the sleuth Span type are not among them.

My question is whether there are any plans to enhance the handling of message attributes. Based on my reading of the code, we are pretty limited here by the AWS restrictions on attributes. Maybe someone has a novel approach that would stuff the non-well-known attributes somewhere in the message body. I can perhaps do this in a message transformer (see the sketch below), but would want to avoid it if there are thoughts on supporting this functionality inside the library.
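
For what it's worth, a hypothetical transformer sketch that coerces unsupported header types to Strings before the SQS send (the channel names are placeholders, and whether a String representation is acceptable depends on the consumer):

    @Transformer(inputChannel = "beforeSqs", outputChannel = "toSqs")
    public Message<?> coerceUnsupportedHeaders(Message<?> message) {
        MessageBuilder<?> builder = MessageBuilder.fromMessage(message);
        message.getHeaders().forEach((name, value) -> {
            boolean readOnly = MessageHeaders.ID.equals(name) || MessageHeaders.TIMESTAMP.equals(name);
            if (!readOnly && !(value instanceof String) && !(value instanceof Number)) {
                builder.setHeader(name, value.toString()); // e.g. Boolean or Span -> String
            }
        });
        return builder.build();
    }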

For reference, I was tipped off to this issue by the following logged messages:

2017-04-17 14:02:14.803 WARN [app-service,fd898f3246f5b160,761fb80acb6a6851,false] 3932 --- [cTaskExecutor-1] o.s.c.a.m.core.QueueMessageChannel : Message header with name 'X-Message-Sent' and type 'java.lang.Boolean' cannot be sent as message attribute because it is not supported by SQS.
2017-04-17 14:02:14.803 WARN [app-service,fd898f3246f5b160,761fb80acb6a6851,false] 3932 --- [cTaskExecutor-1] o.s.c.a.m.core.QueueMessageChannel : Message header with name 'messageSent' and type 'java.lang.Boolean' cannot be sent as message attribute because it is not supported by SQS.
2017-04-17 14:02:14.803 WARN [app-service,fd898f3246f5b160,761fb80acb6a6851,false] 3932 --- [cTaskExecutor-1] o.s.c.a.m.core.QueueMessageChannel : Message header with name 'currentSpan' and type 'org.springframework.cloud.sleuth.Span' cannot be sent as message attribute because it is not supported by SQS.
2017-04-17 14:02:14.804 WARN [app-service,fd898f3246f5b160,761fb80acb6a6851,false] 3932 --- [cTaskExecutor-1] o.s.c.a.m.core.QueueMessageChannel : Message header with name 'X-Current-Span' and type 'org.springframework.cloud.sleuth.Span' cannot be sent as message attribute because it is not supported by SQS.

Resolving spring-integration-aws as a dependency

Thanks @artembilan for this integration extension. Can you describe how to include it in a build?

I'd like to use the latest version supporting Spring Integration 4.0.2.RELEASE (or higher) -- ultimately to be bundled as a module in Spring XD.

I've included the dependency as such (I've also tried previous snapshot builds):

    compile(group: 'org.springframework.integration', name: 'spring-integration-aws', version: '0.5.0.BUILD-20141117.115052-8')

I've tried to follow the descriptions of the Spring repositories listed here:
https://github.com/spring-projects/spring-framework/wiki/Spring-repository-FAQ

I've also experimented with a variation of repositories in my build.gradle script (some I've tried):
maven { url "https://repo.spring.io/libs-snapshot-local" }
maven { url "https://repo.spring.io/libs-snapshot" }
maven { url "https://repo.spring.io/snapshot" }

I've also verified the dependency resolves correctly in the Artifactory repository browser, but I continue to fail to import this artifact and any of its transitive dependencies.

Can you point me in the right direction? Thanks!
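
For anyone else who lands here, a minimal build.gradle sketch, under the assumption that the Spring snapshot repository carries the artifact (coordinates taken from the question; adjust the version as needed):

    repositories {
        maven { url "https://repo.spring.io/snapshot" }
    }

    dependencies {
        compile "org.springframework.integration:spring-integration-aws:0.5.0.BUILD-SNAPSHOT"
    }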

Build issue

When I run gradle build with the clean build test options, the build fails with the error below:

    Caused by: org.gradle.api.InvalidUserDataException: Cannot expand ZIP 'C:\Users\xxxxx\git\spring-integration-aws\build\distributions\spring-integration-aws-1.0.0.BUILD-SNAPSHOT-schema.zip' as it does not exist.

Processing a record from Kinesis within KinesisMessageDrivenChannelAdapter raises a MessageConversionException

I have been working on an AWS kinesis binder for Spring cloud stream https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis. I have a test application that uses the binder to work with a Kinesis stream. The application successfully places a message on the stream and then through its consumer receives that message.

However, while receiving and consuming a message through KinesisMessageDrivenChannelAdapter.processRecords, an error is raised and the Kinesis consumer thread exits.

The message received looks like

2017-05-31 15:51:04 DEBUG o.s.i.a.i.k.KinesisMessageDrivenChannelAdapter - Processing records: [{SequenceNumber: 49573594932519422285515180003318172542847248863894437890,ApproximateArrivalTimestamp: Wed May 31 15:49:24 BST 2017,Data: java.nio.HeapByteBuffer[pos=0 lim=141 cap=141],PartitionKey: name}] for [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}, state=CONSUME}]

A converter is called to convert the message payload from JSON to a String.

The message within the application at this point is

GenericMessage [payload=byte[112], headers={aws_partitionKey=name, id=9da5972b-c5f8-91d3-32c9-13f474363b8e, aws_shard=shardId-000000000000, aws_stream=test_stream, aws_sequenceNumber=49573594932519422285515180003318172542847248863894437890, timestamp=1496242509589}]

The payload within that Generic message is

[123, 34, 115, 117, 98, 106, 101, 99, 116, 34, 58, 123, 34, 105, 100, 34, 58, 34, 49, 55, 48, 100, 54, 55, 101, 100, 45, 97, 49, 55, 57, 45, 52, 101, 53, 51, 45, 57, 57, 53, 48, 45, 57, 53, 102, 100, 52, 48, 50, 54, 97, 99, 97, 97, 34, 44, 34, 110, 97, 109, 101, 34, 58, 34, 116, 101, 115, 116, 79, 114, 100, 101, 114, 34, 125, 44, 34, 116, 121, 112, 101, 34, 58, 34, 79, 82, 68, 69, 82, 34, 44, 34, 111, 114, 105, 103, 105, 110, 97, 116, 111, 114, 34, 58, 34, 83, 119, 111, 114, 100, 34, 125]

which through a simple decode

    public void test() throws UnsupportedEncodingException {
        byte[] bytes = {123, 34, 115, 117, 98, 106, 101, 99, 116, 34, 58, 123, 34, 105, 100, 34, 58, 34, 49, 55, 48, 100, 54, 55, 101, 100, 45, 97, 49, 55, 57, 45, 52, 101, 53, 51, 45, 57, 57, 53, 48, 45, 57, 53, 102, 100, 52, 48, 50, 54, 97, 99, 97, 97, 34, 44, 34, 110, 97, 109, 101, 34, 58, 34, 116, 101, 115, 116, 79, 114, 100, 101, 114, 34, 125, 44, 34, 116, 121, 112, 101, 34, 58, 34, 79, 82, 68, 69, 82, 34, 44, 34, 111, 114, 105, 103, 105, 110, 97, 116, 111, 114, 34, 58, 34, 83, 119, 111, 114, 100, 34, 125};
        String doc2 = new String(bytes, "UTF-8");
        System.out.println(doc2);
    }

produces the correct message placed on the stream

{"subject":{"id":"170d67ed-a179-4e53-9950-95fd4026acaa","name":"testOrder"},"type":"ORDER","originator":"Sword"}

However, in the application, the MappingJackson2MessageConverter converter class raises an error during the conversion.

Within StringDeserializer the current token is START_OBJECT, which I would expect, since the first character is {.
Within the deserialize method, this results in a call to handleUnexpectedToken, which hits line 1118 in DeserializationContext and reports the exception.
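
The mismatch is easy to reproduce standalone: Jackson refuses to bind a JSON object to java.lang.String, but binds it to a structured target type (a minimal sketch):

    public Map<?, ?> parse() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        String json = "{\"type\":\"ORDER\"}";
        // mapper.readValue(json, String.class) would throw:
        // "Can not deserialize instance of java.lang.String out of START_OBJECT token"
        return mapper.readValue(json, Map.class); // a structured target type works
    }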

The stack trace returned is

Exception in thread "-kinesis-consumer-1" org.springframework.messaging.converter.MessageConversionException: Could not read JSON: Can not deserialize instance of java.lang.String out of START_OBJECT token
 at [Source: [B@2fa8611f; line: 1, column: 1]; nested exception is com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token
 at [Source: [B@2fa8611f; line: 1, column: 1], failedMessage=GenericMessage [payload=byte[112], headers={aws_partitionKey=name, id=9da5972b-c5f8-91d3-32c9-13f474363b8e, aws_shard=shardId-000000000000, aws_stream=test_stream, aws_sequenceNumber=49573594932519422285515180003318172542847248863894437890, timestamp=1496242509589}]
	at org.springframework.messaging.converter.MappingJackson2MessageConverter.convertFromInternal(MappingJackson2MessageConverter.java:224)
	at org.springframework.messaging.converter.AbstractMessageConverter.fromMessage(AbstractMessageConverter.java:175)
	at org.springframework.messaging.converter.AbstractMessageConverter.fromMessage(AbstractMessageConverter.java:167)
	at org.springframework.messaging.converter.CompositeMessageConverter.fromMessage(CompositeMessageConverter.java:55)
	at org.springframework.cloud.stream.binding.MessageConverterConfigurer$ContentTypeConvertingInterceptor.preSend(MessageConverterConfigurer.java:248)
	at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:538)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:415)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
	at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
	at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:292)
	at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:212)
	at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:129)
	at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
	at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
	at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:70)
	at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:64)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
	at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
	at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
	at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:171)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter.access$4300(KinesisMessageDrivenChannelAdapter.java:76)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.processRecords(KinesisMessageDrivenChannelAdapter.java:805)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.access$3200(KinesisMessageDrivenChannelAdapter.java:635)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$2.run(KinesisMessageDrivenChannelAdapter.java:754)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ConsumerInvoker.run(KinesisMessageDrivenChannelAdapter.java:872)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token
 at [Source: [B@2fa8611f; line: 1, column: 1]
	at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:270)
	at com.fasterxml.jackson.databind.DeserializationContext.reportMappingException(DeserializationContext.java:1234)
	at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1122)
	at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1075)
	at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:60)
	at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:11)
	at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3798)
	at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2959)
	at org.springframework.messaging.converter.MappingJackson2MessageConverter.convertFromInternal(MappingJackson2MessageConverter.java:211)
	... 29 more

Checkpointing logic resulting in lost messages in a clustered environment (M2 release)

I'm using batch checkpoint mode in a clustered environment. All works fine with the M1 release, but in M2 I am seeing messages occasionally get dropped.

I think the culprit is the change in commit 2b57f34 to return the value of replace(key, oldVal, newVal), whereas it previously always returned true when existingSequence < sequenceNumber; I'm not too familiar with the code, so I can't be sure this is it. I'm thinking it could be a race condition in ShardCheckpointer where the value stored in DynamoDB changes between the call to getCheckpoint() and checkpointStore.replace(...), causing the method to return false.

All I can say for sure is that I can switch between versions M1 and M2, and in a load test of ~4500 messages I will usually have a handful of messages dropped (~40). Switching back to the M1 release resolves the issue. Unfortunately this is manual performance testing that I haven't automated, so I don't have a simple test case to share.

In my clustered use case I'm happy for my application to receive some duplicates, but obviously not for messages to be dropped. There's a comment in the code about an upcoming "shard leader election implementation", so I guess there are more changes coming to this area?
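
If the race is real, one way to close it is a compare-and-swap retry loop over the metadata store, re-reading the current value on each failed replace. A hypothetical sketch against Spring Integration's ConcurrentMetadataStore API (the key and sequence variables are placeholders):

    // advance the checkpoint only if our sequence number is newer, retrying on CAS failure
    boolean advanced = false;
    while (!advanced) {
        String existing = checkpointStore.get(key);
        if (existing != null && new BigInteger(existing).compareTo(new BigInteger(sequenceNumber)) >= 0) {
            break; // another node already checkpointed at or beyond this sequence
        }
        advanced = existing == null
                ? checkpointStore.putIfAbsent(key, sequenceNumber) == null
                : checkpointStore.replace(key, existing, sequenceNumber);
    }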

Spring Integration AWS - feature for an archive and retry mechanism for S3

In the spring-integration-aws XSD there is no child element for the outbound adapter to house a request-handler-advice-chain (like the File outbound adapter has):

<xsd:element name="request-handler-advice-chain" type="integration:handlerAdviceChainType" minOccurs="0" maxOccurs="1" />

Adding this feature would help to archive files after a successful transfer. Is there a roadmap to add this feature?

Regards
Karthik
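
In the meantime, a similar effect can be sketched in Java configuration by applying an ExpressionEvaluatingRequestHandlerAdvice around the S3 handler (a hypothetical sketch; the channel names, bucket, and success expression are placeholders):

    @Bean
    @ServiceActivator(inputChannel = "toS3", adviceChain = "archiveAdvice")
    public MessageHandler s3MessageHandler(AmazonS3 amazonS3) {
        return new S3MessageHandler(amazonS3, "my-bucket");
    }

    @Bean
    public ExpressionEvaluatingRequestHandlerAdvice archiveAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        // runs only after a successful transfer, e.g. move the local file to an archive directory
        advice.setOnSuccessExpressionString("payload.renameTo(new java.io.File('/archive/' + payload.name))");
        return advice;
    }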

SqsMessageDrivenChannelAdapter.stop() causing `TimeoutException`

When the scheduledFutureByQueue map contains a String key (the queue name) and a Future value whose outcome is null (and state == 0), a TimeoutException is thrown whenever

    future.get(this.queueStopTimeout, TimeUnit.MILLISECONDS);

is called within stop(String logicalQueueName) of SimpleMessageListenerContainer.

Receiving IndexOutOfBounds exception while starting up KinesisMessageDrivenChannelAdapter

I'm seeing IndexOutOfBounds exceptions when I start up a KinesisMessageDrivenChannelAdapter:

Exception in thread "kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-2" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.rangeCheck(ArrayList.java:653)
	at java.util.ArrayList.get(ArrayList.java:429)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter.populateConsumer(KinesisMessageDrivenChannelAdapter.java:555)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter.access$1000(KinesisMessageDrivenChannelAdapter.java:79)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$1.run(KinesisMessageDrivenChannelAdapter.java:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

These seem to be generated infinitely. I tried both 1.1.0.M1 and 8d18399, the current HEAD of master.

As well (and this could be a different issue, but I suspect it's related), I'm seeing infinite ShardConsumers being created:

INFO  | 2017-03-28 10:25:04 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-5] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@57955e00] has been started.
INFO  | 2017-03-28 10:25:05 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-5] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@1460593a] has been started.
INFO  | 2017-03-28 10:25:06 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-5] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@15acc0f4] has been started.
INFO  | 2017-03-28 10:25:06 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@5c51a9e] has been started.
INFO  | 2017-03-28 10:25:07 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-5] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@596eef16] has been started.
INFO  | 2017-03-28 10:25:07 | [kinesisMessageDrivenChannelAdapter-kinesis-consumer-1] kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1 (KinesisMessageDrivenChannelAdapter.java:681) - The [org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$1@de7299e] has been started.
.
.
.

Currently running with 2 shards.

KinesisMessageDrivenChannelAdapter doesn't shut down cleanly

When I shut down my app I receive an exception in KinesisMessageDrivenChannelAdapter:

Exception in thread "kinesisMessageDrivenChannelAdapter-kinesis-dispatcher-2" com.amazonaws.AbortedException: 
	at com.amazonaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:51)
	at com.amazonaws.internal.SdkFilterInputStream.markSupported(SdkFilterInputStream.java:107)
	at com.amazonaws.http.RepeatableInputStreamRequestEntity.isRepeatable(RepeatableInputStreamRequestEntity.java:139)
	at com.amazonaws.http.AmazonHttpClient.shouldRetry(AmazonHttpClient.java:1160)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:715)
	at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:454)
	at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:416)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:365)
	at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2016)
	at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:1986)
	at com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:704)
	at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$1.run(KinesisMessageDrivenChannelAdapter.java:440)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

It looks like handling the AbortedException would solve the problem, but it's unclear where it would be best to do so.
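
One option would be to treat an abort during shutdown as a normal stop signal in the dispatcher loop (a hypothetical sketch; the surrounding loop and logger are placeholders):

    try {
        // DescribeStream / GetRecords calls issued by the dispatcher ...
    }
    catch (AbortedException e) {
        // the SDK aborts in-flight HTTP calls when the client shuts down;
        // swallow the exception if the adapter is no longer running
        if (isRunning()) {
            throw e;
        }
        logger.debug("Kinesis request aborted during shutdown");
    }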

AmazonKinesisException (internal error) causes ShardConsumer thread to terminate

This happened while running my app against Kinesis using the M1 release - I saw the following error in stdout (nothing in the logs), and the consumer thread was terminated. This meant my app kept running but one shard was no longer being serviced.

Exception in thread "-kinesis-consumer-2" com.amazonaws.services.kinesis.model.AmazonKinesisException: null (Service: AmazonKinesis; Status Code: 500; Error Code: InternalFailure; Request ID: c9b72430-13fb-024e-9aca-ac1f6fc0f3fe)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2276)
        at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2252)
        at com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1062)
        at com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:1038)
        at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.getRecords(KinesisMessageDrivenChannelAdapter.java:869)
        at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.lambda$processTask$1(KinesisMessageDrivenChannelAdapter.java:828)
        at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ConsumerInvoker.run(KinesisMessageDrivenChannelAdapter.java:1018)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Since the Kinesis API calls can occasionally fail in this way, the shard consumer should probably handle it gracefully and retry. I'm not 100% sure that the M2 release has the same behaviour, but from a quick look at the code I think only ExpiredIteratorException and ProvisionedThroughputExceededException are handled gracefully. Maybe a catch-all should be added for AmazonKinesisException.

I've only seen this happen once.
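
A hypothetical shape for such a catch-all, retrying transient server-side failures with backoff (the method and field names are placeholders):

    try {
        result = amazonKinesis.getRecords(getRecordsRequest);
    }
    catch (ExpiredIteratorException e) {
        refreshShardIterator(); // already handled gracefully today
    }
    catch (ProvisionedThroughputExceededException e) {
        backoff();              // already handled gracefully today
    }
    catch (AmazonKinesisException e) {
        if (e.getStatusCode() >= 500) {
            backoff();          // transient internal failure: retry instead of killing the thread
        }
        else {
            throw e;
        }
    }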

The hard-coded timeout for provisioning a DynamoDbMetadataStore may be too low

The timeout is specified within DynamoDbMetadataStore

    .withPollingStrategy(new PollingStrategy(new MaxAttemptsRetryStrategy(25), new FixedDelayStrategy(1)));

This provides 25 seconds to provision the table; if the table is not ACTIVE within that time, an error is raised. I have found that when provisioning tables with large read/write capacity, it often takes longer than 25 seconds for the table to become ACTIVE.

It would be useful to be able to configure this timeout through the number of attempts.
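
The request amounts to replacing the hard-coded constants with configurable fields, roughly (a sketch; the field names are hypothetical):

    .withPollingStrategy(new PollingStrategy(
            new MaxAttemptsRetryStrategy(this.createTableRetries),  // default 25
            new FixedDelayStrategy(this.createTableDelaySeconds))); // default 1 second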

S3 bucket name starting with a forward slash

If my S3 bucket name starts with a / (as seen in many examples), S3Session.splitPathToBucketAndKey doesn't parse it correctly and blows up at:

    Assert.state(bucketKey.length > 0 && bucketKey[0].length() >= 3,
            "S3 bucket name must be at least 3 characters long.");

That is, s3MessageSource.setRemoteDirectory("anwar-batch-bucket") works, whereas s3MessageSource.setRemoteDirectory("/anwar-batch-bucket") doesn't.
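
A hypothetical fix would be to normalize a leading slash before splitting, e.g.:

    // tolerate "/bucket/key" style paths by stripping one leading '/'
    String normalized = path.startsWith("/") ? path.substring(1) : path;
    String[] bucketKey = normalized.split("/");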

Key not found exception when passing folder in S3StreamingMessageSource.setRemoteDirectory

While passing just the bucket name, S3StreamingMessageSource.setRemoteDirectory works as expected.

But when passing bucket-name/folder, S3StreamingMessageSource.setRemoteDirectory causes a key-not-found exception.

E.g.
Bucket name: foo
Folder name: bar
Folder bar has a file called hello.log

    s3StreamingMessageSource.setRemoteDirectory("foo/bar");

Then,

AbstractRemoteFileStreamingMessageSource.remotePath appends the folder name twice:

file.getRemoteDirectory() returns foo/bar
file.getFileName() returns bar/hello.log

The method then returns the remote file path as foo/bar/bar/hello.log, which causes the key-not-found exception.
