softwaremill / elasticmq

In-memory message queue with an Amazon SQS-compatible interface. Runs stand-alone or embedded.

Home Page: https://softwaremill.com/open-source/

License: Apache License 2.0


elasticmq's Introduction

ElasticMQ

Ideas, suggestions, problems, questions

tl;dr

  • in-memory message queue system
  • runs stand-alone (download), via Docker or embedded
  • Amazon SQS-compatible interface
  • fully asynchronous implementation, no blocking calls
  • optional UI, queue persistence
  • created and maintained by SoftwareMill

Summary

ElasticMQ is a message queue system, offering an actor-based Scala API and an SQS-compatible REST (query) interface.

ElasticMQ follows the semantics of SQS. Messages are received by polling the queue. When a message is received, it is blocked for a specified amount of time (the visibility timeout). If the message isn't deleted during that time, it will become available for delivery again. Moreover, queues and messages can be configured to always deliver messages with a delay.

The focus in SQS (and ElasticMQ) is to make sure that the messages are delivered. It may happen, however, that a message is delivered twice (if, for example, a client dies after receiving a message and processing it, but before deleting it). That's why clients of ElasticMQ (and Amazon SQS) should be idempotent.
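For illustration, here is a minimal sketch of an idempotent consumer in Scala, using the AWS Java SDK v1 client described later in this README; the in-memory seen set and the helper name are illustrative only, not part of ElasticMQ:

import com.amazonaws.services.sqs.AmazonSQS
import com.amazonaws.services.sqs.model.ReceiveMessageRequest
import scala.collection.mutable

// A sketch only: deduplicate by message ID so that a message delivered twice
// is processed once. A real consumer would keep the "seen" set in a durable store.
def consumeOnce(client: AmazonSQS, queueUrl: String, seen: mutable.Set[String])(process: String => Unit): Unit = {
  val request = new ReceiveMessageRequest(queueUrl)
    .withMaxNumberOfMessages(10)
    .withWaitTimeSeconds(10) // long-poll for up to 10 seconds

  client.receiveMessage(request).getMessages.forEach { m =>
    if (seen.add(m.getMessageId)) process(m.getBody)
    // Delete only after processing; if the consumer dies before this call, the message
    // becomes visible again after the visibility timeout and will be re-delivered.
    client.deleteMessage(queueUrl, m.getReceiptHandle)
  }
}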

As ElasticMQ implements a subset of the SQS query (REST) interface, it is a great SQS alternative both for testing purposes (ElasticMQ is easily embeddable) and for creating systems which work both within and outside of the Amazon infrastructure.

A simple UI is available for viewing real-time queue statistics.


Installation: stand-alone

You can download the stand-alone distribution here: https://s3/.../elasticmq-server-1.4.2.jar

Java 8 or above is required for running the server.

Simply run the jar and you should get a working server, which binds to localhost:9324:

java -jar elasticmq-server-1.4.2.jar

ElasticMQ uses Typesafe Config for configuration. To specify custom configuration values, create a file (e.g. custom.conf), fill it in with the desired values, and pass it to the server:

java -Dconfig.file=custom.conf -jar elasticmq-server-1.4.2.jar

The config file may contain any configuration for Akka and ElasticMQ. Current ElasticMQ configuration values are:

include classpath("application.conf")

# What is the outside visible address of this ElasticMQ node
# Used to create the queue URL (may be different from bind address!)
node-address {
  protocol = http
  host = localhost
  port = 9324
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  # Possible values: relaxed, strict
  sqs-limits = strict
}

rest-stats {
  enabled = true
  bind-port = 9325
  bind-hostname = "0.0.0.0"
}

# Should the node-address be generated from the bind port/hostname
# Set this to true e.g. when assigning port automatically by using port 0.
generate-node-address = false

queues {
  # See next sections
}

queues-storage {
  # See next sections
}

# Region and accountId which will be included in resource ids
aws {
  region = us-west-2
  accountId = 000000000000
}

You can also provide an alternative Logback configuration file (the default is configured to log INFO logs and above to the console):

java -Dlogback.configurationFile=my_logback.xml -jar elasticmq-server-1.4.2.jar

How are queue URLs created

Some of the responses include a queue URL. By default, the URLs will use http://localhost:9324 as the base URL. To customize, you should properly set the protocol/host/port/context in the node-address setting (see above).

You can also set node-address.host to a special value, "*", which will cause any queue URLs created during a request to use the path of the incoming request. This might be useful e.g. in containerized (Docker) deployments.

Note that changing the bind-port and bind-hostname settings does not affect the queue URLs in any way unless generate-node-address is true. In that case, the bind host/port are used to create the node address. This is useful when the port should be automatically assigned (use port 0 in that case; the selected port will be visible in the logs).
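A minimal sketch of this setup, expressed as a Typesafe Config snippet built programmatically (the embedded server accepts such a Config via ElasticMQServerConfig, as shown in the embedded-server section below):

import com.typesafe.config.ConfigFactory

// Let the OS pick a free bind port, and derive the externally visible node address from it.
val config = ConfigFactory
  .parseString(
    """generate-node-address = true
      |rest-sqs.bind-port = 0
      |""".stripMargin)
  .withFallback(ConfigFactory.load())
// This Config can then be passed to new ElasticMQServerConfig(config); the selected
// port is printed in the logs on startup.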

Automatically creating queues on startup

Queues can be automatically created on startup by specifying them in a custom configuration file. For example, create a custom.conf file with the following content:

# the include should be done only once, at the beginning of the custom configuration file
include classpath("application.conf")

queues {
  queue1 {
    defaultVisibilityTimeout = 10 seconds
    delay = 5 seconds
    receiveMessageWait = 0 seconds
    deadLettersQueue {
      name = "queue1-dead-letters"
      maxReceiveCount = 3 // from 1 to 1000
    }
    fifo = false
    contentBasedDeduplication = false
    copyTo = "audit-queue-name"
    moveTo = "redirect-queue-name"
    tags {
      tag1 = "tagged1"
      tag2 = "tagged2"
    }
  }
  queue1-dead-letters { }
  audit-queue-name { }
  redirect-queue-name { }
}

All attributes are optional (except name and maxReceiveCount when a deadLettersQueue is defined). The copyTo and moveTo attributes allow you to achieve behavior that is useful primarily for integration testing: all messages can be either duplicated (using the copyTo attribute) or redirected (using the moveTo attribute) to another queue.

FIFO queue creation

To create a FIFO queue, set the fifo config parameter to true. You can add the .fifo suffix to the queue name yourself (a name containing . has to be surrounded with quotes), for example:

queues {
  "testQueue.fifo" {
    fifo = true
    contentBasedDeduplication = true
  }
}

If not, the suffix will be added automatically during queue creation.

Persisting queues configuration

Queues configuration can be persisted in an external config file in the HOCON format. Note that only the queue metadata (which queues are created, and with what attributes) will be stored, without any messages.

To enable the feature, create a custom configuration file with the following content:

# the include should be done only once, at the beginning of the custom configuration file
include classpath("application.conf")

queues-storage {
  enabled = true
  path = "/path/to/storage/queues.conf"
}

Any time a queue is created, deleted, or its metadata changes, the given file will be updated.

On startup, any queues defined in the given file will be created. Note that the persisted queues configuration takes precedence over queues defined in the main configuration file (as described in the previous section) in the queues section.

Persisting queues and messages to SQL database

Queues and their messages can be persisted to an SQL database at runtime. All events, such as queue or message creation, deletion, or updates, are stored in an H2 file-based database, so that the entire ElasticMQ state can be restored after a server restart.

To enable the feature, create a custom configuration file with the following content:

# the include should be done only once, at the beginning of the custom configuration file
include classpath("application.conf")

messages-storage {
  enabled = true
}

By default, the database file is stored in /data/elasticmq.db. To change this, provide a custom JDBC URI:

# the include should be done only once, at the beginning of the custom configuration file
include classpath("application.conf")

messages-storage {
  enabled = true
  uri = "jdbc:h2:/home/me/elasticmq"
}

On startup, any queues and their messages persisted in the database will be recreated. Note that the persisted queues take precedence over the queues defined in the main configuration file (as described in the previous section) in the queues section.

Starting an embedded ElasticMQ server with an SQS interface

Add the ElasticMQ server to your build.sbt dependencies:

libraryDependencies += "org.elasticmq" %% "elasticmq-server" % "1.4.2"

Simply start the server using custom configuration (see examples above):

val config = ConfigFactory.load("elasticmq.conf")
val server = new ElasticMQServer(new ElasticMQServerConfig(config))
server.start()

Alternatively, a custom REST server can be built using the SQSRestServerBuilder provided in the elasticmq-rest-sqs package:

val server = SQSRestServerBuilder.start()
// ... use ...
server.stopAndWait()

If you need to bind to a different host/port, there are configuration methods on the builder:

val server = SQSRestServerBuilder.withPort(9325).withInterface("localhost").start()
// ... use ...
server.stopAndWait()

You can also request a dynamic port by passing a port value of 0 or by using the withDynamicPort method. To retrieve the port (and other configuration) when using a dynamic port, access the server via waitUntilStarted, for example:

val server = SQSRestServerBuilder.withDynamicPort().start()
server.waitUntilStarted().localAddress().getPort()

You can also provide a custom ActorSystem; for details see the javadocs.

Embedded ElasticMQ can be used from any JVM-based language (Java, Scala, etc.).

(Note that the embedded server created with SQSRestServerBuilder does not load any configuration files, so you cannot automatically create queues on startup as described above. You can of course create queues programmatically.)
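For example (a sketch assuming the AWS Java SDK v1 setup shown in the next section; queue and credential values are illustrative), queues can be created right after the embedded server starts:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder
import com.amazonaws.services.sqs.AmazonSQSClientBuilder
import org.elasticmq.rest.sqs.SQSRestServerBuilder

val server = SQSRestServerBuilder.withDynamicPort().start()
val port = server.waitUntilStarted().localAddress().getPort()

// Point an SQS client at the embedded server and create the queues the tests need.
val client = AmazonSQSClientBuilder.standard()
  .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("x", "x")))
  .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(s"http://localhost:$port", "elasticmq"))
  .build()
client.createQueue("queue1")

// ... run tests ...

client.shutdown()
server.stopAndWait()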

Using the Amazon Java SDK to access an ElasticMQ Server

To use the Amazon Java SDK as an interface to an ElasticMQ server, you just need to change the endpoint:

String endpoint = "http://localhost:9324";
String region = "elasticmq";
String accessKey = "x";
String secretKey = "x";
AmazonSQS client = AmazonSQSClientBuilder.standard()
    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
    .build();

The endpoint value should be the same address as the NodeAddress provided as an argument to SQSRestServerBuilder or in the configuration file.

The rest-sqs-testing-amazon-java-sdk module contains some more usage examples.

Using the Amazon boto (Python) to access an ElasticMQ Server

To use Amazon boto as an interface to an ElasticMQ server, set up the connection using:

region = boto.sqs.regioninfo.RegionInfo(name='elasticmq',
                                        endpoint=sqs_endpoint)
conn = boto.connect_sqs(aws_access_key_id='x',
                        aws_secret_access_key='x',
                        is_secure=False,
                        port=sqs_port,
                        region=region)

where sqs_endpoint and sqs_port are the host and port.

The boto3 interface is different:

client = boto3.resource('sqs',
                        endpoint_url='http://localhost:9324',
                        region_name='elasticmq',
                        aws_secret_access_key='x',
                        aws_access_key_id='x',
                        use_ssl=False)
queue = client.get_queue_by_name(QueueName='queue1')

ElasticMQ via Docker

A Docker image built using GraalVM's native-image is available as softwaremill/elasticmq-native.

To start, run the following (9324 is the default REST-SQS API port, 9325 is the default UI port; exposing the latter is optional):

docker run -p 9324:9324 -p 9325:9325 softwaremill/elasticmq-native

The elasticmq-native image is much smaller (30MB vs 240MB) and starts up much faster (milliseconds instead of seconds), compared to the full JVM version (see below). Custom configuration can be provided by creating a custom configuration file (see above) and using it when running the container:

docker run -p 9324:9324 -p 9325:9325 -v `pwd`/custom.conf:/opt/elasticmq.conf softwaremill/elasticmq-native

If messages storage is enabled, the directory containing database files can also be mapped:

docker run -p 9324:9324 -p 9325:9325 -v `pwd`/custom.conf:/opt/elasticmq.conf -v `pwd`/data:/data softwaremill/elasticmq-native

It is also possible to specify a custom logback.xml config, for example to enable additional debug logging. Some Logback features, like console coloring, will not work due to missing classes in the native image; this can only be solved by building a custom image.

docker run -p 9324:9324 -p 9325:9325 -v `pwd`/custom.conf:/opt/elasticmq.conf -v `pwd`/logback.xml:/opt/logback.xml softwaremill/elasticmq-native

For now, to run the elasticmq-native Docker image on an ARM-based CPU, you have to install QEMU emulation for amd64 in Docker:

docker run --privileged --rm tonistiigi/binfmt --install amd64

ElasticMQ via Docker (full JVM)

A Docker image is built on each release and pushed as softwaremill/elasticmq. Run it using:

docker run -p 9324:9324 -p 9325:9325 softwaremill/elasticmq

The image uses default configuration. Custom configuration can be provided (e.g. to change the port, or create queues on startup) by creating a custom configuration file (see above) and using it when running the container:

docker run -p 9324:9324 -p 9325:9325 -v `pwd`/custom.conf:/opt/elasticmq.conf softwaremill/elasticmq

If messages storage is enabled, the directory containing database files can also be mapped:

docker run -p 9324:9324 -p 9325:9325 -v `pwd`/custom.conf:/opt/elasticmq.conf -v `pwd`/data:/data softwaremill/elasticmq

To pass additional Java system properties (-D), you need to prepare an application.ini file. For instance, to set a custom logback.xml configuration, application.ini should look as follows:

application.ini:
-Dconfig.file=/opt/elasticmq.conf
-Dlogback.configurationFile=/opt/docker/conf/logback.xml

To run the container with a customized application.ini file (and a custom logback.xml in this particular case), use the following command:

docker run -v `pwd`/application.ini:/opt/docker/conf/application.ini -v `pwd`/logback.xml:/opt/docker/conf/logback.xml -p 9324:9324 -p 9325:9325 softwaremill/elasticmq

In case of problems with file mounting on Windows, place application.ini and the configuration file elasticmq.conf in the same directory, then mount this directory to /opt/docker/conf:

--mount type=bind,source="$(pwd)"/somefolder,target=/opt/docker/conf.

Another option is to use a custom Dockerfile:

FROM openjdk:8-jre-alpine

ARG ELASTICMQ_VERSION
ENV ELASTICMQ_VERSION ${ELASTICMQ_VERSION:-1.4.2}

RUN apk add --no-cache curl ca-certificates
RUN mkdir -p /opt/elasticmq/log /opt/elasticmq/lib /opt/elasticmq/conf
RUN curl -sfLo /opt/elasticmq/lib/elasticmq.jar https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-${ELASTICMQ_VERSION}.jar

COPY ${PWD}/elasticmq.conf /opt/elasticmq/conf/elasticmq.conf

WORKDIR /opt/elasticmq

EXPOSE 9324

ENTRYPOINT [ "/usr/bin/java", "-Dconfig.file=/opt/elasticmq/conf/elasticmq.conf", "-jar", "/opt/elasticmq/lib/elasticmq.jar" ]

and override the entrypoint passing the required properties.

ElasticMQ dependencies in SBT

// Scala 2.13 and 2.12
val elasticmqSqs        = "org.elasticmq" %% "elasticmq-rest-sqs" % "1.4.2"

If you don't want the SQS interface, but want to use the actors directly, you can add a dependency only on the core module:

val elasticmqCore       = "org.elasticmq" %% "elasticmq-core" % "1.4.2"

If you want to use a snapshot version, you will need to add the https://oss.sonatype.org/content/repositories/snapshots/ repository to your configuration.
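For example, a resolver can be added in build.sbt (a minimal sketch):

resolvers += "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"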

ElasticMQ dependencies in Maven

Dependencies:

<dependency>
    <groupId>org.elasticmq</groupId>
    <artifactId>elasticmq-rest-sqs_2.12</artifactId>
    <version>1.4.2</version>
</dependency>

If you want to use a snapshot version, you will need to add the https://oss.sonatype.org/content/repositories/snapshots/ repository to your configuration.

Current versions

Stable: 1.4.2

Logging

ElasticMQ uses SLF4J for logging. By default, no logging backend is included as a dependency; Logback is recommended.
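For example, Logback can be added as the backend via sbt (the version shown is only an example):

libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.11"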

Performance

Tests done on a 2012 MBP, 2.6GHz, 16GB RAM, no replication. Throughput is in messages per second (messages are small).

Directly accessing the client:

Running test for [in-memory], iterations: 10, msgs in iteration: 100000, thread count: 1.
Overall in-memory throughput: 21326.054040

Running test for [in-memory], iterations: 10, msgs in iteration: 100000, thread count: 2.
Overall in-memory throughput: 26292.956117

Running test for [in-memory], iterations: 10, msgs in iteration: 100000, thread count: 10.
Overall in-memory throughput: 25591.155697

Through the SQS REST interface:

Running test for [rest-sqs + in-memory], iterations: 10, msgs in iteration: 1000, thread count: 20.
Overall rest-sqs + in-memory throughput: 2540.553587

Running test for [rest-sqs + in-memory], iterations: 10, msgs in iteration: 1000, thread count: 40.
Overall rest-sqs + in-memory throughput: 2600.002600

Note that both the client and the server were on the same machine.

Test class: org.elasticmq.performance.LocalPerformanceTest.

Building, running, and packaging

To build and run with debug (this will listen for a remote debugger on port 5005):

~/workspace/elasticmq $ sbt -jvm-debug 5005
> project server
> run

To build a jar-with-dependencies:

~/workspace/elasticmq $ sbt
> project server
> assembly

Building the native image

Do not forget to adjust the CPU and memory settings for the Docker process; the build was verified with 6 CPUs, 8GB of memory and 2GB of swap. Also, make sure that you are running sbt with the GraalVM JDK, as the way the jars are composed seems to differ from other Java implementations and affects the native-image process that is run later. To rebuild the native image, run:

sbt "project nativeServer; clean; assembly; docker:publishLocal"

Generating the GraalVM config files is currently a manual process. You need to run the fat-jar using the GraalVM JVM (with native-image installed using gu), and then run the following commands to generate the configs:

  • java -agentlib:native-image-agent=config-output-dir=... -jar elasticmq-server-assembly.jar
  • java -agentlib:native-image-agent=config-merge-dir=... -Dconfig.file=test.conf -jar elasticmq-server-assembly.jar (to additionally generate config needed to load custom elasticmq config)

These files should be placed in native-server/src/main/resources/META-INF/native-image and are automatically used by the native-image process.

In case of issues with running GraalVM with the native-image-agent, it's possible to execute the above commands inside a Docker container (the image is generated by the sbt command above). graalVmVersion is defined in build.sbt:

docker run -it -v `pwd`:/opt/graalvm --entrypoint /bin/bash --rm ghcr.io-graalvm-graalvm-ce-native-image:java11-${graalVmVersion}

Building multi-architecture image

Publishing a Docker image for two different platforms, amd64 and arm64, is possible with the Docker Buildx plugin. Docker Buildx is included in Docker Desktop and in Docker Linux packages when installed using the DEB or RPM packages. build.sbt has the following setup:

  • dockerBuildxSettings creates a Docker Buildx instance
  • the Docker base image is openjdk:11-jdk-stretch, which supports multi-arch images
  • dockerBuildCommand is extended with the buildx subcommand
  • dockerBuildOptions has two additional parameters: --platform=linux/arm64,linux/amd64 and --push

For the native server, the configuration is the same apart from the Docker base image.

The --push parameter is crucial. Since the docker buildx build subcommand does not store the resulting image in the local Docker image list, we need that flag to determine where the final image will be stored. The --load flag makes the output destination of type docker; however, this currently works only for single-architecture images. Therefore, both sbt commands, docker:publishLocal and docker:publish, push images to a Docker registry.

To change this, switch the dockerBuildOptions parameters:

  • from --push to --load and
  • from --platform=linux/arm64,linux/amd64 to --platform=linux/amd64

To build images locally:

  • switch sbt to the server module: sbt project server (or sbt project nativeServer for the native-server module)
  • make sure Docker Buildx is available: docker buildx version
  • create a Docker Buildx instance: docker buildx create --use --name multi-arch-builder
  • generate the Dockerfile by executing sbt docker:stage - it will be generated in server/target/docker/stage
  • generate multi-arch image and push it to Docker Hub:
docker buildx build --platform=linux/arm64,linux/amd64 --push -t softwaremill/elasticmq .
  • or generate single-arch image and load it to docker images locally:
docker buildx build --platform=linux/amd64 --load -t softwaremill/elasticmq .

Tests and coverage

To run the tests:

~/workspace/elasticmq $ sbt test

To check the coverage reports:

~/workspace/elasticmq $ sbt
> coverage
> test
> coverageReport
> coverageAggregate

Although it's mostly the core project that is relevant for coverage testing, each project's report can be found in its target directory:

  • core/target/scala-2.12/scoverage-report/index.html
  • common-test/target/scala-2.12/scoverage-report/index.html
  • rest/rest-sqs/target/scala-2.12/scoverage-report/index.html
  • server/target/scala-2.12/scoverage-report/index.html

The aggregate report can be found at target/scala-2.12/scoverage-report/index.html

UI


The UI provides real-time information about the state of messages and the attributes of each queue.

Using UI in docker image

The UI is bundled with both the standard and native images. It is exposed on the address defined in the rest-stats configuration (by default 0.0.0.0:9325).

To turn it off, set the rest-stats.enabled flag to false.

Using UI locally

You can start the UI via the yarn start command in the ui directory; it will run at localhost:3000.

MBeans

ElasticMQ exposes a Queues MBean with three operations (a usage sketch follows the list):

  • QueueNames - returns an array of queue names
  • NumberOfMessagesForAllQueues - returns tabular data with the number of messages per queue
  • getNumberOfMessagesInQueue - returns the number of messages in the specified queue
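As a sketch of how this MBean could be queried from Scala via standard JMX (the object name below is an assumption for illustration - check jconsole for the actual name, and whether the first two entries are exposed as attributes or as operations):

import java.lang.management.ManagementFactory
import javax.management.ObjectName

// Connect to the local platform MBean server (for a remote server, use a JMX connector instead).
val mbeanServer = ManagementFactory.getPlatformMBeanServer
// Hypothetical object name - verify the real one with jconsole.
val queues = new ObjectName("org.elasticmq:name=Queues")

// Assuming QueueNames is exposed as a readable attribute:
val queueNames = mbeanServer.getAttribute(queues, "QueueNames").asInstanceOf[Array[String]]
queueNames.foreach { name =>
  // getNumberOfMessagesInQueue takes the queue name as a parameter, so it is invoked as an operation.
  val count = mbeanServer.invoke(queues, "getNumberOfMessagesInQueue", Array[AnyRef](name), Array("java.lang.String"))
  println(s"$name: $count")
}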

Technology

  • Core: Scala and Pekko.
  • Rest server: Pekko HTTP, a high-performance, asynchronous, REST/HTTP toolkit.
  • Testing the SQS interface: Amazon Java SDK; see the rest-sqs-testing-amazon-java-sdk module for the testsuite.

Commercial Support

We offer commercial support for ElasticMQ and related technologies, as well as development services. Contact us to learn more about our offer!

Copyright

Copyright (C) 2011-2021 SoftwareMill https://softwaremill.com.

elasticmq's People

Contributors

adamw, brainoutsource, conniec, dbroda, ddossot, dependabot[bot], ghostbuster91, hayesgm, ideas-into-software, janjaali, jchyb, jononu, kamegu, kamil-rafalko, katlasik, mberlanda, mergify[bot], micossow, opalo, pask423, pbuda, pierscin, scala-steward, simong, slobo, softwaremill-ci, sullis, szymonneo, willejs, yamachu


elasticmq's Issues

Can't send a message w/binary attribute and/or more than one attribute

Hello!

I'm trying to send a message having two String attributes and a message having at least one Binary attribute.

For example, this request fails with response code 500:

{  
  "Action":[  
    "SendMessage"
  ],
  "Version":[  
    "2012-11-05"
  ],
  "MessageBody":[  
    "Test message"
  ],
  "MessageAttribute.1.Name":[  
    "stringAttribute"
  ],
  "MessageAttribute.1.Value.StringValue":[  
    "someString"
  ],
  "MessageAttribute.1.Value.DataType":[  
    "String"
  ],
  "MessageAttribute.2.Name":[  
    "binaryAttribute"
  ],
  "MessageAttribute.2.Value.BinaryValue":[  
    "c29tZVN0cmluZw=="
  ],
  "MessageAttribute.2.Value.DataType":[  
    "Binary"
  ]
}

However, a message having only one String attribute is sent fine.

Unfortunately, I have no logs from the ElasticMQ, the only thing's visible from the client side is 500 There was an internal server error.

Support for wait_time_seconds=?

Hello,

I am attempting to use elasticmq as a fake backend for local testing. With the exception of one call, it seems to work very well (for my purposes, at least). Below, I attempt to set the default wait_time_seconds attribute value, which leads to an exception. The read operation surprisingly works. Is this something that is intended to be supported?

aws_options = {
  use_ssl: false,
  sqs_endpoint: "localhost",
  sqs_port: 9324,
  access_key_id: "xxx",
  secret_access_key: "xxx"
}
AWS.config(aws_options)

sqs = AWS::SQS.new
queue_name = "test_queue"
q = sqs.queues.create(queue_name)
# Returns 0:
q.wait_time_seconds
# Raises an exception with elasticmq, but not SQS:
q.wait_time_seconds = 20

The backtrace is:

AWS::SQS::Errors::InvalidAttributeName: See the SQS docs.
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/aws-sdk-1.10.0/lib/aws/core/client.rb:360:in `return_or_raise'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/aws-sdk-1.10.0/lib/aws/core/client.rb:461:in `client_request'
    from (eval):3:in `set_queue_attributes'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/aws-sdk-1.10.0/lib/aws/sqs/queue.rb:754:in `set_attribute'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/aws-sdk-1.10.0/lib/aws/sqs/queue.rb:405:in `wait_time_seconds='
    from (irb):18
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/cli.rb:619:in `console'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/vendor/thor/task.rb:27:in `run'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/vendor/thor/invocation.rb:120:in `invoke_task'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/vendor/thor.rb:344:in `dispatch'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/vendor/thor/base.rb:434:in `start'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/bin/bundle:20:in `block in <top (required)>'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/lib/bundler/friendly_errors.rb:3:in `with_friendly_errors'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/gems/bundler-1.3.5/bin/bundle:20:in `<top (required)>'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/bin/bundle:19:in `load'
    from /Users/ahannon/.rvm/gems/ruby-1.9.3-p327@event_queue_manager/bin/bundle:19:in `<main>'

And thanks for putting the effort into this project. Cheers!

SQS queue url seems to be hardcoded to 9324

this will cause connection error when trying to use AmazonSQSClient to send msg later, because it tries to connect to port 9324 instead of the correct ephemeral port 59923.

here is the output from the test code below.

16:29:30,699  INFO main TheSQSRestServerBuilder:158 - Started SQS rest server, bind address localhost:0, visible server address http://localhost:9324
[INFO] [12/22/2014 16:29:31.121] [elasticmq-akka.actor.default-dispatcher-4] [akka://elasticmq/user/IO-HTTP/listener-0] Bound to localhost/127.0.0.1:0
actual port bound: 59923
16:29:31,741  INFO elasticmq-akka.actor.default-dispatcher-4 QueueManagerActor:25 - Creating queue QueueData(foo,MillisVisibilityTimeout(30000),PT0S,PT0S,2014-12-22T16:29:31.726-08:00,2014-12-22T16:29:31.726-08:00)
queueUrl: http://localhost:9324/queue/foo

here is the complete test code to reproduce the issue

package com.netflix.schlep.sqs;

import akka.io.Tcp;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.ListQueuesResult;
import com.netflix.schlep.sqs.aws.FixedAWSCredentialsProvider;
import org.elasticmq.rest.sqs.SQSRestServer;
import org.elasticmq.rest.sqs.SQSRestServerBuilder;
import org.junit.Assert;
import org.junit.Test;
import scala.util.Try;

import java.util.List;

public class ElasticMQPortTest {

    @Test
    public void portZeroTest() throws Exception {
        final String hostname = "localhost";
        int port = 0;
        SQSRestServer sqsServer = SQSRestServerBuilder
                .withInterface(hostname)
                .withPort(port)
                .start();
        sqsServer.waitUntilStarted();
        Try<Object> tryVal = sqsServer.startFuture().value().get();
        if(tryVal.isSuccess()) {
            Tcp.Bound bound = (Tcp.Bound) tryVal.get();
            // extract the actual port number bound
            port = bound.localAddress().getPort();
        } else {
            throw new RuntimeException("startup failed");
        }

        final String endpoint = String.format("http://%s:%d", hostname, port);
        AmazonSQSClient sqsClient = new AmazonSQSClient(new FixedAWSCredentialsProvider("fakeAccessId", "fakeSecretKey"));
        sqsClient.setEndpoint(endpoint);

        final String queueName = "foo";
        sqsClient.createQueue(queueName);
        ListQueuesResult listQueuesResult = sqsClient.listQueues(queueName);
        List<String> queueUrls = listQueuesResult.getQueueUrls();
        Assert.assertEquals(1, queueUrls.size());
        final String queueUrl = queueUrls.get(0);
        System.out.println("queueUrl: " + queueUrl);
    }
}

Support application/x-www-form-urlencoded Request Content Type

Hi Adam, thanks for the great work.

Would it be possible for ElasticMQ 0.9+ to support request coming in as application/x-www-form-urlencoded? Perl module Amazon::SQS::Simple utilizes that, and it works fine on live SQS and 0.8.12, but falls over on emq 0.9.0 (probably something when moving from Spray to Akka?).

On calling CreateQueue: 400 Bad Request 
Invalid request: UnsupportedRequestContentTypeRejection(Set(application/x-www-form-urlencoded)); see the SQS docs.

Cheers

Ask timed out on [Actor[akka://elasticmq/user/IO-HTTP#940271636]] does not kill process

Sometimes when I automatically execute elasticmq on starting machine I got

12:05:06.080 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.8.7) ... 12:05:59.947 [main] INFO o.e.rest.sqs.TheSQSRestServerBuilder 
- Started SQS rest server, bind address 0.0.0.0:9324, 
visible server address http://10.12.0.25:9324 12:06:09.865 
[main] ERROR org.elasticmq.server.Main$ - Uncaught exception in thread: main akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://elasticmq/user/IO-HTTP#940271636]] after [10000 ms] 
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375) ~[elasticmq-server-0.8.7.jar:0.8.7] 
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45] 

The java process is running, but sqs service is unresponsive.

I would expect two possible behaviours:

  • elasticmq retries sqs until success,
  • or elasticmq kills itself so I can restart it by script.

When machine is not under pressure, (all other services have already started), elasticmq will start properly. Problematic is concurrent start with other services.

IO-HTTP ask timed out

I recently upgraded my vagrant box to ubuntu trusty 14.04, and now every time I execute ElasticMQ, I get an ask timeout error:

vagrant@vagrant-ubuntu-trusty-64:~/elasticmq$ java -jar elasticmq-server-0.8.1.jar
13:06:44.161 [main] INFO  org.elasticmq.server.Main$ - Starting ElasticMQ server (0.8.1) ...
13:06:46.470 [main] INFO  o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
13:06:47.520 [main] ERROR org.elasticmq.server.Main$ - Uncaught exception in thread: main
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://elasticmq/user/IO-HTTP#1257263541]] after [1000 ms]
        at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375) ~[elasticmq-server-0.8.1.jar:0.8.1]
        at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_55]
[INFO] [06/09/2014 13:06:47.811] [elasticmq-akka.actor.default-dispatcher-4] [akka://elasticmq/user/IO-HTTP/listener-0] Bound to /0.0.0.0:9324
[INFO] [06/09/2014 13:06:47.823] [elasticmq-akka.actor.default-dispatcher-3] [akka://elasticmq/deadLetters] Message [akka.io.Tcp$Bound] from Actor[akka://elasticmq/user/IO-HTTP/listener-0#-2113371191] to Actor[akka://elasticmq/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

The clincher is that it still binds to the port and is still usable; it's just the inconvenience that my test scripts fail because of that "ERROR" line. Here's the JRE I'm running. The same distribution worked under ubuntu precise:

vagrant@vagrant-ubuntu-trusty-64:~/elasticmq$ java -version
java version "1.7.0_55"
OpenJDK Runtime Environment (IcedTea 2.4.7) (7u55-2.4.7-1ubuntu1)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)

Not super high priority as it still works, but thought it was worth a report. Let me know if there's any other way I can help, and thanks again for the awesome product!

Persist queues after restart

Is there any way to persist the queues after restarting ElasticMQ? I'm not really interested in persisting the messages, just the queue itself.

Timeouts for 20 second long poll requests

Hi,

When performing a long poll of 20 seconds (the longest allowed by SQS) I'm getting responses of The server was not able to produce a timely response to your request. or There was an internal server error.

This looks to be an internal timeout:
14:01:43.897 [elasticmq-akka.actor.default-dispatcher-13] ERROR o.e.r.s.TheSQSRestServerBuilder$$anon$1 - Exception when running routes akka.pattern.AskTimeoutException: Timed out at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:312) ~[elasticmq-server-0.7.0.jar:0.7.0] at akka.actor.DefaultScheduler$$anon$8.run(Scheduler.scala:191) ~[elasticmq-server-0.7.0.jar:0.7.0] at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:137) [elasticmq-server-0.7.0.jar:0.7.0] at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:506) [elasticmq-server-0.7.0.jar:0.7.0] at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262) [elasticmq-server-0.7.0.jar:0.7.0] at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975) [elasticmq-server-0.7.0.jar:0.7.0] at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478) [elasticmq-server-0.7.0.jar:0.7.0] at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104) [elasticmq-server-0.7.0.jar:0.7.0] 14:01:43.900 [elasticmq-akka.actor.default-dispatcher-13] WARN akka://elasticmq/user/IO-HTTP/listener-0/1 - Cannot dispatch HttpResponse as response (part) for POST request to 'http://queue1.internal.office.blubolt.com:9324/queue/PPDActions' since current response state is 'Completed' but should be 'Uncompleted'

I haven't tried raising any of the various timeouts in the application config, but can confirm that long polls of, for example, 10 seconds work just fine.

Let me know if you need any further information.

Cheers,
Trevor

MD5 Mismatch for some SQS messages

Firstly, thank you for making this excellent tool, it has been very useful.

I have found that some SQS messages when read through the Amazon SDK result in errors like:
MD5 hash mismatch for message id d97a1912-c5a3-4f41-8aa5-e6e37a87474f
at Amazon.SQS.Util.AmazonSQSUtil.ValidateMD5(String message, String messageId, String md5FromService)
at Amazon.SQS.Util.AmazonSQSUtil.ValidateMD5(Message message)
at Amazon.SQS.Util.AmazonSQSUtil.ValidateReceiveMessage(ReceiveMessageResponse response)
at Amazon.SQS.AmazonSQSClient.ReceiveMessage(ReceiveMessageRequest request)

I have experimented with the text I am sending. The following test case will result in that error. (It is worth adding that this message does work on the real amazon SQS)
"{"Body":"{\"Event\":\"{\\"RequestId\\":\\"b6889680-c986-4f99-9dff-a29000a800b1\\"\\"EmailSubject\\":\\"Toolbox – reset password\\"}\"}"}"

However other more complex messages go through fine.
Thank you,
Ivan

Feature Request: ReceiptHandles

I am using ElasticMQ to test code that will be used with SQS. Thus, it is important for me that ElasticMQ conforms with the behavior of SQS, even if that's a bit weird.

One of the features of SQS is that it returns a ReceiptHandle on receive calls. This handle is then used in "delete" and "set visibility" messages to determine the message to delete and to change the visibility.

In SQS, the idea is that the handle must be different than the message id because only the last consumer to get the message has the right to delete it. That is, if consumer A gets message M, then the message timeout expires, then consumer B gets message M, only consumer B is allowed to delete it. A delete request from A should not throw an error, but the message should not be deleted.

However, ElasticMQ does not implement this behavior. The returned receipt handle is always the same as the message id, and on "delete", the receipt handle is treated as a message id.

So, do you think it's possible to bring ElasticMQ's behavior closer to SQS? All what's needed is to generate a receipt handle at "receive" (using UUID's random, for example), and then keep a map in the server from message id to last receipt handle to ensure that the client requesting a delete or set visibility is allowed to do that. That would be trivial for InMemoryStorage, but I imagine there would be other implications for distributed environments.

Maven repository down

Hitting https://nexus.softwaremill.com/content/repositories/releases returns a 503 Service Temporarily Unavailable.

Spring Cloud produced messages fail parsing

I'm trying to use an embedded elasticmq in my Spring Boot/Spring Cloud application but getting exceptions when sending a message:

2015-04-23 22:41:53.312 ERROR 14025 --- [t-dispatcher-13] o.e.r.s.TheSQSRestServerBuilder$$anon$1  : Exception when running routes

java.lang.Exception: Currently only handles String typed attributes
    at org.elasticmq.rest.sqs.SendMessageDirectives$$anonfun$getMessageAttributes$1.apply(SendMessageDirectives.scala:62)
    at org.elasticmq.rest.sqs.SendMessageDirectives$$anonfun$getMessageAttributes$1.apply(SendMessageDirectives.scala:53)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.immutable.Range.foreach(Range.scala:166)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.elasticmq.rest.sqs.SendMessageDirectives$class.getMessageAttributes(SendMessageDirectives.scala:53)
    at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anon$1.getMessageAttributes(SQSRestServerBuilder.scala:89)
    at org.elasticmq.rest.sqs.SendMessageDirectives$class.doSendMessage(SendMessageDirectives.scala:72)
    at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anon$1.doSendMessage(SQSRestServerBuilder.scala:89)
    at org.elasticmq.rest.sqs.SendMessageBatchDirectives$$anonfun$1$$anonfun$apply$1$$anonfun$2.apply(SendMessageBatchDirectives.scala:16)
    at org.elasticmq.rest.sqs.SendMessageBatchDirectives$$anonfun$1$$anonfun$apply$1$$anonfun$2.apply(SendMessageBatchDirectives.scala:15)
    at org.elasticmq.rest.sqs.BatchRequestsModule$$anonfun$2.apply(BatchRequestsModule.scala:35)
    at org.elasticmq.rest.sqs.BatchRequestsModule$$anonfun$2.apply(BatchRequestsModule.scala:31)
    at scala.collection.immutable.List.map(List.scala:273)
    at org.elasticmq.rest.sqs.BatchRequestsModule$class.batchRequest(BatchRequestsModule.scala:31)
    at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anon$1.batchRequest(SQSRestServerBuilder.scala:89)
    at org.elasticmq.rest.sqs.SendMessageBatchDirectives$$anonfun$1$$anonfun$apply$1.apply(SendMessageBatchDirectives.scala:15)
    at org.elasticmq.rest.sqs.SendMessageBatchDirectives$$anonfun$1$$anonfun$apply$1.apply(SendMessageBatchDirectives.scala:12)
    at org.elasticmq.rest.sqs.directives.AnyParamDirectives2$$anonfun$anyParamsMap$1$$anonfun$apply$1.apply(AnyParamDirectives2.scala:13)
    at org.elasticmq.rest.sqs.directives.AnyParamDirectives2$$anonfun$anyParamsMap$1$$anonfun$apply$1.apply(AnyParamDirectives2.scala:11)
    at spray.routing.ApplyConverterInstances$$anon$22$$anonfun$apply$1.apply(ApplyConverterInstances.scala:25)
    at spray.routing.ApplyConverterInstances$$anon$22$$anonfun$apply$1.apply(ApplyConverterInstances.scala:24)
    at spray.routing.ConjunctionMagnet$$anon$1$$anon$2$$anonfun$happly$1$$anonfun$apply$1.apply(Directive.scala:38)
    at spray.routing.ConjunctionMagnet$$anon$1$$anon$2$$anonfun$happly$1$$anonfun$apply$1.apply(Directive.scala:37)
    at spray.routing.directives.BasicDirectives$$anon$1.happly(BasicDirectives.scala:26)
    at spray.routing.ConjunctionMagnet$$anon$1$$anon$2$$anonfun$happly$1.apply(Directive.scala:37)
    at spray.routing.ConjunctionMagnet$$anon$1$$anon$2$$anonfun$happly$1.apply(Directive.scala:36)
    at spray.routing.directives.BasicDirectives$$anon$2.happly(BasicDirectives.scala:79)
    at spray.routing.Directive$$anon$7$$anonfun$happly$4.apply(Directive.scala:86)
    at spray.routing.Directive$$anon$7$$anonfun$happly$4.apply(Directive.scala:86)
    at spray.routing.directives.BasicDirectives$$anon$3$$anonfun$happly$1.apply(BasicDirectives.scala:92)
    at spray.routing.directives.BasicDirectives$$anon$3$$anonfun$happly$1.apply(BasicDirectives.scala:92)
    at spray.routing.directives.BasicDirectives$$anon$3$$anonfun$happly$1.apply(BasicDirectives.scala:92)
    at spray.routing.directives.BasicDirectives$$anon$3$$anonfun$happly$1.apply(BasicDirectives.scala:92)
    at spray.routing.directives.ExecutionDirectives$$anonfun$handleExceptions$1$$anonfun$apply$4.apply(ExecutionDirectives.scala:35)
    at spray.routing.directives.ExecutionDirectives$$anonfun$handleExceptions$1$$anonfun$apply$4.apply(ExecutionDirectives.scala:33)
    at org.elasticmq.rest.sqs.directives.FutureDirectives$$anonfun$futureRouteToRoute$1$$anonfun$apply$1.apply(FutureDirectives.scala:14)
    at org.elasticmq.rest.sqs.directives.FutureDirectives$$anonfun$futureRouteToRoute$1$$anonfun$apply$1.apply(FutureDirectives.scala:11)
    at scala.util.Success$$anonfun$map$1.apply(Try.scala:236)
    at scala.util.Try$.apply(Try.scala:191)
    at scala.util.Success.map(Try.scala:236)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[Fatal Error] :1:1: Content is not allowed in prolog.
[Fatal Error] :1:1: Content is not allowed in prolog.

I originally thought that it was Spring incorrectly creating the message, which elasticMq was then failing to process, however, when running the same code against a real SQS service, both sending and receiving messages works.

My project code looks like this:

@SpringBootApplication
@Import(MessagingConfig.class)
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
@Configuration
public class MessagingConfig implements InitializingBean, DisposableBean {

    private SQSRestServer sqsRestServer;

    @Value("${elasticmq.enabled:false}")
    private boolean elasticMqEnabled;

    @Autowired
    private AmazonSQSAsync client;

    @Bean
    public QueueMessagingTemplate queueMessagingTemplate(AmazonSQS amazonSqs, ResourceIdResolver resourceIdResolver) {
        return new QueueMessagingTemplate(amazonSqs, resourceIdResolver);
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        if (elasticMqEnabled) {
            sqsRestServer = SQSRestServerBuilder.start();
            sqsRestServer.waitUntilStarted();

            client.setEndpoint("http://localhost:9324");
        }
        client.createQueue("TestQueue");
    }

    @Override
    public void destroy() throws Exception {
        String queueUrl = client.getQueueUrl("TestQueue").getQueueUrl();
        client.deleteQueue(queueUrl);

        if (elasticMqEnabled) {
            sqsRestServer.stopAndWait();
        }
    }
}
@Component
public class Sender {

    @Autowired
    private QueueMessagingTemplate queueMessagingTemplate;

    public void send(MyMessage message) {
        queueMessagingTemplate.convertAndSend("TestQueue", message);
    }
}
@Component
public class GreetingEventReceiver {
    @MessageMapping("TestQueue")
    public void receiveGreetingEvent(Greeting greeting) {
        System.out.println(greeting.getMessage());
    }
}

Akka dead letters encountered

This is the stacktrace:

--------------------------------------
[INFO] [12/18/2015 14:19:00.695] [elasticmq-akka.actor.default-dispatcher-9] [akka://elasticmq/deadLetters] Message [scala.collection.immutable.Nil$] from Actor[akka://elasticmq/user/$a/$a#-833948945] to Actor[akka://elasticmq/deadLetters] was not delivered. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[Fatal Error] :1:1: Content is not allowed in prolog.
| Error 2015-12-18 14:19:00,746 [elasticmq-akka.actor.default-dispatcher-10] ERROR sqs.TheSQSRestServerBuilder$$anon$1  - Exception when running routes
Message: Ask timed out on [Actor[akka://elasticmq/user/$a/$a#-833948945]] after [21000 ms]
   Line | Method
->> 334 | apply$mcV$sp     in akka.pattern.PromiseActorRef$$anonfun$1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
|   117 | run              in akka.actor.Scheduler$$anon$7
|   599 | unbatchedExecute in scala.concurrent.Future$InternalCallbackExecutor$
|   109 | execute          in scala.concurrent.BatchingExecutor$class
|   597 | execute . . . .  in scala.concurrent.Future$InternalCallbackExecutor$
|   467 | executeTask      in akka.actor.LightArrayRevolverScheduler$TaskHolder
|   419 | executeBucket$1  in akka.actor.LightArrayRevolverScheduler$$anon$8
|   423 | nextTick         in     ''
|   375 | run . . . . . .  in     ''
^   745 | run              in java.lang.Thread
2015-12-18 14:19:02,604 [quartzScheduler_Worker-2] DEBUG job.RememberLoginAuthTokenDeleteJob  - Deleting expired REMEMBER_LOGIN tokens...
2015-12-18 14:19:03,917 [quartzScheduler_Worker-10] D

Is there a way to fix this issue? I'm using elasticmq version 0.8.12. Thanks

SendMessage response locks up the AWS .NET SDK

The SendMessage call returns a "// TODO:" comment in the XML response. This breaks the AWS .NET SDK. 0.8.1 does not have this issue.

The offending file is at rest/rest-sqs/src/main/scala/org/elasticmq/rest/sqs/SendMessageDirectives.scala.

java.lang.NoClassDefFoundError for embedded server

Hi, I am trying to start an embedded ElasticMQ server for testing SQS. Following your example, I'm just doing

val server = SQSRestServerBuilder.start()

Here is my stacktrace, I'm not sure what is wrong, maybe some config is not set?

java.lang.NoClassDefFoundError: scala/Predef$any2stringadd$
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at akka.actor.RootActorPath.(ActorPath.scala:165)
at akka.event.Logging$StandardOutLogger.(Logging.scala:779)
at akka.event.Logging$.(Logging.scala:787)
at akka.event.Logging$.(Logging.scala)
at akka.event.LoggingBus$class.setUpStdoutLogger(Logging.scala:71)
at akka.event.LoggingBus$class.startStdoutLogger(Logging.scala:87)
at akka.event.EventStream.startStdoutLogger(EventStream.scala:26)
at akka.actor.ActorSystemImpl.(ActorSystem.scala:571)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:108)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anonfun$getOrCreateActorSystem$2.apply(SQSRestServerBuilder.scala:171)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anonfun$getOrCreateActorSystem$2.apply(SQSRestServerBuilder.scala:170)
at scala.Option.getOrElse(Option.scala:120)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder.getOrCreateActorSystem(SQSRestServerBuilder.scala:170)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder.start(SQSRestServerBuilder.scala:82)
at com.twilio.auditevents.processor.SQSSendReceive$$anonfun$1.apply$mcV$sp(SQSSendReceive.scala:39)

SQSSendReceive is the file I am calling the build server method from.

ElasticMQ bails on unsupported properties when updating queue config

The MessageRetentionPeriod queue attribute is not supported by ElasticMQ but is supported by SQS. Trying to update this attribute on a queue will cause a 400 error.

My use case is testing. I use ElasticMQ as a fake SQS for testing. Therefore, ElasticMQ should support this attribute, or if that proves difficult, this case (and everything else supported by SQS but not yet supported by ElasticMQ, e.g. RedrivePolicy) should be a no-op with a logged warning instead of throwing 400. Perhaps implement an SQS compatibility mode for this.

Characters being removed from messages

Hi,

I am using a simple queue with in memory storage. When I send a message that contains "\r", this character is removed from the message somehow.

Using a wrapper around amazon's Java SQS client, I do something like this:

Queue q = client.createQueue("myqueue");
client.send(q, "Test\rMessage");
Message m = client.receive(q);

However, here m.getBody() is "TestMessage", the "\r" is lost somewhere.

I tested the same code in SQS, so this looks like a problem in ElasticMQ's handlers. Can you verify whether this is really a problem in ElasticMQ?

Support for "Access-Control-Allow-Origin"

Right now trying to use the browser (JS) SDK leads to:

XMLHttpRequest cannot load http://localhost:9324/.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:8000' is therefore not allowed access.
The response had HTTP status code 400.

i.e. the browser is preventing CORS.

See: https://coderwall.com/p/0izzta/cors-directive-for-spray
and: http://jkinkead.blogspot.com/2014/11/handling-cors-headers-with-spray-routing.html

Feature Request: support SentTimestamp attribute

I am using ElasticMQ to test a monitoring tool for SQS. The intent of the tool is to look at the age of the oldest items in the queue (or at least the first ones found by SQS) and alert if those messages are beyond a certain age.

ElasticMQ doesn't set this attribute (or any others that SQS may set) when messages are added to a queue. I have been able to work around it by adding a String attribute called SentTimestamp myself, however I'm unsure at this point how SQS itself will respond to that, especially since I think this should be a Number data type. When I try to add SentTimestamp with a Number data type as follows, I get this error:

public void put(Message message) {
    client.sendMessage(new SendMessageRequest(getQueueURL(), message.getContent())
            .withMessageAttributes(sentTimestamp()));
}

private Map<String, MessageAttributeValue> sentTimestamp() {
    Map<String,MessageAttributeValue> toReturn = new LinkedHashMap<String, MessageAttributeValue>();
    toReturn.put("SentTimestamp", new MessageAttributeValue()
            .withDataType("Number")
            .withStringValue(Long.toString(System.currentTimeMillis())));
    return toReturn;
}

[Fatal Error] :1:1: Content is not allowed in prolog.
[Fatal Error] :1:1: Content is not allowed in prolog.
[Fatal Error] :1:1: Content is not allowed in prolog.
[Fatal Error] :1:1: Content is not allowed in prolog.

com.amazonaws.AmazonServiceException: Unable to unmarshall error response (There was an internal server error.) (Service: AmazonSQS; Status Code: 500; Error Code: 500 Internal Server Error; Request ID: null)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1073)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:721)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:456)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2291)
at com.amazonaws.services.sqs.AmazonSQSClient.sendMessage(AmazonSQSClient.java:908)
Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:246)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
at com.amazonaws.util.XpathUtils.documentFrom(XpathUtils.java:125)
at com.amazonaws.util.XpathUtils.documentFrom(XpathUtils.java:132)
at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:80)
at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:40)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1041)
... 32 more
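
As a point of comparison: on real SQS, SentTimestamp is a system attribute (epoch milliseconds, set by the service) that ReceiveMessage returns when explicitly requested, not a custom message attribute. A hedged sketch of the read side, reusing the client and getQueueURL() helpers assumed in the snippet above:

import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

// Hypothetical companion to the put(...) method above; "client" and "getQueueURL()"
// are the same assumed helpers as in the reporter's snippet.
public void printMessageAges() {
    ReceiveMessageRequest request = new ReceiveMessageRequest(getQueueURL())
            .withAttributeNames("SentTimestamp"); // request the SQS system attribute

    for (Message message : client.receiveMessage(request).getMessages()) {
        // System attributes come back in getAttributes() as strings holding epoch
        // milliseconds, separately from custom attributes (getMessageAttributes()).
        long sentAt = Long.parseLong(message.getAttributes().get("SentTimestamp"));
        System.out.println("Message age in ms: " + (System.currentTimeMillis() - sentAt));
    }
}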

Can't run SBT build

Running sbt clean test on master or release-0.8.5 fails with:

[error] File name too long
[error] one error found
[error] File name too long
[error] one error found
[error] (elasticmq-rest-sqs/compile:compile) Compilation failed
[error] (elasticmq-core/test:compile) Compilation failed
[error] Total time: 42 s, completed 5-Feb-2015 10:29:42 AM

Full build transcript here: http://pastebin.com/iwGZH1d0

More info:

$ sbt about
[info] Loading project definition from /home/ddossot/dev/scala/libs/elasticmq/project
[info] Set current project to elasticmq-root (in build file:/home/ddossot/dev/scala/libs/elasticmq/)
[warn] Credentials file /home/ddossot/.ivy2/.credentials does not exist
[info] This is sbt 0.13.6
[info] The current project is {file:/home/ddossot/dev/scala/libs/elasticmq/}elasticmq-root 0.8.5
[info] The current project is built against Scala 2.11.4
[info] Available Plugins: sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.CorePlugin, sbt.plugins.JUnitXmlReportPlugin, net.virtualvoid.sbt.graph.Plugin, com.typesafe.sbt.SbtPgp, sbtassembly.Plugin, com.gu.TeamCityTestReporting
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.4

Wrong MessageId returned sometimes

Hi Adam,
I'm running into an issue where, only sometimes, the MessageId from the result of SendMessage is not the same one that gets returned by ReceiveMessage. It seems that SendMessage performs "double the actions" (see log), ReceiveMessage then does the same, and one of the two MessageIds gets returned at random.

A simple script that uses the official AWS SDK for PHP (v3) is pasted at the bottom; it generates a random message, sends it, receives it back, and compares MessageId and MessageBody. The MessageBody always matches, but the MessageId sometimes doesn't.

This is my config:

include classpath("application.conf")

node-address {
    protocol = http
    host = "*"
    port = 80
    context-path = ""
}

rest-sqs {
    enabled = true
    bind-port = 9324
    bind-hostname = "0.0.0.0"
    # Possible values: relaxed, strict
    sqs-limits = strict
}

queues {
    test-123 {}
}

log configuration:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="org.elasticmq" level="TRACE"/>

    <root level="TRACE">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Log:

aws-sqs1 2016-04-08T22:31:43.158011421Z 22:31:43.157 [elasticmq-akka.actor.default-dispatcher-8] DEBUG akka.io.TcpListener - New connection accepted
aws-sqs1 2016-04-08T22:31:43.165876552Z 22:31:43.165 [elasticmq-akka.actor.default-dispatcher-4] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.180708743Z 22:31:43.180 [elasticmq-akka.actor.default-dispatcher-8] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.181264827Z 22:31:43.180 [elasticmq-akka.actor.default-dispatcher-8] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.190740346Z 22:31:43.190 [elasticmq-akka.actor.default-dispatcher-4] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.190948696Z 22:31:43.190 [elasticmq-akka.actor.default-dispatcher-4] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.191659715Z 22:31:43.191 [elasticmq-akka.actor.default-dispatcher-4] DEBUG org.elasticmq.actor.queue.QueueActor - test-123: Sent message with id 620173eb-1e11-4f51-b5a0-f14a31b5f426
aws-sqs1 2016-04-08T22:31:43.194201887Z 22:31:43.191 [elasticmq-akka.actor.default-dispatcher-2] DEBUG org.elasticmq.actor.queue.QueueActor - test-123: Sent message with id b994f26a-b80e-4777-a952-7dde16951d06
aws-sqs1 2016-04-08T22:31:43.209773326Z 22:31:43.209 [elasticmq-akka.actor.default-dispatcher-6] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.210000653Z 22:31:43.209 [elasticmq-akka.actor.default-dispatcher-6] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue test-123, found?: true
aws-sqs1 2016-04-08T22:31:43.210338297Z 22:31:43.210 [elasticmq-akka.actor.default-dispatcher-2] DEBUG org.elasticmq.actor.queue.QueueActor - test-123: Receiving message 620173eb-1e11-4f51-b5a0-f14a31b5f426
aws-sqs1 2016-04-08T22:31:43.213298828Z 22:31:43.212 [elasticmq-akka.actor.default-dispatcher-8] DEBUG org.elasticmq.actor.queue.QueueActor - test-123: Receiving message b994f26a-b80e-4777-a952-7dde16951d06

Simple test program that sometimes succeeds, sometimes fails

<?php

use Aws\Sqs\SqsClient;

class AwsFakeTest extends PHPUnit_Framework_TestCase {

  public function testSQS(){
    $sqs = new SqsClient([
      'version'  => 'latest',
      'region'   => 'us-east-1',
      'endpoint' => 'http://localhost:9324',]);

    $queue = $sqs->getQueueUrl(['QueueName' => 'test-123']);

    // ensure we are operating on clean dataset
    $sqs->purgeQueue(['QueueUrl' => $queue['QueueUrl']]);

    $message_body = 'hello world ' . uniqid();

    $message = $sqs->sendMessage([
      'MessageBody' => $message_body,
      'QueueUrl'    => $queue['QueueUrl']]);

    $this->assertNotEmpty($message['MessageId'],
      'Got back some MessageId when storing new message');

    $receipt = $sqs->receiveMessage([
      'AttributeNames' => ['All'],
      'QueueUrl' => $queue['QueueUrl']]);

    $this->assertEquals($message_body, $receipt['Messages'][0]['Body'],
      'Received same message content');

    $this->assertEquals($message['MessageId'], $receipt['Messages'][0]['MessageId'],
      'Received back same message_id ');
  }
}

The assertion that fails is 'Received back same message_id'. Any suggestions?

Here is a trace from mitmproxy; only one SendMessage is issued, with the following body:

2016-04-08 16:37:09 POST http://10.10.0.110:9324/queue/test-123
                         ← 200 text/plain 579B 26ms
Host:                   10.10.0.110:9324
Proxy-Connection:       Keep-Alive
Content-Type:           application/x-www-form-urlencoded
aws-sdk-invocation-id:  614d4c4662680d30f9ee1b0538c4e492
aws-sdk-retry:          0/0
X-Amz-Date:             20160408T233709Z
Authorization:          AWS4-HMAC-SHA256 Credential=dummy_aws_access_key/20160408/us-east-1/sqs/aws4_request,
                        SignedHeaders=aws-sdk-invocation-id;aws-sdk-retry;host;x-amz-date,
                        Signature=68fa98c4943903dea5186bd1737b435b2e98b0c048f73c27afe6d7a6b4cc7ae9
User-Agent:             aws-sdk-php/3.17.5 GuzzleHttp/6.2.0 curl/7.47.0 PHP/5.6.19
Content-Length:         139
URLEncoded form
Action:      SendMessage
Version:     2012-11-05
MessageBody: hello world 570840a584481
QueueUrl:    http://10.10.0.110:9324/queue/test-123

(By the way, it seems that with the Perl library I mentioned last time this is not a problem.)

Setting the SQSRestServer port to something other than 9324 gives consumer error

Setting the elasticmq port to something other than default:
'SQSRestServerBuilder.withPort(elasticMqPort).withInterface("localhost").start();'

During startup it shows the wrong port in the log:
'15:54:00.102 [main] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address localhost:9325, visible server address http://localhost:9324'

Consumer fails with:
'java.net.ConnectException: Connection refused'

Everything works fine when using the default port 9324.
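
A likely explanation from the log is that only the bind port changes, while the advertised node address keeps its default of localhost:9324, so queue URLs handed to the consumer point at the wrong port. A hedged sketch of aligning the two, assuming the builder exposes a withServerAddress method taking an org.elasticmq.NodeAddress (used here from Java):

import org.elasticmq.NodeAddress;
import org.elasticmq.rest.sqs.SQSRestServer;
import org.elasticmq.rest.sqs.SQSRestServerBuilder;

public class EmbeddedServerOnCustomPort {
    public static void main(String[] args) {
        int elasticMqPort = 9325; // example non-default port

        // Bind on the custom port AND advertise the same address, so that the
        // generated queue URLs (the "visible server address") use the right port.
        SQSRestServer server = SQSRestServerBuilder
                .withPort(elasticMqPort)
                .withInterface("localhost")
                .withServerAddress(new NodeAddress("http", "localhost", elasticMqPort, ""))
                .start();

        // ... run the producer/consumer against http://localhost:9325 ...

        server.stopAndWait();
    }
}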

Unexpected AskTimeoutException

Hello,

I am using elasticmq-server-0.8.8

When I start the process I get a strange error. I made no changes (that I know of) that would have caused this.

09:36:13.382 [main] ERROR org.elasticmq.server.Main$ - Uncaught exception in thread: main
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://elasticmq/user/IO-HTTP#121832015]] after [10000 ms]
        at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:335) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375) ~[elasticmq-server-0.8.8.jar:0.8.8]
        at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_65]
[05/17/2015 09:36:15.320] [elasticmq-akka.actor.default-dispatcher-6] [akka://elasticmq/user/IO-HTTP/listener-0] Bind to localhost:9354 failed
[INFO] [05/17/2015 09:36:15.351] [elasticmq-akka.actor.default-dispatcher-5] [akka://elasticmq/deadLetters] Message [akka.io.Tcp$CommandFailed] from Actor[akka://elasticmq/user/IO-HTTP/listener-0#439419937] to Actor[akka://elasticmq/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [05/17/2015 09:36:15.430] [elasticmq-akka.actor.default-dispatcher-5] [akka://elasticmq/system/IO-TCP/selectors/$a/0] Message [akka.dispatch.sysmsg.DeathWatchNotification] from Actor[akka://elasticmq/system/IO-TCP/selectors/$a/0#2054186458] to Actor[akka://elasticmq/system/IO-TCP/selectors/$a/0#2054186458] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

How can I stop this and what is causing this?

Thank you for your terrific solution and all your hard work!
~Mo

Lack of documentation

Hi!

Thank you for such a good project; it looks promising. But could you please add more (or at least some :)) documentation?

For example:

  1. How to configure replication / clustering
  2. What is the maximum value for the message delay?
  3. Is it possible to specify an exact delivery time (accounting for load and queue size)?

Thanks!

secure=False required?

Hello,

I'm just getting started with ElasticMQ. I'm using boto (the official AWS client library for Python), and I was running into a bunch of errors I didn't understand:

[WARN] [05/06/2015 20:36:37.435] [elasticmq-akka.actor.default-dispatcher-51] [akka://elasticmq/user/IO-HTTP/listener-0/23] Illegal request, responding with status '501 Not Implemented': Unsupported HTTP method

I was able to get it working by setting is_secure=False in boto (it defaults to is_secure=True).

I didn't see anything in the README about secure=False or SSL. Is this expected? Is there some way to get it to work with is_secure=True? If not, could you add a note to the docs somewhere about that?

Thanks!

Content-Type response header should be text/xml

The Content-Type header coming back in responses is: Content-Type: text/plain; charset=UTF-8

This is causing the XML parsing library I'm using to fail, since it expects an XML content type.

Amazon SQS sends back: Content-Type: text/xml

Invalid authorization header

I can't get it to work! I use just the out-of-the-box setup on both sides, but it throws this error message:

[WARN] [10/22/2013 21:57:55.154] [elasticmq-akka.actor.default-dispatcher-32] [akka://elasticmq/user/IO-HTTP/listener-0/53] Illegal request header: Illegal 'Authorization' header: Invalid input '/', expected CTL, ListSep or EOI (line 1, pos 30):
AWS4-HMAC-SHA256 Credential=x/20131022/us-east-1/us-east-1/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=542710720863c9fe4d7ca7794d69eb6b8839af0a031f09ee0b67955fe855253f

Queues created using AWS SDK have incorrect host and port

With com.amazonaws:aws-java-sdk:1.6.8 and org.elasticmq:elasticmq-rest-sqs_2.11:0.8.0, the following test fails, since the resulting queueUrl value is http://localhost:9324:
@Test
public void demoBadQueueUrl() {
    SQSRestServer sqsRestServer = SQSRestServerBuilder.withPort(60000).withInterface("10.39.136.164").start();
    AmazonSQSClient client = new AmazonSQSClient(new BasicAWSCredentials("x", "x"));
    client.setEndpoint("http://10.39.136.164:60000", "sqs", "");
    CreateQueueResult cqres = client.createQueue(new CreateQueueRequest("myqueue"));
    assertEquals("http://10.39.136.164:60000/queue/myqueue", cqres.getQueueUrl());
}

XML parser error

I followed your custom.conf and ran it:
java -Dconfig.file=custom.conf -jar elasticmq-server-0.8.0.jar

I copied the custom conf.

My queue polling is on 0.0.0.0:9324,

but it throws an "XMLParserError".

Would you please explain how to set it up correctly, and how to make it persistent with an SQL database?

SQS short polling not possible in strict mode?

Hi,

it does not seem possible to send a WaitTimeSeconds of 0 when receiving a message from a queue.

In the SQS documentation it states that

Short polling occurs when the WaitTimeSeconds parameter of a ReceiveMessage call is set to 0. This happens in one of two ways – either the ReceiveMessage call sets WaitTimeSeconds to 0, or the ReceiveMessage call doesn’t set WaitTimeSeconds and the queue attribute ReceiveMessageWaitTimeSeconds is 0.

But in ElasticMQ, when in strict mode, the value seems to be enforced to lie between 1 and 20 inclusive. Example: https://github.com/adamw/elasticmq/blob/161eae5f6b828bfb4445d41e25d5cc34430a6614/rest/rest-sqs/src/main/scala/org/elasticmq/rest/sqs/SQSRestServerBuilder.scala#L370

Is this intended behavior?

Some libraries, like spring-cloud-aws, default to a WaitTimeSeconds of 0 on receive, and this cannot easily be changed for some methods. The workaround when using spring-cloud-aws seems to be to disable strict mode in ElasticMQ.
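
For reference, the request such libraries issue is an ordinary short poll; a minimal sketch with the AWS SDK for Java (endpoint, credentials and queue URL are placeholder assumptions):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class ShortPollExample {
    public static void main(String[] args) {
        AmazonSQSClient client = new AmazonSQSClient(new BasicAWSCredentials("x", "x"));
        client.setEndpoint("http://localhost:9324");

        // Short polling as defined by the SQS docs: WaitTimeSeconds = 0.
        // Per the report above, strict mode rejects this (it enforces 1-20),
        // while relaxed mode (rest-sqs.sqs-limits = relaxed) accepts it.
        ReceiveMessageRequest shortPoll =
                new ReceiveMessageRequest("http://localhost:9324/queue/test-123")
                        .withWaitTimeSeconds(0)
                        .withMaxNumberOfMessages(1);

        client.receiveMessage(shortPoll);
    }
}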

Support for Redrive Policy/dead letter queues

Hi!
First of all: Thanks for this amazing work!

It would be nice if ElasticMQ had support for the SQS redrive policy and dead letter queues, e.g. in the following places:

  • getQueueAttributes
  • setQueueAttributes
  • also add support for setting 'maxReceiveCount' and 'deadLetterTargetArn' via the ElasticMQ queue configuration file
  • the corresponding dead letter queue behaviour should be implemented in the 'business logic' accordingly

See aws doc at: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html

Regards, Gunther
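
For reference, on real SQS the redrive policy is a JSON-valued queue attribute; a hedged sketch of setting it with the AWS SDK for Java (the queue URL and dead-letter-queue ARN below are placeholders). Supporting it in ElasticMQ would mean accepting the same attribute:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

import java.util.Collections;

public class RedrivePolicyExample {
    // "client" is any AmazonSQS client; the ARN below is a placeholder for illustration.
    static void configureDeadLetterQueue(AmazonSQS client, String queueUrl) {
        String redrivePolicy =
                "{\"maxReceiveCount\":\"5\","
              + "\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:myqueue-dlq\"}";

        // On SQS, RedrivePolicy is set via SetQueueAttributes like any other queue attribute.
        client.setQueueAttributes(new SetQueueAttributesRequest(
                queueUrl,
                Collections.singletonMap("RedrivePolicy", redrivePolicy)));
    }
}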

java.lang.NoClassDefFoundError: shapeless/HListerAux

elasticmq version: 0.8.8

I'm not sure if this is an issue, and I cannot see anything obviously wrong with my dependencies. I am getting the following exception:

java.lang.NoClassDefFoundError: shapeless/HListerAux$
at org.elasticmq.rest.sqs.directives.ElasticMQDirectives$class.action(ElasticMQDirectives.scala:24)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anon$1.action(SQSRestServerBuilder.scala:89)
at org.elasticmq.rest.sqs.CreateQueueDirectives$class.$init$(CreateQueueDirectives.scala:19)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder$$anon$1.(SQSRestServerBuilder.scala:89)
at org.elasticmq.rest.sqs.TheSQSRestServerBuilder.start(SQSRestServerBuilder.scala:89)

when I use this line in my test:

val server = SQSRestServerBuilder.start()

Below is how the shapeless library is pulled in, via spray-routing-shapeless2_2.11:1.3.3. There is no other mention of shapeless in the dependency tree except here:

[info]  +-io.spray:spray-json_2.11:1.3.2 [S]
[info]  +-io.spray:spray-routing-shapeless2_2.11:1.3.3 [S]
[info]  +-com.chuusai:shapeless_2.11:2.1.0 [S]
[info]  +-io.spray:spray-http_2.11:1.3.3 [S]
[info]  | +-io.spray:spray-util_2.11:1.3.3 [S]
[info]  | +-org.parboiled:parboiled-scala_2.11:1.1.7 [S]
[info]  | +-org.parboiled:parboiled-core:1.1.7

I'm using Spray 1.3.3, Akka 2.3.11, and aws-sdk 1.10.0.

Is this something that anyone has experienced?

Instructions to Run From Source

I am trying to use sbt to run this from source. Within the sbt shell I am running run, but I get:

> run
[warn] Credentials file /Users/alex/.ivy2/.credentials does not exist
java.lang.RuntimeException: No main class detected.
    at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last elasticmq-root/compile:run for the full output.
[error] (elasticmq-root/compile:run) No main class detected.
[error] Total time: 0 s, completed Feb 23, 2015 12:33:18 PM

Alternatively, how do I create a JAR like the one posted in the README?

Error parsing XML with AWS SDK for .NET

I haven't done a wire capture with Wireshark/Fiddler yet, but I'm guessing this is somehow related to the BOM, or has something to do with UTF-8 vs. UTF-16 encoding (or similar).

I'm trying to get a queue url with this code:

string accessKey = "1234", secretKey = "5678";
var name = "Test-EmailNotifications";
var config = new AmazonSQSConfig()
{
    ServiceURL = "http://localhost:9324",
};

using (var client = Amazon.AWSClientFactory.CreateAmazonSQSClient(accessKey, secretKey, config))
{       
  var queueUrl = client.GetQueueUrl(new GetQueueUrlRequest() { QueueName = name })
    .GetQueueUrlResult.QueueUrl;
}

Note that the queue operation doesn't seem to matter here... the parsing of the XML response dies regardless of whether or not I try to create a queue, get a queue URL, write to a queue, etc.

My stack trace has:

 at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
   at Amazon.SQS.AmazonSQSClient.Invoke[T](SQSRequest sqsRequest, IDictionary`2 parameters)
   at Amazon.SQS.AmazonSQSClient.GetQueueUrl(GetQueueUrlRequest request)

GetQueueUrl -> https://github.com/aws/aws-sdk-net/blob/master/AWSSDK/Amazon.SQS/AmazonSQSClient.cs#L458
Invoke -> https://github.com/aws/aws-sdk-net/blob/master/AWSSDK/Amazon.SQS/AmazonSQSClient.cs#L585
Deserialize -> https://github.com/aws/aws-sdk-net/blob/master/AWSSDK/Amazon.SQS/AmazonSQSClient.cs#L650

The error message is

There is an error in XML document (1, 2).

<CreateQueueResponse xmlns='http://queue.amazonaws.com/doc/2009-02-01/'> was not expected.

So, is there any way to tweak the response generated by the server? I suppose this might require some code changes... I'm not sure whether the .NET client sends different headers from the Java client that would change what the server responds with...

Unsupported major.minor version 52.0

Downloading https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.8.10.jar and running

java -jar elasticmq-server-0.8.10.jar

results in the following error

java.lang.UnsupportedClassVersionError: com/typesafe/config/ConfigFactory : Unsupported major.minor version 52.0
    at java.lang.ClassLoader.defineClass1(Native Method) ~[na:1.7.0_75]
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[na:1.7.0_75]
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[na:1.7.0_75]
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[na:1.7.0_75]
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[na:1.7.0_75]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[na:1.7.0_75]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_75]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_75]
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_75]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_75]
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_75]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_75]
    at org.elasticmq.server.Main$.main(Main.scala:15) ~[elasticmq-server-0.8.10.jar:0.8.10]
    at org.elasticmq.server.Main.main(Main.scala) ~[elasticmq-server-0.8.10.jar:0.8.10]

It seems that Typesafe Config only supports Java 8 from version 1.3.0 onwards (lightbend/config#321). I think you'll need to change the README.md file, as it says ElasticMQ is compatible with Java 6 (I'm currently running Java 7).

Long polling for messages with delay doesn't work

Hi,

I've run into an issue where long polling doesn't work for messages with a delay.

Use case:

  1. Create some_test queue
  2. Send message to this queue with DelaySeconds=5
  3. Start long polling with the following parameters: WaitTimeSeconds=12 and Timeout=30

Result:
The receiver only gets the message on the second attempt.
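
The steps above correspond roughly to the following AWS SDK for Java snippet (a hedged sketch; the endpoint and credentials are placeholder assumptions, and the client-side timeout of 30s from step 3 is omitted):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class DelayedLongPollRepro {
    public static void main(String[] args) {
        AmazonSQSClient client = new AmazonSQSClient(new BasicAWSCredentials("x", "x"));
        client.setEndpoint("http://localhost:9324");

        // 1. Create the some_test queue
        String queueUrl = client.createQueue(new CreateQueueRequest("some_test")).getQueueUrl();

        // 2. Send a message with DelaySeconds=5
        client.sendMessage(new SendMessageRequest(queueUrl, "hello").withDelaySeconds(5));

        // 3. Long poll with WaitTimeSeconds=12: the 5s delay expires well within the
        //    wait window, so the first receive should return the message.
        int received = client.receiveMessage(
                        new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(12))
                .getMessages().size();
        System.out.println("Messages received on first poll: " + received);
    }
}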

Logs:

java -Dlogback.configurationFile=my_logback.xml -jar elasticmq-server-0.8.12.jar 
19:34:06.730 [main] INFO  org.elasticmq.server.Main$ - Starting ElasticMQ server (0.8.12) ...
19:34:07.647 [main] INFO  o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
[INFO] [01/13/2016 19:34:08.166] [elasticmq-akka.actor.default-dispatcher-3] [akka://elasticmq/user/IO-HTTP/listener-0] Bound to /0.0.0.0:9324
19:34:08.170 [main] INFO  org.elasticmq.server.Main$ - === ElasticMQ server (0.8.12) started in 1791 ms ===
19:34:45.822 [elasticmq-akka.actor.default-dispatcher-3] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue some_test, found?: false
19:34:45.843 [elasticmq-akka.actor.default-dispatcher-4] INFO  o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(some_test,MillisVisibilityTimeout(30000),PT0S,PT0S,2016-01-13T19:34:45.790+03:00,2016-01-13T19:34:45.819+03:00)
19:34:45.885 [elasticmq-akka.actor.default-dispatcher-3] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue some_test, found?: true
19:34:45.905 [elasticmq-akka.actor.default-dispatcher-4] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue some_test, found?: true
19:34:45.921 [elasticmq-akka.actor.default-dispatcher-4] DEBUG org.elasticmq.actor.queue.QueueActor - some_test: Sent message with id d0022c91-6e04-4ce1-810f-dc751e32015e
19:34:45.933 [elasticmq-akka.actor.default-dispatcher-3] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue some_test, found?: true
19:34:45.940 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - some_test: Awaiting messages: start for sequence 0.
19:34:57.960 [elasticmq-akka.actor.default-dispatcher-8] DEBUG org.elasticmq.actor.queue.QueueActor - some_test: Awaiting messages: sequence 0 timed out. Replying with no messages.
19:34:57.972 [elasticmq-akka.actor.default-dispatcher-3] DEBUG o.elasticmq.actor.QueueManagerActor - Looking up queue some_test, found?: true
19:34:57.975 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - some_test: Receiving message d0022c91-6e04-4ce1-810f-dc751e32015e

Thanks

SQSRestServer exposes ephemeral port number bound

ElasticMQ can be an excellent in-memory SQS replacement for unit tests. For testing, I don't want to specify a specific/static port number; I would rather let the system pick an ephemeral port by passing port number zero to SQSRestServerBuilder. This way, I can avoid potential port number conflicts.

But I would need SQSRestServer to expose the actual ephemeral port bound by the HTTP server listener, so that I can construct the proper endpoint for the SQS client.

Binding failed

Error message:
Binding failed. Switch on DEBUG-level logging for `akka.io.TcpListener` to log the cause.
Stacktrace:
java.lang.RuntimeException: Binding failed. Switch on DEBUG-level logging for `akka.io.TcpListener` to log the cause.
    at spray.routing.SimpleRoutingApp$$anonfun$startServer$1.apply(SimpleRoutingApp.scala:68)
    at spray.routing.SimpleRoutingApp$$anonfun$startServer$1.apply(SimpleRoutingApp.scala:63)
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Standard output:
329453 main  INFO org.elasticmq.rest.sqs.TheSQSRestServerBuilder Started SQS rest server, bind address localhost:56781, visible server address http://localhost:9324
[WARN] [10/07/2015 19:13:31.265] [elasticmq-akka.actor.default-dispatcher-2] [akka://elasticmq/user/IO-HTTP/listener-0] Bind to localhost/127.0.0.1:56781 failed, timeout 1 second expired
[INFO] [10/07/2015 19:13:31.298] [elasticmq-akka.actor.default-dispatcher-4] [akka://elasticmq/user/IO-HTTP/listener-0] Message [akka.io.Tcp$Bound] from Actor[akka://elasticmq/system/IO-TCP/selectors/$a/0#-260055602] to Actor[akka://elasticmq/user/IO-HTTP/listener-0#-1846361151] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

Support QueueArn attribute

It would be great if each queue could have an associated QueueArn attribute: that would make ElasticMQ behave more like SQS, allowing code to run the same way whether using one or the other as a back-end.
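
For context, clients typically read this attribute via GetQueueAttributes; a minimal sketch with the AWS SDK for Java (the client and queue URL are assumed to exist):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;

public class QueueArnExample {
    // "client" is any AmazonSQS client; the queue URL is a placeholder argument.
    static String queueArn(AmazonSQS client, String queueUrl) {
        return client.getQueueAttributes(
                        new GetQueueAttributesRequest(queueUrl).withAttributeNames("QueueArn"))
                .getAttributes()
                .get("QueueArn");
    }
}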

HTTPS support?

Does ElasticMQ support HTTPS? I'm trying the following configuration, but the server only understands me when I make a plain HTTP connection to port 443. Is there a setting I've missed?

node-address {
    protocol = https
    host = sqs.us-west-2.amazonaws.com
    port = 443
    context-path = ""
}

rest-sqs {
    enabled = true
    bind-port = 443
    bind-hostname = "0.0.0.0"
    // Possible values: relaxed, strict
    sqs-limits = relaxed
}

spray.can.server.parsing.illegal-header-warnings = off
