
wascally's Introduction

Wascally

No longer under active development. Please see rabbot for a similar API with more features and better failure semantics.


This is a very opinionated abstraction over amqplib to help simplify certain common tasks and (hopefully) reduce the effort required to use RabbitMQ in your Node services.

Features:

  • Gracefully handle re-connections
  • Automatically re-define all topology on re-connection
  • Automatically re-send any unconfirmed messages on re-connection
  • Support the majority of RabbitMQ's extensions
  • Handle batching of acknowledgements and rejections
  • Topology & configuration via the JSON configuration method (thanks to @JohnDMathis!)

Assumptions & Defaults:

  • Fault-tolerance/resilience over throughput
  • Default to publish confirmation
  • Default to ack mode on consumers
  • Heterogeneous services that include statically typed languages
  • JSON as the only serialization provider

Demos

API Reference

This library implements promises for many of the calls via when.js.

Connectivity Events

Wascally emits both generic and specific connectivity events that you can bind to in order to handle various states:

  • Any Connection
      • connected
      • closed
      • failed
  • Specific Connection
      • [connectionName].connected.opened
      • [connectionName].connected.closed
      • [connectionName].connected.failed

The connection object is passed to the event handler for each event. Use the name property of the connection object to determine which connection the generic events fired for.

!IMPORTANT! - wascally handles connectivity for you; mucking about with the connection directly isn't supported (don't do it).

Sending & Receiving Messages

Publish

The publish call returns a promise that is only resolved once the broker has accepted responsibility for the message (see Publisher Acknowledgments for more details). If a configured timeout is reached, or in the rare event that the broker rejects the message, the promise will be rejected. More commonly, the connection to the broker could be lost before the message is confirmed and you end up with a message in "limbo". Wascally keeps a list of unconfirmed messages that have been published in memory only. Once a connection is re-established and the topology is in place, Wascally will prioritize re-sending these messages before sending anything else.

In the event of a disconnect, all publish promises that have not been resolved are rejected. This behavior is a problematic over-simplification and subject to change in a future release.

Publish timeouts can be set per message, per exchange, or per connection. The most specific value overrides any set at a higher level, and there are no default timeouts at any level. The timer starts as soon as publish is called and is cancelled once wascally is able to make the publish call on the actual exchange's channel; time spent waiting on a confirmation after that point will not cause the promise to be rejected.
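The precedence rule can be made concrete with a small helper; resolvePublishTimeout is a hypothetical function shown only to illustrate the override order, not part of wascally's API.

```javascript
// Most specific wins: message-level, then exchange-level, then connection-level.
// Returns undefined when no timeout is configured anywhere (wascally sets no default).
function resolvePublishTimeout( messageOpts, exchangeDef, connectionDef ) {
	if ( messageOpts && messageOpts.timeout !== undefined ) {
		return messageOpts.timeout;
	}
	if ( exchangeDef && exchangeDef.publishTimeout !== undefined ) {
		return exchangeDef.publishTimeout;
	}
	if ( connectionDef && connectionDef.publishTimeout !== undefined ) {
		return connectionDef.publishTimeout;
	}
	return undefined;
}

var perMessage = resolvePublishTimeout( { timeout: 250 }, { publishTimeout: 1000 }, { publishTimeout: 5000 } ); // 250
var perExchange = resolvePublishTimeout( {}, { publishTimeout: 1000 }, { publishTimeout: 5000 } ); // 1000
var noDefault = resolvePublishTimeout( {}, {}, {} ); // undefined
```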

publish( exchangeName, options, [connectionName] )

This syntax uses an options object rather than positional arguments; here's an example showing all of the available properties:

rabbit.publish( 'exchange.name', {
		routingKey: 'hi',
		type: 'company.project.messages.textMessage',
		correlationId: 'one',
		body: { text: 'hello!' },
		messageId: '100',
		expiresAfter: 1000, // TTL in ms, in this example 1 second
		timestamp: Date.now(), // posix timestamp (long)
		headers: {
			'random': 'application specific value'
		},
		timeout: 1000 // ms to wait before cancelling the publish and rejecting the promise
	},
	'default' // optional connectionName, only needed if publishing on a non-default connection
);

publish( exchangeName, typeName, messageBody, [routingKey], [correlationId], [connectionName] )

Message bodies are simple objects. A type specifier is required for the message and will be used to set AMQP's properties.type. If no routing key is provided, the type specifier is used as the routing key. A routing key of '' will prevent the type specifier from being used.

// the first 3 arguments are required
// routing key is optional and defaults to the value of typeName
// connectionName is only needed if you have multiple connections to different servers or vhosts

rabbit.publish( 'log.entries', 'company.project.messages.logEntry', {
		date: Date.now(),
		level: logLevel,
		message: message
	}, 'log.' + logLevel, someValueToCorrelateBy );

request( exchangeName, options, [connectionName] )

This works just like a publish except that the promise returned provides the response (or responses) from the other side.

// when multiple responses are provided, all but the last will be provided via the .progress callback.
// the last/only reply will always be provided to the .then callback
rabbit.request( 'request.exchange', {
		// see publish example to see options for the outgoing message
	} )
	.progress( function( reply ) {
		// if multiple replies are provided, all but the last will be sent via the progress callback
	} )
	.then( function( final ) {
		// the last message in a series OR the only reply will be sent to this callback
	} );
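The reply-routing rule above (all but the last reply via .progress, the last via .then) can be shown with a tiny sketch; splitReplies is a hypothetical helper that only demonstrates how a series of replies is divided, not how wascally transports them.

```javascript
// Hypothetical illustration of how wascally routes a series of replies:
// every reply except the last goes to .progress; the last resolves .then.
function splitReplies( replies ) {
	return {
		progress: replies.slice( 0, -1 ), // delivered via the .progress callback
		final: replies[ replies.length - 1 ] // delivered via the .then callback
	};
}

var routed = splitReplies( [ 'part1', 'part2', 'done' ] );
// routed.progress -> [ 'part1', 'part2' ]; routed.final -> 'done'
```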

handle( typeName, handler, [context] )

Notes:

  • Handle calls should happen before starting subscriptions.
  • The message's routing key will be used if the type is missing or empty on incoming messages

Message handlers are registered to handle a message based on the typeName. Calling handle will return a reference to the handler that can later be removed. The message passed to the handler is the raw Rabbit payload; the body property contains the published message body. The message has ack, nack (requeue the message) and reject (don't requeue the message) methods that control what Rabbit does with the message.
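A minimal sketch of the dispatch behavior described above, assuming a plain object keyed by typeName with a routing-key fallback (handle and dispatch here are simplified stand-ins, not wascally's internals):

```javascript
// Handlers are keyed by typeName; the routing key stands in when the
// published type is missing or empty (per the note above).
var handlers = {};

function handle( typeName, handler ) {
	handlers[ typeName ] = handler;
	return { remove: function() { delete handlers[ typeName ]; } };
}

function dispatch( message ) {
	var key = message.type || message.fields.routingKey;
	var handler = handlers[ key ];
	if ( handler ) {
		handler( message );
	}
	return Boolean( handler );
}

var seen = [];
handle( 'company.project.messages.logEntry', function( m ) { seen.push( m.body.text ); } );

// A typed message is matched directly; an untyped one falls back to the routing key.
dispatch( { type: 'company.project.messages.logEntry', fields: { routingKey: 'x' }, body: { text: 'typed' } } );
dispatch( { type: '', fields: { routingKey: 'company.project.messages.logEntry' }, body: { text: 'untyped' } } );
```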

Explicit Error Handling

In this example, any possible error is caught in an explicit try/catch:

var handler = rabbit.handle( 'company.project.messages.logEntry', function( message ) {
	try {
		// do something meaningful?
		console.log( message.body );
		message.ack();
	} catch( err ) {
		message.nack();
	}
} );

handler.remove();

Automatically Nack On Error

This example shows how to have wascally wrap all handlers in a try/catch that:

  • nacks the message on error
  • logs to the console that an error occurred in a handler

// after this call, any new callbacks attached via handle will be wrapped in a try/catch
// that nacks the message on an error
rabbit.nackOnError();

var handler = rabbit.handle( 'company.project.messages.logEntry', function( message ) {
	console.log( message.body );
	message.ack();
} );

handler.remove();

// after this call, new callbacks attached via handle will *not* be wrapped in a try/catch
rabbit.ignoreHandlerErrors();

Late-bound Error Handling

Provide a strategy for handling errors to multiple handles or attach an error handler after the fact.

var handler = rabbit.handle( 'company.project.messages.logEntry', function( message ) {
	console.log( message.body );
	message.ack();
} );

handler.catch( function( err, msg ) {
	// do something with the error & message
	msg.nack();
} );

!!! IMPORTANT !!!

Failure to handle errors will result in silent failures and lost messages.

Unhandled Messages

In previous versions, if a subscription was started in ack mode (the default) without a handler to process the message, the message would get lost in limbo until the connection (or channel) was closed, at which point the messages would be returned to the queue. This is very confusing and undesirable behavior. To help protect against this, the new default behavior is that any message received that doesn't have any eligible handlers will get nack'd and sent back to the queue immediately.

This is still problematic because it can create churn on the client and server as the message will be redelivered indefinitely.

To change this behavior, use one of the following calls:

Note: only one of these strategies can be activated at a time

onUnhandled( handler )

rabbit.onUnhandled( function( message ) {
	 // handle the message here
} );

nackUnhandled() - default

Sends all unhandled messages back to the queue.

rabbit.nackUnhandled();

rejectUnhandled()

Rejects unhandled messages so that they will not be requeued. DO NOT use this unless there are dead letter exchanges for all queues.

rabbit.rejectUnhandled();
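The relationship between the three strategies can be sketched as a single mutable strategy slot; the fake message and strategy functions below are illustrative, not wascally's implementation.

```javascript
// Only one strategy is active at a time; the default mirrors nackUnhandled.
var unhandledStrategy = function( message ) { message.nack(); };

function onUnhandled( handler ) { unhandledStrategy = handler; }
function nackUnhandled() { unhandledStrategy = function( m ) { m.nack(); }; }
function rejectUnhandled() { unhandledStrategy = function( m ) { m.reject(); }; }

var outcomes = [];
var fakeMessage = {
	nack: function() { outcomes.push( 'nack' ); },
	reject: function() { outcomes.push( 'reject' ); }
};

unhandledStrategy( fakeMessage ); // default: message goes back to the queue
rejectUnhandled();
unhandledStrategy( fakeMessage ); // now: message is dropped (or dead-lettered)
```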

startSubscription( queueName, [connectionName] )

Recommendation: set up handlers for anticipated types before starting subscriptions.

Starts a consumer on the queue specified. connectionName is optional and only required if subscribing to a queue on a connection other than the default one.

Message Format

The following structure shows and briefly explains the format of the message that is passed to the handle callback:

{
	// metadata specific to routing & delivery
	fields: {
		consumerTag: "", // identifies the consumer to rabbit
		deliveryTag: #, // identifies the message delivered for rabbit
		redelivered: true|false, // indicates if the message was previously nacked or returned to the queue
		exchange: "", // name of the exchange the message was published to
		routingKey: "" // the routing key (if any) used when published
	},
	properties:{
		contentType: "application/json", // wascally's default
		contentEncoding: "utf8", // wascally's default
		headers: {}, // any user provided headers
		correlationId: "", // the correlation id if provided
		replyTo: "", // the reply queue would go here
		messageId: "", // message id if provided
		type: "", // the type of the message published
		appId: "" // not used by wascally
	},
	content: { "type": "Buffer", "data": [ ... ] }, // raw buffer of message body
	body: {}, // this could be an object, string, etc - whatever was published
	type: "" // this also contains the type of the message published
}
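With wascally's defaults (application/json and utf8), the body is just the parsed JSON of the content buffer. This sketch shows the relationship using a hand-built raw message; the shape mirrors the structure above.

```javascript
// Hypothetical raw delivery, shaped like the structure above.
var raw = {
	properties: { contentType: 'application/json', contentEncoding: 'utf8' },
	content: Buffer.from( JSON.stringify( { text: 'hello!' } ), 'utf8' )
};

// body is derived from content using the declared encoding and content type.
var body = JSON.parse( raw.content.toString( raw.properties.contentEncoding ) );
// body -> { text: 'hello!' }
```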

Message API

Wascally defaults to (and assumes) queues are in ack mode. It batches ack and nack operations in order to improve total throughput. Ack/Nack calls do not take effect immediately.
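The deferred effect of ack/nack can be pictured as a pending queue that only reaches the channel on flush. This is an illustrative sketch, not wascally's actual batching code.

```javascript
// Operations are queued per delivery tag and only hit the channel on flush,
// which is why ack/nack calls do not take effect immediately.
var pending = [];
var applied = [];

function enqueue( op, deliveryTag ) {
	pending.push( { op: op, deliveryTag: deliveryTag } );
}

function flush() {
	pending.splice( 0 ).forEach( function( entry ) {
		applied.push( entry.op + ':' + entry.deliveryTag );
	} );
}

enqueue( 'ack', 1 );
enqueue( 'ack', 2 );
enqueue( 'nack', 3 );
// nothing has reached the channel yet; applied is still empty
flush();
// applied -> [ 'ack:1', 'ack:2', 'nack:3' ]
```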

message.ack()

Enqueues the message for acknowledgement.

message.nack()

Enqueues the message for rejection with requeue; the broker will return the message to the queue.

message.reject()

Rejects the message without re-queueing it. Please use with caution and consider having a dead-letter-exchange assigned to the queue before using this feature.

message.reply( message, [more], [replyType] )

Acknowledges the message and sends a reply back to the requestor. The message argument is only the body of the reply. Providing true for more will cause the reply to be sent to the .progress callback of the request promise so that you can send multiple replies. The replyType argument sets the type of the reply message (important when messaging with statically typed languages).

Queues in noBatch mode

Wascally now supports the ability to put queues into non-batching behavior. This causes ack, nack and reject calls to take place against the channel immediately. This feature is ideal when message processing is long-running and consumer limits are in place. Be aware that this feature does have a significant impact on message throughput.

Reply Queues

By default, wascally creates a unique reply queue for each connection which is automatically subscribed to and deleted on connection close. This can be modified or turned off altogether.

Changing the behavior is done by passing one of three values to the replyQueue property on the connection hash:

!!! IMPORTANT !!! wascally cannot prevent queue naming collisions across service instances or connections when using the first two options.

Custom Name

Only changes the name of the reply queue that wascally creates - autoDelete and subscribe will be set to true.

rabbit.addConnection( {
	// ...
	replyQueue: 'myOwnQueue'
} );

Custom Behavior

To take full control of the queue name and behavior, provide a queue definition in place of the name.

wascally provides no defaults - it will only use the definition provided

rabbit.addConnection( {
	// ...
	replyQueue: {
		name: 'myOwnQueue',
		subscribe: true,
		durable: true
	}
} );

No Automatic Reply Queue

Only pick this option if request/response isn't in use or when providing a custom overall strategy

rabbit.addConnection( {
	// ...
	replyQueue: false
} );

Managing Connections

addConnection ( options )

The call returns a promise that can be used to determine when the connection to the server has been established.

Options is a hash that can contain the following:

  • name String the name of this connection. Defaults to "default" when not supplied.
  • server String the IP address or DNS name of the RabbitMQ server. Defaults to "localhost"
  • port String the TCP/IP port on which RabbitMQ is listening. Defaults to 5672
  • vhost String the named vhost to use in RabbitMQ. Defaults to the root vhost, '%2f' ("/")
  • protocol String the connection protocol to use. Defaults to 'amqp://'
  • user String the username used for authentication / authorization with this connection. Defaults to 'guest'
  • pass String the password for the specified user. Defaults to 'guest'
  • timeout number how long to wait for a connection to be established. No default value
  • heartbeat number how often the client and server check to see if they can still reach each other, specified in seconds. Defaults to 30 (seconds)
  • replyQueue String the name of the reply queue to use (see above)
  • publishTimeout number the default timeout in milliseconds for a publish call

Note that the "default" connection (by name) is used when any method is called without a connection name supplied.

rabbit.addConnection( {
	user: 'someUser',
	pass: 'sup3rs3cr3t',
	server: 'my-rqm.server',
	port: 5672,
	timeout: 2000,
	vhost: '%2f',
	heartbeat: 10
} );
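The options above map onto an AMQP URI roughly as follows. buildUri is a hypothetical illustration of the mapping and the documented defaults, not wascally's internal function; note that the protocol value includes the '://' suffix.

```javascript
// Illustrative mapping from the connection hash to an AMQP URI, using the
// documented defaults for any missing fields.
function buildUri( opts ) {
	var protocol = opts.protocol || 'amqp://';
	var user = opts.user || 'guest';
	var pass = opts.pass || 'guest';
	var server = opts.server || 'localhost';
	var port = opts.port || 5672;
	var vhost = opts.vhost || '%2f';
	return protocol + user + ':' + pass + '@' + server + ':' + port + '/' + vhost;
}

var uri = buildUri( {
	user: 'someUser',
	pass: 'sup3rs3cr3t',
	server: 'my-rqm.server',
	port: 5672,
	vhost: '%2f'
} );
// uri -> 'amqp://someUser:sup3rs3cr3t@my-rqm.server:5672/%2f'
```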

Managing Topology

addExchange( exchangeName, exchangeType, [options], [connectionName] )

The call returns a promise that can be used to determine when the exchange has been created on the server.

Valid exchangeTypes:

  • 'direct'
  • 'fanout'
  • 'topic'

Options is a hash that can contain the following:

  • autoDelete true|false delete when consumer count goes to 0
  • durable true|false survive broker restarts
  • persistent true|false a.k.a. persistent delivery, messages saved to disk
  • alternate 'alt.exchange' define an alternate exchange
  • publishTimeout 2^32 timeout in milliseconds for publish calls to this exchange

addQueue( queueName, [options], [connectionName] )

The call returns a promise that can be used to determine when the queue has been created on the server.

Options is a hash that can contain the following:

  • autoDelete true|false delete when consumer count goes to 0
  • durable true|false survive broker restarts
  • exclusive true|false limits queue to the current connection only (danger)
  • subscribe true|false auto-start the subscription
  • limit 2^16 max number of unacked messages allowed for consumer
  • noAck true|false the server will remove messages from the queue as soon as they are delivered
  • noBatch true|false causes ack, nack & reject to take place immediately
  • queueLimit 2^32 max number of ready messages a queue can hold
  • messageTtl 2^32 time in ms before a message expires on the queue
  • expires 2^32 time in ms before a queue with 0 consumers expires
  • deadLetter 'dlx.exchange' the exchange to dead-letter messages to
  • maxPriority 2^8 the highest priority this queue supports

bindExchange( sourceExchange, targetExchange, [routingKeys], [connectionName] )

Binds the target exchange to the source exchange. Messages flow from source to target.

bindQueue( sourceExchange, targetQueue, [routingKeys], [connectionName] )

Binds the target queue to the source exchange. Messages flow from source to target.
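For topic exchanges, the routingKeys in a binding follow RabbitMQ's matching rules: '*' matches exactly one dot-delimited word and '#' matches zero or more words. The sketch below shows that behavior; matchTopic is a hypothetical helper, not part of wascally.

```javascript
// Word-by-word topic matching: '*' = exactly one word, '#' = zero or more words.
function matchTopic( pattern, key ) {
	var p = pattern.split( '.' );
	var k = key.split( '.' );
	function match( i, j ) {
		if ( i === p.length ) { return j === k.length; }
		if ( p[ i ] === '#' ) {
			// '#' either matches nothing or consumes one more word
			return match( i + 1, j ) || ( j < k.length && match( i, j + 1 ) );
		}
		if ( j === k.length ) { return false; }
		return ( p[ i ] === '*' || p[ i ] === k[ j ] ) && match( i + 1, j + 1 );
	}
	return match( 0, 0 );
}

var results = [
	matchTopic( 'log.*', 'log.info' ), // true: '*' matches one word
	matchTopic( 'log.*', 'log.info.db' ), // false: '*' matches exactly one word
	matchTopic( 'log.#', 'log.info.db' ), // true: '#' matches any remaining words
	matchTopic( 'audit.#', 'log.info' ) // false: first word differs
];
```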

Configuration via JSON

Note: setting subscribe to true will result in subscriptions starting immediately upon queue creation.

This example shows most of the available options described above.

	var settings = {
		connection: {
			user: 'guest',
			pass: 'guest',
			server: '127.0.0.1',
			// server: '127.0.0.1, 194.66.82.11',
			// server: ['127.0.0.1', '194.66.82.11'],
			port: 5672,
			timeout: 2000,
			vhost: '%2fmyhost'
			},
		exchanges:[
			{ name: 'config-ex.1', type: 'fanout', publishTimeout: 1000 },
			{ name: 'config-ex.2', type: 'topic', alternate: 'alternate-ex.2', persistent: true },
			{ name: 'dead-letter-ex.2', type: 'fanout' }
			],
		queues:[
			{ name:'config-q.1', limit: 100, queueLimit: 1000 },
			{ name:'config-q.2', subscribe: true, deadLetter: 'dead-letter-ex.2' }
			],
		bindings:[
			{ exchange: 'config-ex.1', target: 'config-q.1', keys: [ 'bob','fred' ] },
			{ exchange: 'config-ex.2', target: 'config-q.2', keys: 'test1' }
		]
	};

To establish a connection with all settings in place and ready to go call configure:

	var rabbit = require( 'wascally' );

	rabbit.configure( settings ).done( function() {
		// ready to go!
	} );

Closing Connections

Wascally will attempt to resolve all outstanding publishes and received messages (ack/nack/reject) before closing the channels and connection. If you would like to defer certain actions until after everything has been safely resolved, then use the promise returned from either close call.

!!! CAUTION !!! - using reset is dangerous. All topology associated with the connection will be removed, meaning wascally will not be able to re-establish it all should you decide to reconnect.

close( [connectionName], [reset] )

Closes the connection, optionally resetting all previously defined topology for the connection. The connectionName defaults to default if one is not provided.

closeAll( [reset] )

Closes all connections, optionally resetting the topology for all of them.

AMQPS, SSL/TLS Support

Providing the following configuration options or setting the related environment variables will cause wascally to attempt connecting via AMQPS. For more details about which settings perform what role, refer to amqplib's page on SSL.

	connection: { 		// sample connection hash
		caPath: '', 	// comma delimited paths to CA files. RABBIT_CA
		certPath: '', 	// path to cert file. RABBIT_CERT
		keyPath: '',	// path to key file. RABBIT_KEY
		passphrase: '', // passphrase associated with cert/pfx. RABBIT_PASSPHRASE
		pfxPath: ''		// path to pfx file. RABBIT_PFX
	}

Channel Prefetch Limits

Wascally mostly hides the notion of a channel behind the scenes, but still allows you to specify channel options such as the channel prefetch limit. Rather than specifying this on a channel object, however, it is specified as a limit on a queue definition.

queues: [{
  // ...

  limit: 5
}]

// or

rabbit.addQueue("some.q", {
  // ...

  limit: 5
});

This queue configuration will set a prefetch limit of 5 on the channel that is used for consuming this queue.

Note: The queue limit is not the same as the queueLimit option - the latter of which sets the maximum number of messages allowed in the queue.

Additional Learning Resources

Watch Me Code

Thanks to Derick Bailey's input, the API and documentation for wascally have improved a lot. You can learn from Derick's hands-on experience in his Watch Me Code series.

RabbitMQ In Action

Alvaro Videla and Jason Williams literally wrote the book on RabbitMQ.

Enterprise Integration Patterns

Gregor Hohpe and Bobby Woolf's definitive work on messaging. The site provides basic descriptions of the patterns and the book goes into a lot of detail.

I can't recommend this book highly enough; understanding the patterns will provide you with the conceptual tools needed to be successful.

Contributing

PRs with insufficient coverage, broken tests or deviation from the style will not be accepted.

Behavior & Integration Tests

PRs should include modified or additional test coverage in both integration and behavioral specs. Integration tests assume RabbitMQ is running on localhost with guest/guest credentials and the consistent hash exchange plugin enabled. You can enable the plugin with the following command:

rabbitmq-plugins enable rabbitmq_consistent_hash_exchange

Running gulp will run both sets after every file change and display a coverage summary. To view a detailed report, run gulp coverage once to bring up the browser.

Vagrant

Wascally now provides a sample Vagrantfile that will set up a virtual machine that runs RabbitMQ. Under the hood, it uses the official RabbitMQ Docker image. It will forward RabbitMQ's default ports to localhost.

First, you will need to copy the sample file to a usable file:

$ cp Vagrantfile.sample Vagrantfile

Adjust any necessary settings. Then, from the root of the project, run:

$ vagrant up

This will create your box. Right now, it supports the virtualbox and vmware_fusion providers. To access the box, run:

$ vagrant ssh

Once inside, you can view the RabbitMQ logs by executing:

$ docker logs rabbitmq

When the Vagrant box is running, RabbitMQ can be accessed at localhost:5672 and the management console at http://localhost:15672.

Click here for more information on Vagrant, Docker, and official RabbitMQ Docker image.

To run tests using Vagrant:

Execute from the host machine:

$ vagrant up
$ gulp

Style

This project has both an .editorconfig and .esformatter file to help keep adherence to style simple. Please also take advantage of the .jshintrc file and avoid linter warnings.

Roadmap

  • additional test coverage
  • support RabbitMQ backpressure mechanisms
  • (configurable) limits & behavior when publishing during connectivity issues
  • ability to capture/log unpublished messages on shutdown
  • add support for Rabbit's HTTP API
  • enable better cluster utilization by spreading connections out over all nodes in cluster

wascally's People

Contributors

arobson, chrisgundersen, dcneiner, dvideby0, esatterwhite, ifandelse, joseph-onsip, leankitscott, openam, secretfader


wascally's Issues

Basic Demo for Simple Worker Queue

I need a basic worker queue, similar to Sidekiq or Kue. The publisher doesn't need to receive responses from the worker, so this should be a simple unidirectional queue pattern, and yet I can't see how to implement it with Wascally.

Any tips?

Is the rabbitmq_consistent_hash_exchange plugin required?

If it's required then it would be useful to have a note about adding the plugin to the server.

rabbitmq-plugins enable rabbitmq_consistent_hash_exchange

If it's not required then mentioning it in the tests section would be helpful for anyone trying out the tests.

remove specific listener to listened emitted event

I added listeners to wascally connections to listen for failure and connection scenarios after rabbit.configure. I also need to remove specific listeners for some emitted events. Is there a way to do that in wascally?

wascally.connections.default.connection.once( 'failed', function () {
   var error = new Error( 'Failed to connect to the RabbitMQ Server' );
   throw error;
} );

Adding handlers after connection / subscription has been established

Per conversation in #17 and my own experiencing of this issue, I wanted to open this as a bug report instead of just the conversation from that other ticket.

I ran into this scenario today, where I have my rabbus library auto-subscribing to queues. If there are messages in the queue, the auto-subscription will not deliver them to my handlers. The subscriber to the queue is set up correctly, and I see the message go into un-ack'd mode in the queue. However, the messages are never delivered to my handler.

The work-around is to not use the subscribe: true option for queue definitions. Instead, use the manual rabbit.startSubscription method call, after the handlers are in place.

rabbit.handle("message.type", myHandlerFn);
rabbit.startSubscription(queueName);

As a work-around, this is fine. When my code connects to the queue, and there are existing messages in the queue, the messages are picked up and handled.

As a long-term fix, @arobson identified some potential problems in ticket #17:

@derickbailey - the problem you're running into is that handlers should be defined before the connection/subscription is created. The gist is: the connection and the subscription to the queue are established before the handler is registered, so wascally can't do anything with the message. Once the connection is broken, the message goes back to the queue.

The simplest solution
I will update the README to explain that handlers should be defined before establishing a connection.

Long term fix
Right now, the lib doesn't really do anything to help you out or even let you know if there's a problem with how things are being configured. I need to think through what it should do when issues like this come up. There are a lot of options, but none of them are great. Holding on to unhandled messages waiting for a handler could lead to a memory overflow. Rejecting them without a handler could cause an endless loop - chewing up the server and causing order-scrambling in the queue. Dumping them to a log means nothing else will get the message and could also lead to disk space issues. The only two safe things I can think of are console.log a warning or throw an exception.

Given the problems that could occur, as noted here, I would suggest the simplest fix possible: remove the subscribe: true option entirely. Force people to manually call the rabbit.startSubscription method and document that this must be done after the handlers are in place.

wascally wabbit on redhat losing consumers?

have you seen wascally drop consumers, randomly?

i've got a situation that i can almost reproduce on command, in RedHat Linux 7 (64bit) where my consumers drop.

is there an event or something in wascally that i can latch on to, when this happens, so i can get notified?

issue with queue `limit` options?

I'm running in to an issue where a subscriber with a limit: ## on a queue is stopping work after a few seconds... and i don't know why.

here's the scenario: i had some busted code in SignalLeaf, and it racked up 56,000 messages in a queue... yeah. busted code, plus a few days of time :)

so i fixed the code and put in a queue limit of 25 (instead of the unlimited, causing crashes from eating all available memory) and things worked for a few seconds, but then stopped. it would process a couple hundred messages, and then wouldn't process anymore.

the crazy part is, no messages were stuck in unacknowledged status. the code just stopped working. it would no longer pull in new messages.

drop the limit back down to 1 and it works fine. it takes a lot longer to process them, of course, since it's limited to 1 at a time. but at least 1 doesn't stop working.

it's not just "25" either... i tried a limit of 10, and same thing happened. processes a few batches, and then stops doing any work. i even tried a limit of 2, and it stops after a few rounds of processing.

it only seems to work with either a limit of 1, or unlimited.

any idea what this may be? have you seen wascally stop processing messages with a limit, like this?

issues w/ ack in "direct" exchanges

I'm using a fairly simple configuration for a direct exchange, using a sample routing key

rabbit.configure({
  connection: { ... },

  queues: [{name: "some-q"}],

  exchanges: [{name: "some-ex", type: "direct"}],

  bindings: [
    {exchange: "some-ex", target: "some-q", keys: ["foo"]}
  ]
});

when I publish a message to some-ex with a routingKey of "foo", it publishes... but if I don't put a ridiculously long setTimeout around the message processing, the message will never be acknowledged in the queue.

essentially, I have to do this:

rabbit.handle("some-message-type", function(msg){

  msg.ack();

  setTimeout(function(){
    myCallBack(msg.body);
  }, 500);

});

if I don't have that 500ms delay between acknowledging the message and doing the work, the message will never be acknowledged and will be re-delivered to the queue.

i don't have this problem with fanout or topic exchanges... only direct exchanges

one thing i've noticed, which may be me missing something or may be a symptom of this... there don't appear to be any specs that use direct exchanges.

are you guys doing any work w/ direct exchanges? are you seeing anything similar? can you reproduce the problem, as I seem to be able to do?

halp?

is it possible to limit request handler to 1 thing at a time?

I've got a basic request / response setup, based on what i found in the specs... it generally works as expected. i am able to make a request and send a reply.

in my specific scenario, though, i need to limit the work being done by the request handler, to 1 thing at a time. i've been trying to find a "prefetch" equivalent, which is what I've done in the past, to limit the number of items being worked on. i'm not seeing that option, though, so I'm wondering if there is another way limit a handle to doing one thing at a time.

basically, I want to do this:

  1. set up a handler and listen for a request
  2. receive a request, and stop the handler from accepting new requests
  3. process the request - which can take a very long time, involving database, etc.
  4. send the reply back
  5. go to step 1

I tried doing something like this, as an example:

function handleMessage(){

  var handler = rabbit.handle("config.test.message", function( msg ) {
    handler.remove();

    setTimeout(function(){
      var reply = {
        foo: "this is a reply!",
        originalId: msg.properties.messageId
      };
      msg.reply(reply);
      handleMessage();
    }, 1500);

  });
}

this sort of works...

if I send a single message to the queue, this code picks it up and handles it, then waits for a new message to process. if i send another message, it picks it up, handles it, waits... and on and on

but if i have a client send 2 messages to the queue rapidly (as fast as the node client can send them), then the server never picks up the second message. it gets the first message, and it handles it. it then re-adds the handler and looks for another message... but the message that is already sitting in the queue is never picked up.

I'm probably doing something dumb / wrong in my configuration and/or code, here... any ideas on how to get my scenario working, and/or a better way to do this?

SSL Uri not built correctly

While attempting to make an SSL connection to RabbitMQ, the following error occurred:

Error: Failed to create queue 'test' on connection 'default' with 'Expected amqp: or amqps: as the protocol; got amqpstestuser:'

where the queue is 'test' and the user name is 'testuser'. The configuration settings are as described in the docs.

(btw, thanks for the cool module)

Clarification of "message type" in handlers

First, thank you for this library!

I have a question about how the handlers are able to handle messages based on the type property.

Calling handle returns a subscription:

var subscription = rabbit.handle('published', function (message) {
        console.log(JSON.stringify(message.body));
    });

The first parameter of handle is referred to as message type. In the subscription returned, I see that the subscription's topic field is set to the value that was passed for type. This appears confusing, though this could be because I do not fully understand the nomenclature.

I eventually discovered that no matter what I chose to use for message type, none of the handlers would pick up messages from the queues. In the RabbitMQ web console, I could see messages being delivered, but getting immediately nacked.

I tried to use the # as a message type (presuming a confusion with topic matching) AND I've tried to use the onUnhandled method. Still, I have been unable to handle messages.

Debugging further, I discovered that the messages on my queues do not have a type field specified. In the src/amqp/queue.js module, I see dispatch.publish( raw.type, raw, function( data ) { ... and raw.type was undefined.

Should the onUnhandled method be made capable of handling messages where the type header property is not defined? The property itself does not appear to be required by producers.

Thank you for the consideration!
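The dispatch behavior described above can be pictured as a type-keyed handler table with a fallback. This is a hypothetical stand-in to illustrate the question, not wascally's internals:

```javascript
// Hypothetical stand-in for dispatch-by-type with an unhandled fallback.
// A message whose type property is undefined misses every typed handler,
// so routing it to the fallback (as onUnhandled is asked to do in this
// issue) seems like the reasonable behavior.
function makeDispatcher() {
  var handlers = {};
  var unhandled = function( msg ) { /* default: drop */ };
  return {
    handle: function( type, fn ) { handlers[ type ] = fn; },
    onUnhandled: function( fn ) { unhandled = fn; },
    dispatch: function( msg ) {
      var fn = handlers[ msg.type ];
      ( fn || unhandled )( msg );
      return !!fn;  // true if a typed handler matched
    }
  };
}
```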

rabbit.handle is swallowing errors

I'm not entirely sure where, as I don't understand the workings of postal... but rabbit.handle is swallowing exceptions.

demo code:

var handler = rabbit.handle("test.message", function( msg ) {

  console.log("");
  console.log("Incoming message!");
  console.log("ID:", msg.properties.messageId)
  console.log("Body:", msg.body);

  throw new Error("foo");

  msg.ack();
  setTimeout(function(){
    rabbit.closeAll().then(function(){
      process.exit();
    });
  }, 500);

});

process.on("uncaughtException", function(err){
  console.log("UNHANDLED!!!!!!!!!!!!!!!!");
  console.log(err.stack);
});

send a message to a queue with this code, and you will see this output:

Incoming message!
ID:
Body: { foo: 'bar', baz: 'quux' }

but you will never see an exception being thrown, and never hitting the unhandled exception event in the process.

the complete code, with "sender" and "receiver" is available here, to reproduce the problem: https://gist.github.com/derickbailey/4c61f4ae3f470946f83f

i have to manually add a try / catch block inside of the handle callback function to capture the exception and keep it from being swallowed.
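That manual workaround can be factored into a small reusable wrapper; a sketch assuming the message API shown in these issues (ack/nack on the message):

```javascript
// Sketch of a defensive wrapper: run the real handler in a try/catch so an
// exception surfaces to our own error callback instead of being swallowed
// upstream, and nack the message so it is not silently lost.
function safeHandler( fn, onError ) {
  return function( msg ) {
    try {
      fn( msg );
    } catch ( err ) {
      if ( msg.nack ) { msg.nack(); }
      onError( err, msg );
    }
  };
}
```

Usage would look like `rabbit.handle("test.message", safeHandler(realHandler, logError));`.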

Safely closing a connection to the message broker

Hi, is there a way to close the connection when needed via a function call in this module, akin to what one does in amqplib (e.g. conn.close())? If such a function is not provided, is there another way to close the connection safely?

Channels Limit on Heroku (Cloudamqp)

Hi,

We're using the CloudAMQP server on Heroku, and the free plan has a limit of 200 channels. We have around 15 queues interacting across 4 dynos to send messages between them.
Since channel management is hidden from us, is there a way to control them?

Thanks
Diraj

How to properly close / release connections

Running a simple file causes node to hang and never terminate normally.

require("wascally");

Save that line as index.js and run as node index.js. Node hangs and doesn't terminate.

Memory leak on postal.js when using req-res pattern

Problem

There is a leak in postal.js (postal.js/lib/postal.js line 270), found after benchmarking an http server that uses wascally as the RPC layer to my services. The req-res pattern in wascally uses the messageId as a topic, and postal creates a cache based on the topic. The problem is that caching serves no purpose when every topic is unique.

Tools used for the benchmark:
  1. wrk as my http benchmark.
  2. pm2 to see the memory usage of the http server.
This is the memory footprint before the tests (wascally requests are in "api-server"):

[screenshot: memory usage, 2015-06-11 9:30:35 am]

Test A

After running wrk with postal.js line 270 left as-is, this was the result:

[screenshot: memory usage, 2015-06-11 9:38:20 am]

Test B

Disabling the cache by setting postal.js line 270 to false:

[screenshot: memory usage, 2015-06-11 9:39:25 am]

Findings:

  1. In Test A, memory piles up, slowly hanging the http server.
  2. In Test B, memory did not pile up.

Is there a way to make wascally reset the cache, or disable it entirely, when using the req-res pattern?

force batch ack to process?

I have a scenario where I stand up and tear down queue handlers on a rather frequent basis. Right now, I have to use a setTimeout function with a delay of at least 500ms before tearing down the handler, otherwise my messages get stuck in the queue as unacknowledged.

is there a way for me to force any batched acks to go through, prior to tearing down my handler? I don't like having this delay in my code... i have no confidence that my timeout will be long enough under heavy load, and I don't want to continue increasing it just to account for that.

thanks, yo
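A later issue in this dump notes that a `wascally.batchAck()` method exists (though undocumented). Assuming it flushes pending acknowledgements, a hypothetical teardown helper (not a wascally API) might replace the setTimeout:

```javascript
// Hypothetical teardown helper (not a wascally API), assuming batchAck()
// flushes any batched acknowledgements immediately: flush first, then
// remove the handler, instead of guessing a setTimeout delay.
function teardown( rabbit, handler, done ) {
  rabbit.batchAck();  // ask wascally to process batched acks now
  handler.remove();   // then stop receiving new messages
  done();
}
```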

Making sense of demo

I cannot seem to make any sense of the demo. I'm starting the publisher first, then the subscriber. The subscriber sends a request every 3 seconds and the publisher picks up all of them and then starts the long running for-loop on the first one. The subscriber doesn't receive anything until the for-loop is done (which I didn't expect). It keeps sending requests (about 43-44 of them) and the requests pile up as unacked (the publisher is sent the message but is busy running that long for-loop, so they never get a chance to expire). Then the publisher starts running the loop again and again for all of those requests (as I discovered by adding a console.log to for-loop itself). However, only the first request sends any messages back to the subscriber, which I also don't understand.

Is this expected behavior?

autoDelete and durable?

this is more a question than an issue...

it seems a little odd, to me, to have both autoDelete and durable set to true for an exchange / queue... which you have in all the specs. wouldn't autoDelete make durability meaningless? once the connection goes away, the queue is deleted... so why bother having it durable?

i'm certainly not claiming expertise here. am i missing something? is there a good use case for autoDelete and durable?

with multiple handlers in a single process, if one nacks, the message will loop infinitely between handlers

scenario:

  • single nodejs process
  • 2 listeners for the same message type, on the same queue
  • 1 of the listeners nacks the message
  • the other listener acks the message
  • the message is put into an infinite loop, passing between the two handlers

see https://gist.github.com/derickbailey/a962cb12cf51b2b26960 for a demonstration of this.

i expect (need) the ack to stop the message from being processed any further. currently, that is not happening.

take the code from the gist above, and run the receiver.js first. then run the sender.js file. you will see a single message sent across the exchange / queue. once it is picked up by the receiver, the message will ping-pong between the two handlers inside of the receiver.js file, infinitely. no matter how many times the msg.ack() is called, it will still get pushed to the other handler and msg.nack() will be called again, sending it back to the 1st handler which will ack() it again, and round and round and round forever.

if you comment out the handleMessage2() call in the receiver, the message acks and it is removed from RMQ. but if you have both handleMessage() and handleMessage2() running, it will infinite loop, passing the message round in circles.

Wascally doesn't work with the node 'config' module installed

I'm setting up a new project and tried to just get Wascally working with the pub/sub demo code. Was stumped for hours on some mysterious errors. Turns out it was all because I was trying to use the node config module; once I hard-coded my configuration settings into the js file and uninstalled the config module, the demo code ran just fine.

I'm hoping this is not too difficult to fix, because it's kind of a deal-breaker for us. 😞

Error: Configuration property "RABBIT_BROKER" is not defined
    at Object.Config.get (./node_modules/config/lib/config.js:177:11)
    at getOption (./node_modules/wascally/src/amqp/connection.js:11:15)
    at new Adapter (./node_modules/wascally/src/amqp/connection.js:38:19)
    at module.exports (./node_modules/wascally/src/amqp/connection.js:141:16)
    at machina.Fsm.extend.states.initializing._onEnter (./node_modules/wascally/src/connectionFsm.js:75:19)
    at _.extend.transition (./node_modules/wascally/node_modules/machina/lib/machina.js:200:56)
    at Fsm (./node_modules/wascally/node_modules/machina/lib/machina.js:111:18)
    at new fsm (./node_modules/wascally/node_modules/machina/lib/machina.js:330:24)
    at Connection (./node_modules/wascally/src/connectionFsm.js:186:9)
    at Broker.addConnection (./node_modules/wascally/src/index.js:35:20)

TypeError: Cannot read property 'channels' of undefined
    at Broker.getExchange (./node_modules/wascally/src/index.js:126:43)
    at bound [as getExchange] (./node_modules/wascally/node_modules/lodash/dist/lodash.js:729:21)
    at Broker.publish (./node_modules/wascally/src/index.js:187:14)
    at bound [as publish] (./node_modules/wascally/node_modules/lodash/dist/lodash.js:729:21)
    at null.<anonymous> (./node_modules/wascally/src/index.js:205:8)
    at runResolver (./node_modules/wascally/node_modules/when/lib/makePromise.js:59:5)
    at new Promise (./node_modules/wascally/node_modules/when/lib/makePromise.js:30:4)
    at Function.promise (./node_modules/wascally/node_modules/when/when.js:87:10)
    at Broker.request (./node_modules/wascally/src/index.js:196:14)
    at bound [as request] (./node_modules/wascally/node_modules/lodash/dist/lodash.js:729:21)

npm install wascally does not work

Hi,

Installing 0.2.7 or 0.2.6 does not work, whereas installing 0.2.5 works.

The error I get when installing is:

Command failed: git clone --template=/Users/admin/.npm/_git_remotes/_templates --mirror git://github.com/ifandelse/riveter.git /Users/admin/.npm/_git-remotes/git-github-com-ifandelse-riveter-git-c54a5909

Thanks

message.ack() not working for requests

After testing multiple releases, we discovered that with any build since 0.2.0-8 (February 11th), the reply queue fills up and shows all messages as unacked. Here is the sample code we used to test this.

Publisher

var rabbit = require( 'wascally' );
var settings = {
  connection: {
    user: '',
    pass: '',
    server: '127.0.0.1',
    port: 5672,
    vhost: ''
  },
  exchanges:[
    { name: 'snd.1', type: 'topic', persistent:true  }
  ]
};
rabbit.configure( settings ).done( function() {

});

var x = 0;
setInterval(function(){
  rabbit.request( 'snd.1', {
    routingKey: 'post',
    type: 'snd.1.fb.post',
    body: { count: x }
  }).then(function(final){
    console.log(final.body);
    final.ack();
  });
  console.log(x);
  x++;
}, 1000);

Subscriber

var rabbit = require( 'wascally' );
var settings = {
  connection: {
    user: '',
    pass: '',
    server: '127.0.0.1',
    port: 5672,
    vhost: ''
  },
  exchanges:[
    { name: 'snd.1', type: 'topic', persistent:true  }
  ],
  queues:[
    { name:'fb', limit: 500, queueLimit: 1000, subscribe: true, durable: true }
  ],
  bindings:[
    { exchange: 'snd.1', target: 'fb', keys: [ 'post','request' ] }
  ]
};
var handler = rabbit.handle( 'snd.1.fb.post', function( message ) {
  try {
    console.log( message.body );
    message.reply(message.body);
  } catch( err ) {
    message.nack();
  }
} );
rabbit.configure( settings ).done( function() {

} );

use of `batchAck`, and clearing out some messages while other still run?

Regarding the fixes for #24 - it looks like there is a wascally.batchAck() method available now. Is that the correct method to use when I want to force acknowledgements to process?

I'm currently running into a scenario where I have a very long (20+ minute) job running, along with 5 other jobs that have completed. The other 5 jobs are sitting unacked in rabbitmq, in spite of the work having been completed already.

In my scenario, it looks like Wascally is not processing the ack on those 5 messages, due to the 1 message that remains unack'd. Is that correct?

Assuming that is correct, will a call to wascally.batchAck() force the ack of those 5 messages to process, while the last one still runs?

also - there's not any documentation for the batchAck method that I can see.

port option not honored

In the Adapter, the port option is never checked; it looks only for RABBIT_PORT and falls back to the default RabbitMQ port.

Reply queue message not ack'ed on req-resp pattern

With the request-response pattern, the reply queue has messages piling up unacked. I saw a previous issue where this bug should have been fixed, but it still appears on my end.

Can anyone shed some light on this? Thanks.

I am using

  • wascally 0.2.7,
  • node.js 0.10.36
  • rabbitMQ 3.5.4

server.post ('/test', function (req, res, next) {
  var body = req.body;
  rabbit.configure (local).then (function () {
    rabbit.request (exchange, { 
      routingKey:"",
      type: "reqres.type",
      body: body
    })
    .then (function (response) {
      console.log ('received response...'.success);
      console.log (response);
      response.ack ();
      res.send (); 
      return next ();
    })
    .then (undefined, function (err) {
      next (new restify.errors.BadRequestError (err));
      handleError (err);
    });

    rabbit.handle("reqres.type", function(message) {
      try {
        console.log ('');
        console.log ('receiving msg..'.success);
        console.log(message);

        message.reply({
          foo: "replied bar"
        });
      } catch(err) {
        message.nack();
      }
    });

    rabbit.startSubscription ('user.create');
  })
  .then (undefined, handleError);
});

receiving msg..

{ fields: 
   { consumerTag: 'amq.ctag-Xb3S6BUP9Kos6geuTPWMhw',
     deliveryTag: 1,
     redelivered: false,
     exchange: 'user',
     routingKey: '' },
  properties: 
   { contentType: 'application/json',
     contentEncoding: 'utf8',
     headers: {},
     deliveryMode: undefined,
     priority: undefined,
     correlationId: '',
     replyTo: '82547150-3758-11e5-a17f-493c531269a7.response.queue',
     expiration: undefined,
     messageId: 'ed3d85b0-3758-11e5-a17f-493c531269a7',
     timestamp: undefined,
     type: 'reqres.type',
     userId: undefined,
     appId: '',
     clusterId: undefined },
  content: <Buffer...>,
  body: { foo: "bar"},
  ack: [Function],
  nack: [Function],
  reject: [Function],
  reply: [Function],
  type: 'reqres.type' }

received response...

{ fields: 
   { consumerTag: 'amq.ctag-LWBka6t32Q5rbIPkQ26CtQ',
     deliveryTag: 37,
     redelivered: false,
     exchange: '',
     routingKey: '82547150-3758-11e5-a17f-493c531269a7.response.queue' },
  properties: 
   { contentType: 'application/json',
     contentEncoding: 'utf8',
     headers: { sequence_end: true },
     deliveryMode: undefined,
     priority: undefined,
     correlationId: 'ed3d85b0-3758-11e5-a17f-493c531269a7',
     replyTo: '82547150-3758-11e5-a17f-493c531269a7.response.queue',
     expiration: undefined,
     messageId: undefined,
     timestamp: undefined,
     type: 'reqres.type.reply',
     userId: undefined,
     appId: undefined,
     clusterId: undefined },
  content: <Buffer ...>,
  body: { foo: 'replied bar' },
  ack: [Function],
  nack: [Function],
  reject: [Function],
  reply: [Function],
  type: 'reqres.type.reply' }

Handling graceful termination of a subscriber

// 1. set up handlers with rabbit.handle
// 2. configure rabbit

process.on('SIGINT', function() {
  // 3. stop accepting more requests
  // 4. wait for all requests currently being handled to finish
});

How would one accomplish 3 or 4?
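Steps 3 and 4 could be approximated with a hypothetical in-flight counter (not a wascally API), assuming `handler.remove()` (used elsewhere in these issues) stops new deliveries:

```javascript
// Hypothetical sketch (not a wascally API): count in-flight messages and
// invoke drain callbacks once the count reaches zero. Step 3 maps to
// handler.remove() to stop new deliveries; step 4 maps to drain().
function makeTracker() {
  var inFlight = 0;
  var waiters = [];
  function check() {
    if ( inFlight === 0 ) {
      var pending = waiters;
      waiters = [];
      pending.forEach( function( cb ) { cb(); } );
    }
  }
  return {
    enter: function() { inFlight++; },          // call on message receipt
    exit: function() { inFlight--; check(); },  // call after ack/nack
    drain: function( cb ) { waiters.push( cb ); check(); }
  };
}
```

In the SIGINT handler one might then call `handler.remove()` followed by `tracker.drain(function(){ process.exit(); });`.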

Publish method seems async

When I run the following program from the command line, say, 10 times, I get only about 3 new messages on Rabbit. Sometimes it appears to close all connections before the message has been published. If I leave off the closeAll() line and run the program 10 times, I get 10 new messages on Rabbit.

The amqplib sendToQueue method isn't async - if I were to run the scenario below using all amqplib commands it would also publish 10 messages if I were to run the program 10 times.

wascally.configure(rabbitConfig).done(function() {
  wascally.publish("somet.ex", {
    routingKey: "some-key",
    type: "msg.t",
    body: {}
  })
  wascally.closeAll()
})

Am I just misreading the documentation and code - or is there something wrong here?
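Since publish returns a promise that resolves on broker confirmation (per the README's Publish section), chaining closeAll() off that promise avoids the race. A sketch, demonstrated here with a stub broker:

```javascript
// Sketch: chain closeAll() off the publish promise so the connection is
// only closed after the broker confirms the message. Shown with a stub
// broker; with wascally the same chaining applies to its when.js promises.
function publishThenClose( broker, exchange, msg ) {
  return broker.publish( exchange, msg ).then( function() {
    return broker.closeAll();
  } );
}
```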

every connection creates a response queue, even if it doesn't need it

as reported on my Rabbus project: mxriverlynn/rabbus#1

i created a sample connection using this code:

var Rabbit = require("wascally");

Rabbit.configure({
  connection: {
    // ...  your connection info here
  } 
}).then(function(){

  console.log('connected');

});

and a response queue showed up in my queue list... i have not defined any queues, i haven't tried to do any request/response work... all i did was create a connection, and the response queue showed up in my queue list.

it shouldn't create a response queue until i need one... until i'm doing request/response work.

how can i prevent these extra queues from showing up?
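The issue titled "replyQueue: false - messages do not get published" in this same dump mentions a `replyQueue: false` connection option, which appears to suppress the automatic response queue (though that report also describes publishes failing with it set, so verify against your version). A config sketch:

```javascript
// Sketch: replyQueue: false on the connection is reported to suppress the
// automatic response queue. Note that another issue in this thread reports
// publish problems when it is set, so test against your wascally version.
var settings = {
  connection: {
    server: "localhost",
    port: 5672,
    replyQueue: false
  }
};
```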

Issue when replying to a message

I'm getting the following error message when I use message.reply(). I was able to simply add a conditional to check whether this._lastByStatus( 'ack' ) exists before doing the assignment, but that should be unnecessary, right?

Code

var handler2 = rabbit.handle( 'exchange.name', function( message ) {
  try {
    t.get('search/tweets', { q: message.body.query, count: 100 }, function(err, data, response) {
      console.log(message);
      message.reply(data);
      message.ack();
    });
  } catch( err ) {
    message.nack();
  }
} );

Error

TypeError: Cannot read property 'tag' of undefined
    at StatusList._ackAll (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/src/statusList.js:24:44)
    at StatusList._processBatch (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/src/statusList.js:116:9)
    at null.<anonymous> (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/src/statusList.js:78:8)
    at Object.invokeSubscriber (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/node_modules/postal/lib/postal.js:132:35)
    at /Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/node_modules/postal/lib/postal.js:431:28
    at Function.forEach (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/node_modules/lodash/dist/lodash.js:3297:15)
    at Object.publish (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/node_modules/postal/lib/postal.js:430:19)
    at ChannelDefinition.publish (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/node_modules/postal/lib/postal.js:43:18)
    at Broker.batchAck (/Users/rbrookfield/node/snd2/snd-twitter-worker/node_modules/wascally/src/index.js:74:9)
    at wrapper [as _onTimeout] (timers.js:261:14)

create basic example documentation

I'm interested in using Wascally, as I've run into issues and limitations in the library I'm currently using (rabbit.js). But I can't find a single working example to even start with Wascally.

I would like to request some very basic demo code be put together, to show various scenarios... starting with the most basic samples to show a message being sent and received, and moving toward other scenarios like request/reply, etc.

replyQueue: false - messages do not get published

Hi,

Not sure if I'm doing something wrong or if there's an issue elsewhere but I noticed that if I add the connection option replyQueue: false the publish() method does not actually publish anything. As soon as the option is removed, messages start going through.

var rabbit = require('wascally');
var settings = {
    "connection": {
        "user": "admin",
        "pass": "password",
        "server": "localhost",
        "port": 5672,
        "vhost": "idio",
        "replyQueue": false  //No messages with this on. Comment out and messages get sent to queue
    },
    "exchanges": [
        { "name": "crud", "type": "direct", "durable": true, "persistent": true, "autoDelete": false }
    ],
    "queues": [
        { "name": "ops", "durable": true , "autoDelete": false} 
    ],
    "bindings": [
        { "exchange": "crud", "target": "ops", "keys": [] }
    ]
}


rabbit.configure(settings)


function sendMessage(obj) {
    rabbit.publish('crud', {
        type: 'idio.message',
        routingKey: '',
        body: obj
    });
    console.log('Sent: ' + obj.id)
}


function send(total) {
    console.time('mq')
    for (var i = 1; i <= total; i++) {
        var obj = {
            id: i,
            name: 'Hello ' + i,
            total: total
        }
        sendMessage(obj)
    }
}

send(10)

Is there a better way to get to channel prefetch?

Currently, I am getting to the underlying channel prefetch function by using the following. Is there a better way to do this? It looks really odd to see .channel.channel.

var ch = rabbit.getQueue('my-queue').channel.channel;
ch.prefetch(1);
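The queue-level `limit` option that appears in other configs in this thread (e.g. `limit: 1`) may be the intended way to set prefetch from the topology, which would avoid reaching into `.channel.channel`. A sketch:

```javascript
// Sketch: the queue-level limit option (seen in other configs in this
// thread) appears to map to the channel prefetch for that queue's
// consumer, avoiding the .channel.channel reach-around.
var settings = {
  queues: [
    { name: "my-queue", limit: 1, subscribe: true }
  ]
};
```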

`pass` should be escaped

Currently pass, for instance, is not escaped. So a password like aaaa>AAAA#N9r0bcj631sG will make the amqp connection fail.
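Percent-encoding the credentials before they are inserted into the AMQP URI would handle such passwords; a minimal hypothetical sketch (not wascally's actual code):

```javascript
// Sketch (not wascally's code): percent-encode user and pass before
// building the amqp URI so characters like '>' and '#' don't corrupt it.
function credentials( user, pass ) {
  return encodeURIComponent( user ) + ':' + encodeURIComponent( pass );
}

credentials( 'guest', 'aaaa>AAAA#N9r0bcj631sG' );
// → 'guest:aaaa%3EAAAA%23N9r0bcj631sG'
```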

Handle messages in "ready" state on reconnect

I'm experiencing trouble with processing items that are "ready" in the rabbitmq queue, when my worker reconnects, is this expected behavior or a bug? A bit more detailed:

I'm currently testing wascally out: I have a "producer" that puts a number of messages onto the queues, and then the worker is started later. However, the worker does not process the "ready" messages. One thing I have noticed is that all the "ready" messages go into the "unacked" state on the queue, yet I see no messages being processed on the worker node.

The only way to make it work, is to publish a message while the worker is connected.

Topology json:

var config = {
    connection: {},
    queues: [
        {name: "queue1", subscribe: true, durable: true}
    ],
    exchanges: [{name: "my-exchange", type: 'direct', persistent: true, durable: true}],
    bindings: [
        {exchange: "my-exchange", target: "queue1Handler", keys: ["queue1Handler"]}
    ]
};

And the handler (simplified):

Rabbit.configure(Que.Config)
    .then(setUpRabbitHandlers);
function setUpRabbitHandlers() {
    console.info('Setting up rabbit handlers...');
    Rabbit.handle("queue1", function (msg) {
        console.info('Starting processing of message...');
        // do stuff
        msg.ack();
    });
}

On OSX 10.10.2 with node v0.10.33 and RabbitMQ 3.4.2

Handler to Queue binding

I was writing up a test app, and ran into a behavior I wasn't expecting and I'm curious if there is a way to resolve it.

I made two wascally connections to my rabbit server, each in their own closure, following the topology pattern from the pubsub pattern. Each handler was subscribed to an independent queue, but handled the same message type. The queues were bound to a single exchange in fanout mode.

It looks like what happened is this: both handlers got two copies of each message, one from each queue. If one handler ack'd messages and the other one nack'd them, both queues would be drained. I believe this is because the messages are routed in memory from the individual queues through a shared set of Postal topics.

Is there a way to strictly bind a handler to only receive messages from its queue, or does the in-memory Postal routing lose that information?

Reconnections and disconnections

Hey there - I'm trying to figure out how to handle reconnecting to rabbit when I lose the connection - I can't see where I tap into any kinds of disconnect events to try and connect again after losing the connection. Is there a preferred way to handle this?
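The README's Connectivity Events section lists generic connected / closed / failed events, with the connection object (carrying a name property) passed to each handler. A sketch of binding to them, using a plain emitter as a stand-in since the exact subscription API may differ:

```javascript
// Sketch: binding to the generic connectivity events the README lists
// (connected / closed / failed). The broker here is any object with an
// on(event, handler) method; wascally is documented to emit these events
// and pass the connection (with a name property) to each handler, though
// the exact binding API may differ.
function watchConnectivity( broker, log ) {
  [ 'connected', 'closed', 'failed' ].forEach( function( ev ) {
    broker.on( ev, function( conn ) {
      log( ev + ': ' + conn.name );
    } );
  } );
}
```

Note the README also warns that wascally manages reconnection itself (re-defining topology and re-sending unconfirmed messages), so these events are for observation rather than for driving a manual reconnect.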

How to remove binding from a direct exchange?

Hi @derickbailey @arobson ,

I am trying to elastically scale my servers. Right now I am using a direct exchange for all chat messages between the servers. When a server comes online, I register the new server id, create a new queue with the same id, and bind this queue to the direct exchange.

Is there a way to remove this binding when the server leaves?

I guess I can use autoDelete: true.

Thanks!

Unacked messages piling up in response queue

I have a producer/dispatcher/consumer topology that involves one case of request instead of publish, but since adding that, the related response queue has been filling up with unacked messages, despite the fact that I'm calling reply on the consumer side and res.ack() in the promise then of the request. What could be going on here? I even tried explicitly acking the message in the consumer, with no luck.

// topology.js
'use strict';

module.exports = function (rabbit, subscribeTo) {
    return rabbit.configure({
        // arguments used to establish a connection to a broker
        connection: {
            user:   'guest',
            pass:   'guest',
            server: ['127.0.0.1'],
            port:   5672,
            vhost:  '%2f'
        },

        // define the exchanges
        exchanges: [
            {
                name: 'tasks-x',
                type: 'fanout',
                autoDelete: true
            },
            {
                name: 'dispatch-x',
                type: 'direct',
                autoDelete: true
            },
            {
                name: 'requests-x',
                type: 'fanout',
                autoDelete: true
            }
        ],

        // setup the queues, only subscribing to the one this service
        // will consume messages from
        queues: [
            {
                name: 'tasks-q',
                autoDelete: true,
                limit: 1,
                subscribe: subscribeTo === 'tasks'
            },
            {
                name: 'requests-q',
                autoDelete: true,
                limit: 1,
                subscribe: subscribeTo === 'requests'
            },
            {
                name: 'p1-q',
                autoDelete: true,
                limit: 1,
                subscribe: subscribeTo === 'p1'
            },
            {
                name: 'p2-q',
                autoDelete: true,
                limit: 1,
                subscribe: subscribeTo === 'p2'
            }
        ],

        // binds exchanges and queues to one another
        bindings: [
            {
                exchange: 'tasks-x',
                target:   'tasks-q',
                keys:     []
            },
            {
                exchange: 'requests-x',
                target:   'requests-q',
                keys:     []
            },
            {
                exchange: 'dispatch-x',
                target:   'p1-q',
                keys:     ['p1']
            },
            {
                exchange: 'dispatch-x',
                target:   'p2-q',
                keys:     ['p2']
            }
        ]
    });
};
// producer.js
'use strict';

var rabbit = require('wascally');

var i = 0;

require('./topology')(rabbit).then(function () {
    setInterval(function () {
        console.log('Publishing task');
        rabbit.publish('tasks-x', 'task', {
            task: ++i
        });
    }, 3000);
});
// dispatcher.js
'use strict';

var Kefir  = require('kefir');
var rabbit = require('wascally');

var taskEmitter = Kefir.emitter();
var reqEmitter  = Kefir.emitter();
var pairs = Kefir.zip([taskEmitter, reqEmitter]);

pairs.onValue(function (pair) {
    var task = pair[0];
    var req  = pair[1];
    console.log(task.body, req.body);
    rabbit.request('dispatch-x', {
        type: 'process',
        body: task.body,
        routingKey: 'p1'
    })
        .then(function (res) {
            console.log(res.body);
            res.ack();
            task.ack();
            req.ack();
        });
});

rabbit.handle('task', function (task) {
    console.log('Dispatch received', JSON.stringify(task.body));
    taskEmitter.emit(task);
});

rabbit.handle('request', function (req) {
    console.log('Dispatch received', JSON.stringify(req.body));
    reqEmitter.emit(req);
});

require('./topology')(rabbit).then(function () {
    rabbit.startSubscription('requests-q');
    rabbit.startSubscription('tasks-q');
});
/// consumer.js
'use strict';

var rabbit = require('wascally');

var i = 0;

function requestJob () {
    rabbit.publish('requests-x', 'request', {
        req: ++i
    });
}

rabbit.handle('process', function (msg) {
    console.log('Received:', JSON.stringify(msg.body));
    msg.reply({ success: true });
    requestJob();
});

require('./topology')(rabbit, 'p1').then(function () {
    requestJob();
});
