
optimalbits / bull

Premium Queue package for handling distributed jobs and messages in NodeJS.

License: Other

JavaScript 89.17% Lua 10.83%
nodejs message-queue job-queue queue job message priority scheduler rate-limiter

bull's People

Contributors

aduwillie, aleccool213, alexkh13, ariksfaradi, aslakhellesoy, bradvogel, cbjuan, davideviolante, dependabot[bot], dhritzkiv, doublerebel, evanhuang8, fearphage, gabegorelick, holm, josephwarrick, kulbirsaini, lchenay, leolannenmaki, leontastic, manast, marshall007, mxstbr, rissem, roggervalf, ryan-sandy, semantic-release-bot, stansv, tobie, vortec4800


bull's Issues

LIFO support

Currently, bull doesn't support adding jobs to the other end of the queue (using rpush instead of lpush to facilitate a LIFO queue).

I've written a simple solution (adding a 'lifo' option to the Queue.add method), but you may want to consider another approach.
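The proposed option boils down to choosing which Redis list command the add path uses. A minimal sketch of that dispatch (the function name is hypothetical, not bull's actual internals):

```javascript
// Sketch of the proposed 'lifo' option: the default lpush combined with
// brpoplpush drains oldest-first (FIFO); rpush makes the newest job come
// off the queue first (LIFO).
function pushCommand(opts) {
  return (opts && opts.lifo) ? 'rpush' : 'lpush';
}
```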

Is there a way to trigger Redis Auth command?

I've just got Bull set up on Heroku with RedisToGo/RedisCloud, but I had to add a couple of lines to your queue.js file to do so. I'm wondering whether that was necessary.

Looking at most Redis service documentation, it's fairly common that client.auth() must be called after the client is created, e.g.
https://devcenter.heroku.com/articles/redistogo#using-with-node

From what I can see there isn't a way to achieve this with Bull out of the box, or am I missing something? I've looked at the extra options that can be passed into the Redis client creation (https://github.com/mranney/node_redis), but it doesn't look like the password can be passed as an extra option.

thanks for your work on this btw, just what I needed.

I have lots of zombie jobs in Redis

I have about 20K zombie jobs that look something like this:

1) "data"
2) "{\"url\":\"http://www.xxxxx-in.com.au/show/xxxxxx/videos/xxxxxxxx/\",\"plugin\":\"javascript\",\"timestamp\":\"2014-08-05T04:19:06.935Z\",\"timezoneOffset\":240,\"sessionId\":\"3323473c-9a2e-21ad-5cd4-2127c4f51863\",\"publisherId\":\"8b746d3b-b05e-45b4-a8ae-bfbeb097affe\",\"mediaId\":\"3706240057001\",\"mediaDuration\":616.72,\"events\":[{\"timestamp\":\"2014-08-05T04:19:06.934Z\",\"event\":\"PROGRESS\",\"fromPosition\":460.6,\"toPosition\":520.6}],\"createdAt\":\"2014-08-05T04:19:06.542Z\",\"ipAddress\":\"24.114.58.95\",\"remoteAddress\":\"6ec79e53f022f8dc51490289a0d623a3\",\"userAgent\":{\"browser\":\"Safari\",\"version\":\"7.0\",\"os\":\"OS X\",\"platform\":\"iPhone\"},\"deviceId\":\"58a8d01f6b00e9d5d0f64872388b1ebf\"}"
3) "progress"
4) "0"
5) "opts"
6) "{}"
redis 127.0.0.1:6379> hget bull:oztam.collect.incoming:2702190 progress
"0"

When the server restarts, these jobs are not processed. How should these zombie jobs be handled?
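One manual recovery approach is to move everything left on the active list back onto the wait list so a restarted worker picks the jobs up again. In redis-cli that is repeated RPOPLPUSH bull:&lt;queue&gt;:active bull:&lt;queue&gt;:wait; here is the same motion modeled in memory (a sketch, not bull code):

```javascript
// Mimics repeated RPOPLPUSH source -> destination: pop from the tail of
// `active`, push onto the head of `wait`, until `active` is empty.
function recoverActive(active, wait) {
  while (active.length) {
    wait.unshift(active.pop());
  }
  return wait;
}
```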

Error "Wrong number of arguments for set command"

Hi, I am seeing this error in my logs, although bull is processing jobs just fine now (it seemed like it wasn't before, but that might have been an issue with my mail server that self-resolved). I decided to inject a debug line into bull's redis module to help investigate:

SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'info', args: [ [Function] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'info', args: [ [Function] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'select', args: [ 0 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'lrange',
  args: [ 'bull:Notifications:active', 0, -1 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'select', args: [ 0 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'brpoplpush',
  args:
   [ 'bull:Notifications:wait',
     'bull:Notifications:active',
     0,
     [Function: PromiseResolver$_callback] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'hgetall',
  args:
   [ 'bull:Notifications:3',
     [Function: PromiseResolver$_callback] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'set',
  args:
   [ 'bull:Notifications:3:lock',
     'ce549b34-c6e6-4109-8efd-5b857171b3d4',
     'PX',
     5000,
     [Function: PromiseResolver$_callback] ] }
Possibly unhandled Error: ERR wrong number of arguments for 'set' command
    at ReplyParser.<anonymous> (/home/app/notification/node_modules/bull/node_modules/redis/index.js:308:31)
    at ReplyParser.EventEmitter.emit (events.js:95:17)
    at ReplyParser.send_error (/home/app/notification/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
    at ReplyParser.execute (/home/app/notification/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
    at RedisClient.on_data (/home/app/notification/node_modules/bull/node_modules/redis/index.js:535:27)
    at Socket.<anonymous> (/home/app/notification/node_modules/bull/node_modules/redis/index.js:91:14)
    at Socket.EventEmitter.emit (events.js:95:17)
    at Socket.<anonymous> (_stream_readable.js:746:14)
    at Socket.EventEmitter.emit (events.js:92:17)
    at emitReadable_ (_stream_readable.js:408:10)
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'MULTI', args: [] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'lrem', args: [ 'bull:Notifications:active', 0, 3 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'sadd', args: [ 'bull:Notifications:completed', 3 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'EXEC', args: [] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command  { command: 'brpoplpush',

Thanks

Stalled jobs only checked at queue startup?

First of all, thanks for your work on bull. It's small, clean and the code is self-explanatory. Just what we want from such a critical piece of software :).

Anyway, it seems that the queue only checks for stalled jobs at startup. For us, this means that crashed jobs will only get unstuck after a worker restarts.

I'm not sure I follow you here. Can you elaborate on your intent?

Lots of keys in the redis database

I am processing around 130 jobs a second, and when I look at the Redis database I see a lot of keys like this:

  1. "bull:oztam.collect.incoming:558202"
  2. "bull:oztam.collect.incoming:310495"
  3. "bull:oztam.collect.incoming:431641"
  4. "bull:oztam.collect.incoming:299107"
  5. "bull:oztam.collect.incoming:99116"
  6. "bull:oztam.collect.incoming:230128"
  7. "bull:oztam.collect.incoming:357696"
  8. "bull:oztam.collect.incoming:553756"
  9. "bull:oztam.collect.incoming:129818"
  10. "bull:oztam.collect.incoming:523370"
  11. "bull:oztam.collect.incoming:303102"
  12. "bull:oztam.collect.incoming:65450"
  13. "bull:oztam.collect.incoming:394821"
  14. "bull:oztam.collect.incoming:527360"
  15. "bull:oztam.collect.incoming:74980"
  16. "bull:oztam.collect.incoming:106163"
  17. "bull:oztam.collect.incoming:25845"
  18. "bull:oztam.collect.incoming:389269"
  19. "bull:oztam.collect.incoming:491633"
  20. "bull:oztam.collect.incoming:383233"
  21. "bull:oztam.collect.incoming:173992"
  22. "bull:oztam.collect.incoming:254010"
  23. "bull:oztam.collect.incoming:448659"
  24. "bull:oztam.collect.incoming:478939"
  25. "bull:oztam.collect.incoming:403671"
  26. "bull:oztam.collect.incoming:498413"
  27. "bull:oztam.collect.incoming:52886"
  28. "bull:oztam.collect.incoming:24260"
  29. "bull:oztam.collect.incoming:505913"
  30. "bull:oztam.collect.incoming:227375"
  31. "bull:oztam.collect.incoming:484003"
  32. "bull:oztam.collect.incoming:227217"
  33. "bull:oztam.collect.incoming:204889"
  34. "bull:oztam.collect.incoming:444754"
  35. "bull:oztam.collect.incoming:465690"
  36. "bull:oztam.collect.incoming:447873"
  37. "bull:oztam.collect.incoming:239019"
  38. "bull:oztam.collect.incoming:248783"
  39. "bull:oztam.collect.incoming:438189"
  40. "bull:oztam.collect.incoming:234468"
  41. "bull:oztam.collect.incoming:20597"
  42. "bull:oztam.collect.incoming:422187"
  43. "bull:oztam.collect.incoming:103637"

Does bull clean up after itself, or do I need to manage the cleanup process myself?
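If the completed jobs don't need to be kept, one common pattern is to remove each job as soon as it completes (a sketch; whether job.remove() cleans up every key bull creates is an assumption worth verifying):

```javascript
// Remove each job's hash once it completes, so bull:<queue>:<id> keys
// don't accumulate for jobs that are already done.
function attachCleanup(queue) {
  queue.on('completed', function (job) {
    job.remove();
  });
}
```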

Jobs failed why ?

Moin,

All jobs call jobDone() without an error in the callback, but out of roughly 2500 jobs, 13 are marked as failed.

How can I solve this problem?

thx
Sven

Processor stops working after long period of time

Whenever I leave my processor running all night (with no jobs) and the next morning I try to create a job, the job becomes "active" but never actually has any work done on it. If I add two jobs, the first one becomes active but nothing ever happens to it, and the second is always pending until I restart the processor.

Is there anything I can do about this?

bull will not work if there's any '.' (dot) in the queue name

The code below does not work:

var myQueue = Queue('bla balba. balbal. blala', 6379, '127.0.0.1');

and if the dots are removed, it works:

var myQueue = Queue('bla balba balbal blala', 6379, '127.0.0.1');

Why? Can this be handled normally? Thanks a lot.

BRPOPLPUSH in moveJob

Hi,

We're implementing an F# version of the bull.js queue over at https://github.com/curit/oxen, and I have a question.

If RPOPLPUSH is atomic why do you use the blocking version?

  • pub/sub mechanism to notify clients that there are new jobs on the queue
  • integration tests to provoke race conditions in the queue

Possibly unhandled Error: ERR wrong number of arguments for 'set' command

I am trying to use bull but am running into a Redis exception.

Here is the program

var Queue = require('bull');

var longRunQueue = Queue('long run');


longRunQueue.process(function(job, done) {
  done()
});

longRunQueue.once('ready', function() {
  console.log("She's ready");
  longRunQueue.add({timeout: 1});
});

Here is my package.json

{
  "name": "example",
  "version": "0.0.0",
  "description": "ERROR: No README.md file found!",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": "",
  "author": "",
  "license": "BSD",
  "dependencies": {
    "bull": "0.1.4"
  }
}

and here is the exception

Possibly unhandled Error: ERR wrong number of arguments for 'set' command
    at ReplyParser.<anonymous> (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:308:31)
    at ReplyParser.EventEmitter.emit (events.js:95:17)
    at ReplyParser.send_error (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
    at ReplyParser.execute (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
    at RedisClient.on_data (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:535:27)
    at Socket.<anonymous> (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:91:14)
    at Socket.EventEmitter.emit (events.js:95:17)
    at Socket.<anonymous> (_stream_readable.js:746:14)
    at Socket.EventEmitter.emit (events.js:92:17)
    at emitReadable_ (_stream_readable.js:408:10)

I am running redis-cli 2.6.10

Is this a known issue?

cheers
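For what it's worth, the debug trace elsewhere in this tracker shows bull taking its lock with SET key value PX &lt;ms&gt;, and the extended SET options (EX/PX/NX) were only added in Redis 2.6.12, so a 2.6.10 server rejects the call with exactly this "wrong number of arguments" error. A sketch of the version gate (pure helper, hypothetical name):

```javascript
// Returns true when a Redis version string supports SET ... EX/PX/NX,
// which was introduced in Redis 2.6.12.
function supportsExtendedSet(version) {
  var parts = version.split('.').map(Number); // e.g. "2.6.10" -> [2, 6, 10]
  var min = [2, 6, 12];
  for (var i = 0; i < 3; i++) {
    if (parts[i] !== min[i]) {
      return parts[i] > min[i];
    }
  }
  return true; // exactly 2.6.12
}
```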

Monitoring Tool

Hi guys,
Do you use any redis tools to monitor queues / jobs?

BEST,
Afshin

Message Queue

I took the Message Queue example, but nothing happens.
Should the queue name passed to Queue(queueName, ...) be the same for sendQueue and receiveQueue?

Thanks btw for a great lib.

is job.remove slow?

I had an issue where I had 900K jobs in the backlog and it was very, very slow to clear. It was taking 20 seconds to process 200 jobs. When I removed the call to job.remove, I was processing 500 jobs per second.

Has anyone else experienced such an issue?

Querying jobs

I'm currently using Kue for my production app, but I'd ideally like to switch to using Bull instead.

The issue I'm having at the moment is that I need the ability to query for jobs based on an id (either the actual job id, or a custom uuid in the data).

The app currently uses express for posting new jobs and getting job status, but I'm also planning on implementing cluster support with Bull.

The app works as follows:

  1. Client posts a new job with specific data and gets returned a job ID
  2. App fires up a job with this data and pulls new data in from a different app
  3. At this point the client is polling the job status based on the ID at specific intervals
  4. When the job is done (completed or failed), the job status function returns the job based on the job ID, with the new data from the other app.

I know that I could probably just store all the data in an array as custom job objects and mark them as complete when the job completes, but I'm not sure how that would work if the app was, for example, restarted.

I'm not sure how confusing that sounds, but I'm hoping that either Bull supports querying like this or that someone could point me in the right direction on how to implement this kind of logic. :)

Queue.add() Options?

When using Queue.add() what is the purpose of the opts argument?

  opts {PlainObject} A plain object with arguments that will be passed
    to the job processing function in job.opts

I see that I can access a stringified version of the object I pass in through job.opts. Is it purposely stringified? What is the use case?

Cheers,

Gareth

Tagged releases

It would be really useful if you could start tagging each release in git. This would make it really easy to see what's changed between versions.

https://github.com/OptimalBits/bull/compare/v0.1.5...v0.1.6

A changelog would also be an extremely useful addition to the project.

Can't get active jobs

To give base info, I am working on a rest api/socket wrapper for the bull job queue using restify. I have forked the repo and have started working on it. I also plan to write a separate express based frontend for it, once i am done with the rest api, but that will be a separate repo.

I am trying to add a listener for the getJobs function you have in Queue.js. It returns fine for completed, wait, and failed, but when I send a request for active jobs (with an active job running; I delayed the done() call with setTimeout), I get the error given below.

videoQueue.process(function(job, done){
  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  setTimeout(function(){
    console.log("done!");
    done();  
  }, 20000);
});

Error:

RejectionError: WRONGTYPE Operation against a key holding the wrong kind of value

My server handler function for get jobs (Note: I am skipping start and end vars)

var url = '/' + self.queue.name +  "/:type";

    self.server.get(url, function (req, res, next) {

        var type = req.params.type;

        if(self.debug){
            console.log("GET request received with params " + JSON.stringify(req.params));
        }

        var jobs = self.queue.getJobs(type).then(function(jobs){
            if(jobs == undefined)
            {
                jobs = [];  
            }

            //Make the jobs array serialize
            var jobsArray = [];
            for(var i=0;i<jobs.length;i++){
                jobsArray.push({jobId: jobs[i].jobId, paused: jobs[i].queue.paused, data: jobs[i].data, opts: jobs[i].opts, progress: jobs[i]._progress});
            }

            if(self.debug){
                console.log(jobsArray);
            }
            res.send(jobsArray);
            return next();

        }, function(err){
            if(self.debug){
                console.log(self.queue.name + " received a new job request via PUT");
                console.log("An error occurred with adding new job. Error: " + err);
            }
            res.writeHead(200, {'Content-Type': 'application/json; charset=utf-8'});
            res.end(JSON.stringify({message:"An error occurred. Check console for error message."}));                

            return next();
        });
    });

Thanks!

Regular exceptions

Hi Guys,
I have regular (though infrequent) exceptions in one of my modules that uses the pause & resume methods on queues!

BEST,

/opt/opxi2/node_modules/bull/node_modules/redis/index.js:582
            throw err;
                  ^
TypeError: Object #<Object> has no method 'emit'
    at /opt/opxi2/node_modules/bull/lib/queue.js:94:13
    at /opt/opxi2/node_modules/bull/node_modules/redis/index.js:981:13
    at try_callback (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:579:9)
    at RedisClient.return_reply (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:664:13)
    at HiredisReplyParser.<anonymous> (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:312:14)
    at HiredisReplyParser.emit (events.js:95:17)
    at HiredisReplyParser.execute (/opt/opxi2/node_modules/bull/node_modules/redis/lib/parser/hiredis.js:43:18)
    at RedisClient.on_data (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:535:27)
    at Socket.<anonymous> (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:91:14)
    at Socket.emit (events.js:95:17)

How can I use the count function?

Moin,

var queue1 = Queue('quue1', 6379, '127.0.0.1');

console.log(queue1.count());
returns

[object Object]

What is wrong with my code?

thx for your answer ;)

Sven
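count() returns a promise (that is the [object Object] being printed), so the value has to be read in a .then callback. A minimal sketch:

```javascript
// queue.count() resolves asynchronously with the number of jobs, so
// console.log(queue.count()) prints the promise object itself.
function logCount(queue) {
  return queue.count().then(function (count) {
    console.log('jobs in queue:', count);
    return count;
  });
}

// usage: logCount(queue1);
```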

How do jobs end up on the paused list?

I was reading through the code and I can't find where jobs are getting pushed onto the paused queue.

I ran into this because I don't understand why you return the length of the longest list in Queue#count

Queue.prototype.count = function(){
  var multi = this.multi();
  multi.llen(this.toKey('wait'));
  multi.llen(this.toKey('paused'));

  return multi.execAsync().then(function(res){
    return Math.max.apply(Math, res);
  });
}

I'd expect this to be the length of paused plus the length of wait, but since it's unclear to me how jobs end up on the wait list, I thought I'd just ask.

Is example in README.md correct?

I'm new to bull and uncertain about one of the examples. Under Useful Patterns in the README.md, the Message Queue example defines a send queue and a receive queue using two different strings for the queue names. The surrounding text implies that the code sends and receives on the same queue, so the example is confusing unless both queues are meant to use the same name.

Should we close queue after adding jobs?

I am new to bull so forgive me if I miss something obvious.

To start playing around with bull, I created following test.js:

var Queue = require('bull');
var queue = Queue('test');
queue.add({ a:1 });

Run node test.js and notice that the script doesn't exit. I assume the second line is waiting for Redis; but since I am using it as a client here (to add a job), would it be a good idea to close the queue? Say:

queue.add({ a:1 }).then(function() {
  queue.close();
});

Jobs without processors complete automatically

Consider the following code:

var Queue = require('bull'),
    queue = new Queue('test', 6379, '127.0.0.1');

queue.on('completed', function(job){
  console.log('[Job#%d] Complete', job.jobId)
})
.on('failed', function(job, err){
  console.log('[Job#%d] Failed', job.jobId)
})
.on('progress', function(job, progress){
  console.log('[Job#%d] %d%', job.jobId, progress)
})

queue.createJob('monkey');

_Expected output:_ None
_Output:_

» node server.js
[Job#1] Complete

Receiving a REDIS error when processing...

Receiving this error:

Possibly unhandled Error: ERR wrong number of arguments for 'set' command
    at ReplyParser.<anonymous> (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:308:31)
    at ReplyParser.EventEmitter.emit (events.js:103:17)
    at ReplyParser.send_error (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
    at ReplyParser.execute (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
    at RedisClient.on_data (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:535:27)
    at Socket.<anonymous> (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:91:14)
    at Socket.EventEmitter.emit (events.js:103:17)
    at readableAddChunk (_stream_readable.js:156:16)
    at Socket.Readable.push (_stream_readable.js:123:10)
    at TCP.onread (net.js:508:20)

Adding to queue:

ImportFromS3Queue.add({msg: "test"});

Processing queue:

ImportFromS3Queue.process(function(job, done){

    console.log("ImportFromS3Job :: Processing Job", job.jobId, job.data);

    job.progress(50);

    done();
});

Package.json

{
  "name": "MyProject",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "bull": "~0.1.4",
    "nconf": "~0.6.9",
    "mongoose": "~3.8.4"
  }
}

Using RedisToGo.com with Redis version: 2.4.17

License conflict, Readme vs package.json

Another thing I noticed was that in package.json the license is BSD, while in the README it is stated as MIT.

I'm not an expert in the legal implications of a situation like this, but it should be cleared up to only use one license.

want 'resume' example

Can you please show a 'resume' example in the README? Can a queue be resumed after Ubuntu shuts down and restarts? Thank you.
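A minimal sketch, assuming the pause()/resume() queue methods mentioned in other issues here exist and return promises:

```javascript
// Resume a queue (e.g. after a host restart); jobs left on the wait
// list are picked up again once processing resumes.
function resumeQueue(queue) {
  return queue.resume().then(function () {
    console.log('queue resumed');
  });
}
```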

What happens to failed jobs?

Hello there,

Nice package! What happens when a job fails? Will it go back to the queue as the next job, or will it fail permanently?

Thx.

Process concurrently

Add a way to specify the max number of concurrent handlers allowed. It's always 1 now, as I understand it.

More example using promise apis in documentation

I see bull makes use of promises (bluebird) extensively.

This can be confusing for devs (including me) who are not familiar with the pattern. A bit of example code using a promise chain would be helpful:

queue.add({ foo: 'bar' }).then(function(job) {
  // do something
}, function(err){
  // handle err
});

Job Priority

Hi there,
Is there a schedule for implementing priority feature on jobs?

Best!

Returning data on job completion

What is the good reason that neither you nor Kue supports returning data in done()?

It's a pain in the ass needing to persist a response through another mechanism.

topics

We (myself and @albertjan) use bull (and oxen; a bull implementation in f#) and would like to add support for "topics". With "topic" I mean an identifier for a grouping of queues I can add a job to. Each queue would have the same semantics as it currently does.

We were thinking along the following lines:

  • two Queue constructors
    1. as it is now: Queue(queuename, port, host, opts)
    2. Queue(topicname, queuename, port, host, opts)
  • otherwise the api stays the same
  • queues in redis can either have bull:queuename:wait etc. identifiers or bull:topicname:queuename:wait etc. identifiers
  • jobs can have either bull:queuename:id or bull:topicname:queuename:id identifiers; this means copying the jobs themselves instead of referencing the same job in all queues in the topic, as otherwise we could get into trouble updating a job's progress.
  • a queue.add() where the queue was constructed with ii. will add the job to all queues that have the same topicname; however, if the Queue instance only adds jobs, an unused wait set would be created. This could be prevented by constructing a queue with i. and adding a topic: true option to the opts in Queue.add; the queuename parameter would then be used as the topicname.
  • if a topic exists, but the queue does not, then the queuename will be added to a bull:topicname:queues set of queue ids. Normal queue semantics apply, no existing jobs are copied.

Does anyone have similar use-cases? What would be requirements? Are we missing anything?

We intend to write an implementation and submit a pull request to bull and to oxen.

@manast would you consider accepting such a pull request?

Delayed jobs

It's occasionally a requirement that a job not be processed for at least N seconds.

There are two obvious API methods for doing this:

  1. Addition of "delay" to the job options accepted by Queue.add / Job.create
  2. Addition of a "delay" method to the Job class

(For Kue, the second option makes sense because Job objects can be used from within their own processor, so failed jobs can have a delay before retrying. With Bull it doesn't make much sense, i think?)

How to implement this? Essentially, delayed jobs are put into a ZSET with the score being the time after which they should be processed. Then the system polls ZRANGEBYSCORE (once Queue.process is called, or once Queue.processDelayed() or something like that is called) from 0 to now, and puts those jobs onto the active queue as before.

If you want me to hammer out a PR once we have discussed the API choices a bit then I'd be happy to - something with a bit more care than Kue would be great!

Of course, it might also be a nice distinct library / extension for bull that can be added distinctly.
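The ZSET mechanics described above can be sketched in memory (a model of the proposal, not bull code): jobs are stored with a wake-up timestamp as the score, and a poll moves everything that is due back onto the active queue.

```javascript
// In-memory model of the delayed-jobs ZSET proposal.
function DelayedSet() {
  this.entries = []; // { score: wakeUpAtMs, job: ... }
}

// ZADD equivalent: schedule a job to run no earlier than now + delayMs.
DelayedSet.prototype.add = function (job, delayMs, nowMs) {
  this.entries.push({ score: nowMs + delayMs, job: job });
};

// ZRANGEBYSCORE 0 now equivalent: pop every job whose time has come.
DelayedSet.prototype.popDue = function (nowMs) {
  var due = this.entries.filter(function (e) { return e.score <= nowMs; });
  this.entries = this.entries.filter(function (e) { return e.score > nowMs; });
  return due.map(function (e) { return e.job; });
};
```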

Processing Concurrency

Support for processing multiple jobs at the same time would be nice!

As a workaround, is it safe to run multiple instances with the same database?

Selecting db

We are using each Redis instance for various purposes, divided by using the SELECT command.
With kue, we were able to SELECT by overriding the kue.redis.createClient function.

Currently, manually calling queue.client.select() and queue.bclient.select() seems the only way.

It would be very helpful if bull supports this pattern.
Adding an argument for selecting the database to the Queue() initializer, or making createClient overridable, would do.

Thanks.
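The manual workaround described above, as a sketch: issue SELECT on both of bull's Redis connections after constructing the queue. That queue.client and queue.bclient exist, and that node_redis exposes client.select(db), are assumptions about the two libraries' internals.

```javascript
// Switch both of bull's connections to a given database index.
function useDb(queue, db) {
  queue.client.select(db);
  queue.bclient.select(db);
}
```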

Add method needs a callback!

Hi,
In brief: is it possible to add a callback function to the add method?

Actually, I ran into a situation where I needed to add one job's data to several queues with slight modifications. I found that modifying the data after a previous add invocation could change the job added earlier! I hope my explanation is clear :)

BEST,
-- Afshin
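A hedged workaround for the mutation problem described above, assuming add() keeps a reference to the data object until it is serialized: hand each queue its own deep copy, so later edits to the original cannot leak into a job that has not been written yet.

```javascript
// Add the same payload to several queues, deep-copying it per queue so
// subsequent mutations of `data` don't affect jobs already queued.
function addToAll(queues, data) {
  return queues.map(function (queue) {
    return queue.add(JSON.parse(JSON.stringify(data)));
  });
}
```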

Unused parameters in Queue.js

The getWaiting, getActive, etc. methods all have start and end parameters, but these aren't passed on to Queue.getJobs.

Concurrency: multi server cluster

Hi,

I have a setup with multiple worker servers, each running a worker-app instance on each of its cores (using pm2). They take jobs from a bull queue; there are two types of jobs.

Besides this I have another application that creates the jobs.

But when I restart a server, it starts working on a job that is already being worked on.
Is this expected, or am I doing it wrong? :)

Queue name screws up processing?

Depending on the queue name, only the second job gets processed (or two jobs, if the number of added jobs is even). See the code below.

"use strict";

var Queue = require('bull');

var videoQueue = Queue('video transcoding', 6379, '127.0.0.1');

videoQueue.process(function(job, done){
    console.log('video job %d started.', job.jobId);
    done();
});

videoQueue.on('completed', function(job) {
    console.log('video job %d completed.', job.jobId);
});

videoQueue.add({video: 'http://example.com/video1.mov'});
videoQueue.add({video: 'http://example.com/video1.mov'});
videoQueue.add({video: 'http://example.com/video1.mov'});

Outputs:

video job 2 started.
video job 2 completed.

When changing the queue name from video transcoding to just video, I get the following output (after flushing Redis):

video job 1 started.
video job 1 completed.
video job 2 started.
video job 2 completed.
video job 3 started.
video job 3 completed.

To add some more weird behavior: when I add a fourth job to the queue and run it as video transcoding, I get:

video job 5 started.
video job 5 completed.
video job 7 started.
video job 7 completed.

Same with 6 jobs, but not with 5 (only one processed again).

Note that videoQueue.getCompleted() always lists all entries, independently of the queue name.

Running on Ubuntu 14.04, Node v0.10.29 and Redis 2:2.8.4-2. I wasn't able to reproduce this on my Win8x64 box running Node v0.10.28 and Redis 2.8.12

Comments?

Stop processing jobs

queue.stop() and queue.abort() would be nice!

stop() = won't start more jobs
abort() = aborts current jobs, i.e. sets their status to aborted

Can I add more context information to the job?

I was wondering whether I could add some context information to a job that would allow me to restart and better handle restarted jobs. For instance, it would be handy if I could add data to job.data that would be persisted in Redis along with the job.

Queue Events are emitted only on queue consumer

Hi,
You did a great job with the bull library, congrats.

I have tested it a little, and I found that queue events ('completed', 'failed', etc.) for a message are only emitted in the process that is processing that message. This is code where it works:

var Queue = require('bull');
var messageQueue = Queue('message', 6379, '127.0.0.1');

// Processing
messageQueue.process(function(job, done){
    setTimeout(done, 1000);
});

// Events
messageQueue.on('completed', function(job){
    console.log("Completed " + JSON.stringify(job.data));
});

// Producer
for(var i = 0; i < 50; i++) {
    messageQueue.add({count: i});
}

In this code I receive a 'completed' event for each completed job. But if I move the event listener to another process that only listens to the message queue events, then no events are emitted in that process.

var Queue = require('bull');
var messageQueue = Queue('message', 6379, '127.0.0.1');

// Processing (not here, it is in another process)

// Events
messageQueue.on('completed', function(job){
    console.log("Completed " + JSON.stringify(job.data));   // NO EVENT!
});

Is this the flow you intended?
It doesn't make sense to me that only the process that is processing a message can receive events from the queue.

uncaught error not handled

var Queue = require('bull');

var videoQueue = Queue('video transcoding', 6379, '127.0.0.1');

videoQueue.process(function(job, done){
  // job.data contains the custom data passed when the job was created
  // job.jobId contains id of this job.

  console.log('start process');
  console.log('job: '+JSON.stringify(job.data))

    Simplepromise(job, function() {
        console.log('job done simplepromise');  
        done();
    });

});


function Simplepromise(job, cb) {
    console.log('start simplepromise');
    // intentional uncaught error
    ERROR1
    throw (Error('some unexpected error'));
    cb();
}

videoQueue.on('completed', function(job){
  console.log('job completed')
})

videoQueue.add({video: '1'});
videoQueue.add({video: '2'});

Add support for keeping only a given number of completed or failed tasks.

To save memory, it should be possible to restrict the max number of completed or failed jobs kept in Redis.
This can easily be achieved using a ZSET with a timestamp score instead of the standard SETs. As long as the clocks of all queue instances are more or less synchronized, everything should work fine.
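The proposal maps naturally onto sorted-set operations; here is an in-memory sketch of the trimming logic (names hypothetical):

```javascript
// Model of the proposal: completed job ids scored by completion time;
// trimming keeps only the `max` newest entries.
function trimCompleted(entries, max) {
  // entries: [{ id: '7', ts: 1407200000000 }, ...]
  return entries
    .slice()
    .sort(function (a, b) { return b.ts - a.ts; }) // newest first
    .slice(0, max);
}

// Against real Redis this would be roughly:
//   ZADD bull:<queue>:completed <timestamp> <jobId>     on completion
//   ZREMRANGEBYRANK bull:<queue>:completed 0 -(max+1)   to drop the oldest
```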
