optimalbits / bull
Premium Queue package for handling distributed jobs and messages in NodeJS.
License: Other
Currently, bull doesn't support adding jobs to the other end of the queue (RPUSH instead of LPUSH, to facilitate a LIFO queue).
I've written a simple solution (adding a 'lifo' option to the Queue.add method), but you may want to consider another solution.
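For illustration, here is a minimal in-memory sketch of what such a 'lifo' option would do. Bull LPUSHes new jobs onto the wait list and takes jobs from the tail, so RPUSHing makes the newest job run next; the function names here are hypothetical helpers, not Bull's API.

```javascript
// In-memory model of the wait list: index 0 is the Redis list's left end.
// addJob mirrors LPUSH (FIFO, the default) vs RPUSH (the proposed lifo
// option); nextJob mirrors BRPOPLPUSH taking from the right end.
function addJob(list, job, opts) {
  if (opts && opts.lifo) {
    list.push(job);      // ~RPUSH: lands at the tail, processed first
  } else {
    list.unshift(job);   // ~LPUSH: default FIFO behavior
  }
}

function nextJob(list) {
  return list.pop();     // ~BRPOPLPUSH: always takes from the tail
}
```

With this model, a job added with the lifo flag jumps ahead of every job that is already waiting.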
I've just got Bull set up on Heroku with Redistogo/RedisCloud, but I've had to add a couple of lines into your queue.js file to do so. I'm wondering if I needed to or not.
Looking at most Redis service documentation, it's fairly common that client.auth() needs to be called after the client is created, e.g.:
https://devcenter.heroku.com/articles/redistogo#using-with-node
From what I can see there isn't a way to achieve this with Bull out of the box, or am I missing something? I've looked at the extra options that can be passed into the Redis client creation here: https://github.com/mranney/node_redis but it doesn't look like it can be passed as an extra option.
thanks for your work on this btw, just what I needed.
I have about 20K zombie jobs that look something like this:
1) "data"
2) "{\"url\":\"http://www.xxxxx-in.com.au/show/xxxxxx/videos/xxxxxxxx/\",\"plugin\":\"javascript\",\"timestamp\":\"2014-08-05T04:19:06.935Z\",\"timezoneOffset\":240,\"sessionId\":\"3323473c-9a2e-21ad-5cd4-2127c4f51863\",\"publisherId\":\"8b746d3b-b05e-45b4-a8ae-bfbeb097affe\",\"mediaId\":\"3706240057001\",\"mediaDuration\":616.72,\"events\":[{\"timestamp\":\"2014-08-05T04:19:06.934Z\",\"event\":\"PROGRESS\",\"fromPosition\":460.6,\"toPosition\":520.6}],\"createdAt\":\"2014-08-05T04:19:06.542Z\",\"ipAddress\":\"24.114.58.95\",\"remoteAddress\":\"6ec79e53f022f8dc51490289a0d623a3\",\"userAgent\":{\"browser\":\"Safari\",\"version\":\"7.0\",\"os\":\"OS X\",\"platform\":\"iPhone\"},\"deviceId\":\"58a8d01f6b00e9d5d0f64872388b1ebf\"}"
3) "progress"
4) "0"
5) "opts"
6) "{}"
redis 127.0.0.1:6379> hget bull:oztam.collect.incoming:2702190 progress
"0"
When the server restarts these jobs are not being processed. How should these zombie jobs be handled?
Hi, I am seeing this error in my logs. Bull is processing jobs just fine now (it seemed like it wasn't before, but that might have been an issue with my mail server that self-resolved). I decided to inject a debug line into bull's redis module to help investigate:
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'info', args: [ [Function] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'info', args: [ [Function] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'select', args: [ 0 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'lrange',
args: [ 'bull:Notifications:active', 0, -1 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'select', args: [ 0 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'brpoplpush',
args:
[ 'bull:Notifications:wait',
'bull:Notifications:active',
0,
[Function: PromiseResolver$_callback] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'hgetall',
args:
[ 'bull:Notifications:3',
[Function: PromiseResolver$_callback] ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'set',
args:
[ 'bull:Notifications:3:lock',
'ce549b34-c6e6-4109-8efd-5b857171b3d4',
'PX',
5000,
[Function: PromiseResolver$_callback] ] }
Possibly unhandled Error: ERR wrong number of arguments for 'set' command
at ReplyParser.<anonymous> (/home/app/notification/node_modules/bull/node_modules/redis/index.js:308:31)
at ReplyParser.EventEmitter.emit (events.js:95:17)
at ReplyParser.send_error (/home/app/notification/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
at ReplyParser.execute (/home/app/notification/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
at RedisClient.on_data (/home/app/notification/node_modules/bull/node_modules/redis/index.js:535:27)
at Socket.<anonymous> (/home/app/notification/node_modules/bull/node_modules/redis/index.js:91:14)
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'MULTI', args: [] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'lrem', args: [ 'bull:Notifications:active', 0, 3 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'sadd', args: [ 'bull:Notifications:completed', 3 ] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'EXEC', args: [] }
SEND COMMAND DEBUG LINE 723 in RedisClient.prototype.send_command { command: 'brpoplpush',
Thanks
First of all, thanks for your work on bull. It's small, clean and the code is self-explanatory. Just what we want from such a critical piece of software :).
Anyway, it seems that the queue only checks for stalled jobs at startup. For us, this means that crashed jobs will only get unstuck after a worker restarts.
I'm not sure I follow you here. Can you elaborate on your intent?
I am processing around 130 jobs a second, and when I look at the Redis database I see a lot of keys like this.
Does bull clean up after itself, or do I need to manage aspects of the cleanup process?
Hi,
All jobs called jobDone() without an err in the callback, but out of ca. 2,500 jobs, 13 are marked as failed.
How can I solve this problem?
Thx,
Sven
Whenever I leave my processor running all night (with no jobs) and the next morning I try to create a job, the job becomes "active" but never actually has any work done on it. If I add two jobs, the first one becomes active but nothing ever happens to it, and the second is always pending until I restart the processor.
Is there anything I can do about this?
The code below will not work:
var myQueue = Queue('bla balba. balbal. blala', 6379, '127.0.0.1');
and if I remove the dot characters, it works:
var myQueue = Queue('bla balba balbal blala', 6379, '127.0.0.1');
Why? Can this be handled normally? Thanks a lot.
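Until the underlying name handling is fixed, one hedged workaround is to normalize queue names before passing them to Queue(); safeQueueName is a hypothetical helper, not part of bull:

```javascript
// Replace runs of characters outside [A-Za-z0-9_-] with a single dash,
// so the name cannot interfere with bull's colon-delimited key scheme.
function safeQueueName(name) {
  return name.replace(/[^A-Za-z0-9_-]+/g, '-');
}
```

Usage: var myQueue = Queue(safeQueueName('bla balba. balbal. blala'), 6379, '127.0.0.1');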
Hi,
We're implementing an F# version of the bull.js queue over at https://github.com/curit/oxen, and I have a question.
If RPOPLPUSH is atomic, why do you use the blocking version?
There are several places in the code where run is called without handling a possible returned error. It actually returns a promise that could generate an error event on the queue instance.
I am trying to use bull but am coming across a redis exception
Here is the program
var Queue = require('bull');
var longRunQueue = Queue('long run');

longRunQueue.process(function(job, done) {
  done();
});

longRunQueue.once('ready', function() {
  console.log("She's ready");
  longRunQueue.add({timeout: 1});
});
Here is my package.json
{
  "name": "example",
  "version": "0.0.0",
  "description": "ERROR: No README.md file found!",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": "",
  "author": "",
  "license": "BSD",
  "dependencies": {
    "bull": "0.1.4"
  }
}
and here is the exception
Possibly unhandled Error: ERR wrong number of arguments for 'set' command
at ReplyParser.<anonymous> (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:308:31)
at ReplyParser.EventEmitter.emit (events.js:95:17)
at ReplyParser.send_error (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
at ReplyParser.execute (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
at RedisClient.on_data (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:535:27)
at Socket.<anonymous> (/Users/jima/projects/bull/example/node_modules/bull/node_modules/redis/index.js:91:14)
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
I am running redis-cli 2.6.10
Is this a known issue?
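A likely cause worth checking: the SET key value PX milliseconds form that bull uses for job locks was only added in Redis server 2.6.12, and older servers reject the extra arguments with exactly this "wrong number of arguments" error. A small version check (pure string comparison, no Redis needed; the version string comes from INFO server):

```javascript
// Returns true when a redis_version string is at least 2.6.12, the first
// release supporting SET with the PX/EX/NX options.
function supportsSetWithPx(redisVersion) {
  var parts = redisVersion.split('.').map(Number);
  var min = [2, 6, 12];
  for (var i = 0; i < 3; i++) {
    if (parts[i] > min[i]) return true;
    if (parts[i] < min[i]) return false;
  }
  return true;
}
```

Note that the 2.6.10 reported here is below that threshold, which would explain the error.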
cheers
Hi guys,
Do you use any redis tools to monitor queues / jobs?
BEST,
Afshin
I took the Message Queue example, but nothing happens...
Should the queueName passed to Queue() be the same for sendQueue and receiveQueue?
Thnx btw for a great lib.
I had an issue where I had 900K jobs in the backlog and it was very, very slow to clear. It was taking 20 seconds to process 200 jobs. When I removed the call to job.remove I was processing 500 jobs per second.
Has anyone else experienced such an issue?
I'm currently using Kue for my production app, but I'd ideally like to switch to using Bull instead.
The issue I'm having at the moment is that I need the ability to query for jobs based on an id (either the actual job id, or a custom uuid in the data).
The app currently uses express for posting new jobs and getting job status, but I'm also planning on implementing cluster support with Bull.
The app works as follows:
I know that I could probably just store all the data in an array as custom job objects and mark them as complete when the job completes, but I'm not sure how that would work if the app was, for example, restarted.
I'm not sure how confusing that sounds, but I'm hoping that either Bull supports querying like this or that someone could point me in the right direction on how to implement this kind of logic. :)
When using Queue.add(), what is the purpose of the opts argument?
opts {PlainObject} A plain object with arguments that will be passed
to the job processing function in job.opts
I see I can access a stringified version of the object I pass in through job.opts.
- Is this purposely stringified? What is the use case?
Cheers,
Gareth
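The stringified opts are consistent with how the job is stored: the hash dump near the top of this page shows every field, including opts, persisted as a string, because Redis hashes only hold strings. A round-trip sketch (the attempts option is a hypothetical example, not a documented bull option):

```javascript
// Job fields live in a Redis hash, so objects survive only as JSON text.
var opts = { attempts: 3 };          // hypothetical option object
var stored = JSON.stringify(opts);   // the string form that gets persisted
var restored = JSON.parse(stored);   // what a consumer must do to use it
```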
It would be really useful if you could start tagging each release in git. This would make it really easy to see what's changed between versions.
https://github.com/OptimalBits/bull/compare/v0.1.5...v0.1.6
A changelog would also make an extremely useful addition to the project.
To give some background: I am working on a REST API/socket wrapper for the bull job queue using restify. I have forked the repo and have started working on it. I also plan to write a separate Express-based frontend for it once I am done with the REST API, but that will be a separate repo.
I am trying to add a listener for the getJobs function you have in Queue.js. It returns fine for completed, wait, and failed, but when I send a request for active jobs (there is an active job running; I have put a delay on the done() call using setTimeout), I get the error given below.
videoQueue.process(function(job, done){
  // transcode video asynchronously and report progress
  job.progress(42);
  // call done when finished
  setTimeout(function(){
    console.log("done!");
    done();
  }, 20000);
});
Error:
RejectionError: WRONGTYPE Operation against a key holding the wrong kind of value
My server handler function for get jobs (note: I am skipping the start and end vars):
var url = '/' + self.queue.name + "/:type";
self.server.get(url, function (req, res, next) {
  var type = req.params.type;
  if (self.debug) {
    console.log("GET request received with params " + JSON.stringify(req.params));
  }
  self.queue.getJobs(type).then(function (jobs) {
    if (jobs == undefined) {
      jobs = [];
    }
    // Make the jobs array serializable
    var jobsArray = [];
    for (var i = 0; i < jobs.length; i++) {
      jobsArray.push({jobId: jobs[i].jobId, paused: jobs[i].queue.paused, data: jobs[i].data, opts: jobs[i].opts, progress: jobs[i]._progress});
    }
    if (self.debug) {
      console.log(jobsArray);
    }
    res.send(jobsArray);
    return next();
  }, function (err) {
    if (self.debug) {
      console.log(self.queue.name + " received a GET jobs request");
      console.log("An error occurred while fetching jobs. Error: " + err);
    }
    res.writeHead(500, {'Content-Type': 'application/json; charset=utf-8'});
    res.end(JSON.stringify({message: "An error occurred. Check console for error message."}));
    return next();
  });
});
Thanks!
Hi Guys,
I have occasional exceptions in one of my modules that uses the pause & resume methods on queues!
BEST,
/opt/opxi2/node_modules/bull/node_modules/redis/index.js:582
throw err;
^
TypeError: Object #<Object> has no method 'emit'
at /opt/opxi2/node_modules/bull/lib/queue.js:94:13
at /opt/opxi2/node_modules/bull/node_modules/redis/index.js:981:13
at try_callback (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:579:9)
at RedisClient.return_reply (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:664:13)
at HiredisReplyParser.<anonymous> (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:312:14)
at HiredisReplyParser.emit (events.js:95:17)
at HiredisReplyParser.execute (/opt/opxi2/node_modules/bull/node_modules/redis/lib/parser/hiredis.js:43:18)
at RedisClient.on_data (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:535:27)
at Socket.<anonymous> (/opt/opxi2/node_modules/bull/node_modules/redis/index.js:91:14)
at Socket.emit (events.js:95:17)
Hi,
var queue1 = Queue('quue1', 6379, '127.0.0.1');
console.log(queue1.count());
returns
[object Object]
What is wrong with my code?
Thx for your answer ;)
Sven
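count() returns a promise (its implementation, shown in the next issue, ends with execAsync().then(...)), so logging it directly prints the promise object rather than the number; the resolved value has to be awaited. A runnable sketch with count() stubbed so no Redis is needed:

```javascript
// Stub standing in for queue1.count(); the real one resolves to the
// queue length instead of a fixed number.
function count() {
  return Promise.resolve(42);
}

count().then(function(n) {
  console.log(n); // 42 — the resolved length, not [object Object]
});
```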
I was reading through the code and I can't find where jobs get pushed onto the paused queue.
I ran into this because I don't understand why you return the length of the longest list in Queue#count:
Queue.prototype.count = function(){
  var multi = this.multi();
  multi.llen(this.toKey('wait'));
  multi.llen(this.toKey('paused'));
  return multi.execAsync().then(function(res){
    return Math.max.apply(Math, res);
  });
};
I'd expect this to be the length of paused plus the length of wait. But since it's unclear to me how jobs end up on these lists, I thought I'd just ask.
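For comparison, a count that sums both lists instead of taking the longest one; execAsync is stubbed here so the sketch runs without Redis (a real method would keep the same multi/llen setup as Queue.prototype.count):

```javascript
// res mirrors what multi.execAsync() resolves to: [waitLen, pausedLen].
function countAll(execAsync) {
  return execAsync().then(function(res) {
    return res[0] + res[1]; // sum, rather than Math.max over the lists
  });
}

// Stubbed multi result for illustration: 3 waiting + 2 paused.
countAll(function() { return Promise.resolve([3, 2]); })
  .then(function(n) { console.log(n); }); // prints 5
```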
I'm new to bull and uncertain about one of the examples. Under Useful Patterns in the README.md, in the Message Queue example, a send and a receive queue are defined, using two different strings for the queue names. Yet the sample code appears to send and receive on the same queue. I find this example confusing, unless it is intentionally meant to send and receive on a single queue.
I am new to bull so forgive me if I miss something obvious.
To start playing around with bull, I created the following test.js:
var Queue = require('bull');
var queue = Queue('test');
queue.add({ a:1 });
Run node test.js and notice the script doesn't exit. I assume the second line is waiting for Redis; but since I am using it as a client here (to add a job), would it be a good idea to close the queue? Say:
queue.add({ a:1 }).then(function() {
queue.close();
});
Hi,
when (and by what) does bull delete completed jobs?
Regards,
Sven
Consider the following code:
var Queue = require('bull'),
    queue = new Queue('test', 6379, '127.0.0.1');

queue.on('completed', function(job){
  console.log('[Job#%d] Complete', job.jobId);
})
.on('failed', function(job, err){
  console.log('[Job#%d] Failed', job.jobId);
})
.on('progress', function(job, progress){
  console.log('[Job#%d] %d%', job.jobId, progress);
});
queue.createJob('monkey');
_Expected output:_ None
_Output:_
» node server.js
[Job#1] Complete
Receiving this error:
Possibly unhandled Error: ERR wrong number of arguments for 'set' command
at ReplyParser.<anonymous> (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:308:31)
at ReplyParser.EventEmitter.emit (events.js:103:17)
at ReplyParser.send_error (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/lib/parser/javascript.js:296:10)
at ReplyParser.execute (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/lib/parser/javascript.js:181:22)
at RedisClient.on_data (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:535:27)
at Socket.<anonymous> (/Users/projName/Dropbox/Sites/projNameV2/projNameProcessor/node_modules/bull/node_modules/redis/index.js:91:14)
at Socket.EventEmitter.emit (events.js:103:17)
at readableAddChunk (_stream_readable.js:156:16)
at Socket.Readable.push (_stream_readable.js:123:10)
at TCP.onread (net.js:508:20)
Adding to queue:
ImportFromS3Queue.add({msg: "test"});
Processing queue:
ImportFromS3Queue.process(function(job, done){
  console.log("ImportFromS3Job :: Processing Job", job.jobId, job.data);
  job.progress(50);
  done();
});
package.json:
{
  "name": "MyProject",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "bull": "~0.1.4",
    "nconf": "~0.6.9",
    "mongoose": "~3.8.4"
  }
}
Using RedisToGo.com with Redis version: 2.4.17
Another thing I noticed: in package.json the license is BSD, while in the README the license is stated as MIT.
I'm not an expert in the legal implications of a situation like this, but it should be cleared up to use only one license.
Can you please show a 'resume' example in the README? Can a queue be resumed after Ubuntu shuts down and restarts? Thank you.
Hello there,
Nice package! What happens when a job fails? Will it go back into the queue as the next job, or will it fail permanently?
Thx.
Add a way to specify the max number of concurrent handlers allowed. It's currently always 1, as I understand it.
I see bull makes use of promises (bluebird) extensively.
This can be confusing for devs (including me) who are not familiar with the pattern.
Showing a bit of example code using a promise chain would be helpful:
queue.add({ foo: 'bar' }).then(function(job) {
  // do something
}, function(err){
  // handle err
});
Hi there,
Is there a schedule for implementing a priority feature for jobs?
Best!
Is there a good reason why neither you nor Kue support returning data in done()?
It's a pain in the ass needing to persist a response through another mechanism.
We (myself and @albertjan) use bull (and oxen, a bull implementation in F#) and would like to add support for "topics". By "topic" I mean an identifier for a grouping of queues I can add a job to. Each queue would have the same semantics as it currently does.
We were thinking along the following lines:
i. bull:queuename:wait etc. identifiers, or bull:topicname:queuename:wait etc. identifiers.
ii. bull:queuename:id or bull:topicname:queuename:id identifiers; this means copying the jobs themselves instead of referencing the same job in all queues in the topic, but otherwise we could get into trouble updating the job's progress.
iii. queue.add() where the queue was constructed with ii. will add the job to all queues that have the same topicname; however, if the Queue instance only adds jobs, an unused wait set would be created. This could be prevented if you construct a queue with i. and add a topic: true option to the opts in Queue.add; the queuename parameter would then be used as the topicname.
iv. queuename will be added to a bull:topicname:queues set of queue ids. Normal queue semantics apply; no existing jobs are copied.
Does anyone have similar use-cases? What would the requirements be? Are we missing anything?
We intend to write an implementation and submit a pull request to bull and to oxen.
@manast would you consider accepting such a pull request?
Bull should support returning promises in the processor callbacks besides a simple done function.
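One way this could be retrofitted without touching bull itself, sketched here as a hypothetical adapter (wrapProcessor is not bull API): accept a handler that may return a promise and bridge it to the existing done-style processor.

```javascript
// Adapts a promise-returning handler to bull's (job, done) signature.
// Resolution maps to done(), rejection or a throw maps to done(err).
function wrapProcessor(handler) {
  return function(job, done) {
    var result;
    try {
      result = handler(job);
    } catch (err) {
      return done(err);
    }
    if (result && typeof result.then === 'function') {
      result.then(function() { done(); }, done);
    } else {
      done();
    }
  };
}
```

Usage would look like queue.process(wrapProcessor(function(job) { return doWorkAsync(job.data); })), where doWorkAsync is whatever promise-returning work the job performs.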
It's occasionally a requirement for a job to not be done for at least N seconds.
There are two obvious API methods for doing this:
(For Kue, the second option makes sense because Job objects can be used from within their own processor, so failed jobs can have a delay before retrying. With Bull it doesn't make much sense, i think?)
How to implement this? Essentially, delayed jobs are put into a ZSET with the score being the time that they should be processed after. Then, the system polls ZRANGEBYSCORE (once Queue.process is called, or once Queue.processDelayed() or something like that is called) from 0 to now, and puts those jobs onto the active queue as before.
If you want me to hammer out a PR once we have discussed the API choices a bit then I'd be happy to - something with a bit more care than Kue would be great!
Of course, it might also be a nice distinct library / extension for bull that can be added distinctly.
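The polling half of the proposal can be sketched in memory; the array of {member, score} entries stands in for the delayed ZSET, and dueJobs mirrors ZRANGEBYSCORE delayed 0 &lt;now&gt;:

```javascript
// Each entry scores a job id with the timestamp it becomes runnable.
// Returns the ids that are due, in score order, ready to be moved to
// the active queue as the proposal describes.
function dueJobs(zset, now) {
  return zset
    .filter(function(e) { return e.score <= now; })
    .sort(function(a, b) { return a.score - b.score; }) // ZSET ordering
    .map(function(e) { return e.member; });
}
```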
Support for processing multiple jobs at the same time would be nice!
As a workaround, is it safe to run multiple instances with the same database?
Hi,
I'm running redis-cli 2.6.17, and I am still receiving:
Possibly unhandled Error: ERR wrong number of arguments for 'set' command
at ReplyParser. (/home/app/critique-notification/node_modules/bull/node_modules/redis/index.js:308:31)
We are using each Redis instance for various purposes, divided by utilizing the SELECT command.
With kue, we were able to do a SELECT by overriding the kue.redis.createClient function.
Currently, manually calling queue.client.select() and queue.bclient.select() seems to be the only way.
It would be very helpful if bull supported this pattern.
Adding an argument for the database index to the Queue() initializer, or making createClient overridable, would do.
Thanks.
Hi,
In brief, is it possible to add a callback function to the add method?
Actually, I experienced a situation where I needed to add one job's data to several queues with some slight modifications. I found that my modification after the previous add invocation could change the last one! I hope my explanation is clear :)
BEST,
-- Afshin
The getWaiting, getActive, etc. methods all have start and end parameters, but these aren't passed on to Queue.getJobs.
Hi,
I have a setup with multiple worker servers, each running a worker-app instance on each of its cores (using pm2). They take jobs from a bull queue; there are two types of jobs.
Besides this, I have another application that creates the jobs.
But when I restart a server, it starts working on a job that is already being worked on.
Is this expected or am I doing it wrong :)
Depending on the queue name, only the second job gets processed (or two jobs, if number of added jobs is even). See code below.
"use strict";

var Queue = require('bull');
var videoQueue = Queue('video transcoding', 6379, '127.0.0.1');

videoQueue.process(function(job, done){
  console.log('video job %d started.', job.jobId);
  done();
});

videoQueue.on('completed', function(job) {
  console.log('video job %d completed.', job.jobId);
});

videoQueue.add({video: 'http://example.com/video1.mov'});
videoQueue.add({video: 'http://example.com/video1.mov'});
videoQueue.add({video: 'http://example.com/video1.mov'});
Outputs:
video job 2 started.
video job 2 completed.
When changing the queue name from video transcoding to just video, I'm getting the following output (after flushing Redis):
video job 1 started.
video job 1 completed.
video job 2 started.
video job 2 completed.
video job 3 started.
video job 3 completed.
To add some more weird behavior: when I add a fourth job to the queue and run it as video transcoding, I'm getting:
video job 5 started.
video job 5 completed.
video job 7 started.
video job 7 completed.
Same with 6 jobs, but not with 5 (only one processed again).
Note that videoQueue.getCompleted() always lists all entries, independently of the queue title.
Running on Ubuntu 14.04, Node v0.10.29 and Redis 2:2.8.4-2. I wasn't able to reproduce this on my Win8x64 box running Node v0.10.28 and Redis 2.8.12
Comments?
queue.stop() and queue.abort() would be nice!
stop() = won't start more jobs
abort() = aborts current jobs, i.e. setting the status to aborted
I was wondering whether I could add some context information to a job, which would allow me to restart and better handle restarted jobs. For instance, it would be handy if I could add data to job.data that would be persisted in Redis along with the job.
Hi,
You did a great job with the bull library, congrats.
I have tested it a little, and I found that queue events ('completed', 'failed', etc.) for a message are only emitted in the process that is processing that message. This is code where it works:
var Queue = require('bull');
var messageQueue = Queue('message', 6379, '127.0.0.1');

// Processing
messageQueue.process(function(job, done){
  setTimeout(done, 1000);
});

// Events
messageQueue.on('completed', function(job){
  console.log("Completed " + JSON.stringify(job.data));
});

// Producer
for(var i = 0; i < 50; i++) {
  messageQueue.add({count: i});
}
In this code I receive the 'completed' event for each completed job. But if I move the event listener to another process, which only listens for message queue events, then no events are emitted in that process:
var Queue = require('bull');
var messageQueue = Queue('message', 6379, '127.0.0.1');

// Processing (not here, it is in another process)

// Events
messageQueue.on('completed', function(job){
  console.log("Completed " + JSON.stringify(job.data)); // NO EVENT!
});
Is this the flow you intended?
It doesn't make sense to me that only the process that is processing a message can receive events from the queue.
var Queue = require('bull');
var videoQueue = Queue('video transcoding', 6379, '127.0.0.1');

videoQueue.process(function(job, done){
  // job.data contains the custom data passed when the job was created
  // job.jobId contains id of this job.
  console.log('start process');
  console.log('job: ' + JSON.stringify(job.data));
  Simplepromise(job, function() {
    console.log('job done simplepromise');
    done();
  });
});

function Simplepromise(job, cb) {
  console.log('start simplepromise');
  // intentional uncaught error
  ERROR1
  throw (Error('some unexpected error'));
  cb();
}

videoQueue.on('completed', function(job){
  console.log('job completed');
});

videoQueue.add({video: '1'});
videoQueue.add({video: '2'});
To save memory, it should be possible to restrict the max number of saved completed or failed jobs in Redis.
This could easily be achieved by using a ZSET with a timestamp score instead of the standard SETs. As long as all the clocks of the queue instances are more or less synchronized, everything should work fine.
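A sketch of the trimming step under this proposal: entries model a timestamp-scored ZSET of completed job ids, and trimCompleted keeps only the newest N, the in-memory analogue of ZREMRANGEBYRANK completed 0 -(N+1):

```javascript
// entries: [{ member: jobId, score: completedAtTimestamp }, ...]
// Returns the maxKept entries with the highest scores (most recent),
// in ascending score order, dropping everything older.
function trimCompleted(entries, maxKept) {
  var sorted = entries.slice().sort(function(a, b) { return a.score - b.score; });
  return sorted.slice(Math.max(0, sorted.length - maxKept));
}
```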