
mubsub's People

Contributors

autopulated, daffl, ebourmalo, faleij, kof, saintedlama, scttnlsn, vladotesanovic


mubsub's Issues

Issues on starting up

I am trying to use the library in conjunction with mongoose.

When the collection does not yet exist, I get:
Caught exception: TypeError: Cannot read property '_id' of undefined

When interacting with it afterwards (once the collection has been created), I get the following:

  • when using newly created separate connection for mubsub:
Caught exception: MongoError: Unable to execute query: error processing query: ns=phone_tokens.socketmessages limit=0 skip=0
Tree: _id $gt ObjectId('55e719aa4bd427be722898e7')
Sort: {}
Proj: {}
 tailable cursor requested on non capped collection

where 55e719aa4bd427be722898e7 is the dummy record's id, so I have verified that it exists.

  • when using an existing connection:
Caught exception: MongoError: Tailable cursor doesn't support sorting

All in all it looks like I am doing something wrong, because it fails on startup for me but works for other people; I just can't figure out what exactly.

Has anyone succeeded with running it with mongoose?
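For reference, the "tailable cursor requested on non capped collection" error above typically means the collection already existed as a plain (non-capped) collection, e.g. created implicitly by mongoose, before mubsub could create its own capped one. A sketch of the usual fix, run in the mongo shell (database and collection names taken from the error above; the size in bytes is arbitrary):

```javascript
// mongo shell session (not Node): convert the existing, implicitly created
// collection to a capped one so a tailable cursor can be opened on it.
var targetDb = db.getSiblingDB('phone_tokens');
targetDb.runCommand({ convertToCapped: 'socketmessages', size: 5242880 });
```

After this, a restart of the subscribing process should let mubsub tail the collection normally.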

Cannot create connection to MongoDB version 3.4.0

I was not able to create a connection:

client = mubsub("#{env.mongodb_url}/mubsub")
channel = client.channel('alerts', {size: 10000, max: 100})

I was able to connect after the retryInterval and recreate options were removed in lib/channel.js.

Crash with "MongoError: tailable cursor requested on non capped collection"

I have an example here where, if I lose the connection with a collection, I get an error.

Here is the code:

/*jslint indent: 4, node: true, nomen:true, maxlen: 100*/
'use strict';

var DATABASE_URL = 'mongodb://127.0.0.1:27017/purple',

    util = require('util'),
    mubsub = require('mubsub'),
    dbOptions = {
        db: {
            w: 1
        },
        server: {
            auto_reconnect: true,
            poolSize: 5
        }
    },
    notificationClient,
    notificationChannel,
    channelName = 'testChannel',

    consoleLog = function (msg) {
        console.log((new Date()).toISOString() + ': ' + msg);
    },

    msChannelOnError = function msChannelOnError(err) {
        // If this is a broken cursor error, I don't need to worry about it because
        // I set the 'recreate' property when I created the channel so I know
        // this is a temporary situation.
        if (err.message === 'Mubsub: broken cursor.') {
            consoleLog('Broken cursor. Continuing.');

        } else {
            consoleLog('mubsub channel error event on channel ' +
                channelName + ', error: ' + util.inspect(err));
            throw err;
        }
    };


// Configure the notification channel.
notificationClient = mubsub(DATABASE_URL, dbOptions);

// Wait for connection.
notificationClient.on('connect', function (db) {

    notificationChannel = notificationClient.channel(channelName, { recreate: true });
    notificationChannel.on('error', msChannelOnError);

    // When everything is ready, then the initialization is done.
    notificationChannel.once('ready', function () {

        // If there is a broken cursor momentarily in the channel, for debug
        // reasons I want to be notified when it does come back online.
        notificationChannel.on('ready', function () {
            consoleLog('**** Channel ' + channelName + ' is ready again.');
        });

        //--------------------------------------------------------------------
        // Start the real work. I will publish every 0.5 seconds and drop the
        // collection once every 2 seconds. It will crash the app.
        //--------------------------------------------------------------------
        function publishSomething() {
            var myEvent = {
                random: Math.random()
            };

            consoleLog('publishing ' + JSON.stringify(myEvent));
            notificationChannel.publish('myEvent', myEvent);
        }
        setInterval(publishSomething, 500);

        function dropTheCollection() {
            consoleLog('dropping collection ' + channelName);
            db.dropCollection(channelName);
        }
        setInterval(dropTheCollection, 2000);
    });
});

When running the example above, I get the following:

2013-08-29T18:41:22.380Z: publishing {"random":0.5171210195403546}
2013-08-29T18:41:22.881Z: publishing {"random":0.7188834154512733}
2013-08-29T18:41:23.382Z: publishing {"random":0.6176881860010326}
2013-08-29T18:41:23.836Z: dropping collection testChannel
2013-08-29T18:41:23.883Z: publishing {"random":0.6535363004077226}
2013-08-29T18:41:24.384Z: publishing {"random":0.0007822008337825537}
2013-08-29T18:41:24.870Z: Broken cursor. Continuing.
2013-08-29T18:41:24.873Z: **** Channel testChannel is ready again.
2013-08-29T18:41:24.874Z: mubsub channel error event on channel testChannel, error: { [MongoError: tailable cursor requested on non capped collection] name: 'MongoError' }

events.js:71
        throw arguments[1]; // Unhandled 'error' event
                       ^
MongoError: tailable cursor requested on non capped collection
    at Object.toError (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/utils.js:110:11)
    at Cursor.nextObject.self.queryRun (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/cursor.js:634:54)
    at Cursor.close (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/cursor.js:903:5)
    at Cursor.nextObject.commandHandler (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/cursor.js:634:21)
    at Db._executeQueryCommand (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/db.js:1670:9)
    at Server.Base._callHandler (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/connection/base.js:382:41)
    at Server.connect.connectionPool.on.server._serverState (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/connection/server.js:472:18)
    at MongoReply.parseBody (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
    at Server.connect.connectionPool.on.server._serverState (/home/dev/GitHub/appserver-FF/node_modules/mongodb/lib/mongodb/connection/server.js:430:20)
    at EventEmitter.emit (events.js:96:17)

It seems that the collection used by my channel is created when it does not exist, but is not recreated if it is lost. I understand this is a pretty far out case. Normally, we should not drop a collection in the middle of running. Anyway, I was testing that my code recovered from a broken cursor and I got this.

I clearly see the call to createCollection in your code (channel.js), and I also see that the options are correct when you call it again after the broken cursor (meaning there is a capped option set to true). This may be something related to the creation of the cursor (perhaps the collection is not entirely ready by the time the cursor is created?). Not sure at this point...

If you have any insights, they would be appreciated.

MongoDB 2.6 error: already fixed in master, but needs new release tag

I'm using mubsub via a dependency from mong.socket.io.

I have recently upgraded my database to MongoDB 2.6 and I am now receiving a cannot canonicalize query error: "BadValue unknown top level operator: $gt"

This comes from the collection.find({ $gt: latest._id }, options).sort({ $natural: 1 }); query in the 0.2.1 release.

The query has since been fixed in the master branch:

        var cursor = collection
                .find(
                    { _id: { $gt: latest._id }},
                    { tailable: true, numberOfRetries: -1, tailableRetryInterval: self.options.retryInterval }
                )
                .sort({ $natural: 1 });

However mong.socket.io declares mubsub version 0.2.x as a dependency and, since the latest release is 0.2.1, the fixed query is not included.

Could you tag a new release of mubsub and publish it to npm?

Thanks!

Collection Already Exists

I am trying to use mubsub in my Node.js application, where a user gets notified when they receive a chat message. This is the partial code:

var client = mubsub('mongodb://'+configuration.mongoDBUrl);
var channelConnection = client.channel('connectionstreams');
var channelMessages = client.channel('messagestreams');
var channelSparkTeams = client.channel('sparkteamstreams');
var channelChatMsg = client.channel('chatmsgstreams');

client.on('error', console.error);
channelConnection.on('error', console.error);
channelMessages.on('error', console.error);
channelSparkTeams.on('error', console.error);
channelChatMsg.on('error', console.error);

// --------------- New Chat Message
channelChatMsg.on('document',function(msg){
	console.log('Document inserted ->' + msg);
	sockets_users.forEach(function(socketUser,i){

		socketUser.emit('chat_' + msg.user1_id + "_" + msg.user2_id, msg);
		socketUser.emit('chat_' + msg.user2_id + "_" + msg.user1_id, msg);
		socketUser.emit('chat_ping_' + msg.user2_id,msg);
	});
});

Upon starting the application, I get the MongoDB error that these collections already exist.

 MongoError: a collection 'TestDB.messagestreams' already exists
    at toError (g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\utils.js:110:11)
    at g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\utils.js:157:55
    at g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\db.js:1806:9
    at Server.Base._callHandler (g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\connection\base.js:442:41)
    at g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\connection\server.js:485:18
    at MongoReply.parseBody (g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\responses\mongo_reply.js:68:5)
    at .<anonymous> (g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\connection\server.js:443:20)
    at emitOne (events.js:96:13)
    at emit (events.js:188:7)
    at .<anonymous> (g:\xampp\htdocs\skyloop\skyloop_backend_v2\node_modules\mubsub\node_modules\mongodb\lib\mongodb\connection\connection_pool.js:191:13)
  name: 'MongoError',
  note: 'the autoIndexId option is deprecated and will be removed in a future release',
  ok: 0,
  errmsg: 'a collection \'TestDB.messagestreams\' already exists',
  code: 48,
  codeName: 'NamespaceExists' }

Even though I removed them manually at first, I am still getting the error upon restarting the application.
How do I resolve this error?

default poolSize will block EVERYTHING

I was fighting the whole day with an issue of very slow publishes, multiple seconds.

Now I found the issue.

Default poolSize of mongo driver is 1.

nextObject + awaitdata:true blocks a connection completely until it gets an answer.

This means that if you are subscribed and you try to publish something, it will often take 1-3 SECONDS until it gets published.

So in mubsub case default poolSize should never be less than 2, better 5.
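A sketch of what this suggests, assuming options are forwarded to the underlying driver the way the `dbOptions` example elsewhere in these issues shows (key names follow the 1.x mongodb driver and may differ in other versions):

```javascript
var mubsub = require('mubsub');

// With poolSize 1, the tailable awaitdata cursor can hold the only
// connection open, so publishes queue behind it for seconds at a time.
// A pool of 5 leaves connections free for writes.
var client = mubsub('mongodb://localhost:27017/example', {
    server: {
        poolSize: 5,
        auto_reconnect: true
    }
});
```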

Publish new version to npm that uses mongodb 1.4.x driver

The current version on npm still uses the mongodb 1.3.x driver, which is not compatible with MongoDB server version 3.0, while the GitHub version has already been upgraded to the mongodb 1.4.x driver, which is compatible with MongoDB server version 3.0.

Problem with Mongo 3.4.3

Hello,

I moved to Mongo 3.4.3 (previously 2.x). Since the migration, mubsub has had a problem.

When I launch the project I get the following error:

Development/core.node/node_modules/mongodb/lib/utils.js:98
    process.nextTick(function() { throw err; });
                                  ^
MongoError: The field 'strict' is not a valid collection option. Options: { capped: true, autoIndexId: true, size: 5242880, strict: false }
    at Function.MongoError.create (Development/core.node/node_modules/mongodb-core/lib/error.js:31:11)
    at Development/core.node/node_modules/mongodb-core/lib/topologies/server.js:778:66
    at Callbacks.emit (Development/core.node/node_modules/mongodb-core/lib/topologies/server.js:95:3)
    at Connection.messageHandler (Development/core.node/node_modules/mongodb-core/lib/topologies/server.js:249:23)
    at Socket.<anonymous> (Development/core.node/node_modules/mongodb-core/lib/connection/connection.js:266:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:548:20)

The error says that strict: false is responsible.
When I remove the line in mubsub, the problem is solved.

This is the line I am talking about:
https://github.com/scttnlsn/mubsub/blob/master/lib/channel.js#L24

What is the purpose of this line?
Why is there a problem?
Is it really specific to the MongoDB version?

Thanks!

error with mongo-native 3.0+

2018-05-15T15:00:02.328Z [1628] - error:  TypeError: collection.find(...).sort(...).limit(...).nextObject is not a function
    at Channel.onCollection (c:\Users\Baris\Desktop\github\nodebb_vscode\NodeBB\node_modules\mubsub\lib\channel.js:204:14)
    at Object.onceWrapper (events.js:315:30)
    at emitOne (events.js:116:13)
    at Channel.emit (events.js:211:7)
    at c:\Users\Baris\Desktop\github\nodebb_vscode\NodeBB\node_modules\mubsub\lib\channel.js:119:22

Performance issue of tailable cursor + awaitdata

I have one more issue:

One query with a tailable cursor takes up to 10% CPU load on the mongod process on my machine.
Every additional query adds the same amount of load. In the end, using 5 of them costs 40-50% of the CPU.

I don't see any answers on the web...

get ID

Please tell me what I need to set up so that the ID is delivered together with the data.

auto_reconnect problem ?

Sorry to disturb you with this, but is it possible that in the constructor of Connection, the following line is mistaken:

options.auto_reconnect == null || (options.auto_reconnect = true);

This seems to say that if auto_reconnect is null, then nothing will be done. But for any value other than null, it will be replaced and re-assigned to true.

So this does not set a default value; in fact, it sets the option to true whenever anything other than null is passed as the argument.

Is that the expected behavior?

Don't rely on _id order

Originally from #24

_ids are currently created on the publisher machines, so there is no guarantee that the order of _ids corresponds to the natural insert order.

We rely on _id order when using the latest doc id in Channel#listen every time we create a cursor.

It is possible that there are documents inserted before this one, for which we have triggered an event already, but whose _id is higher in the sort order than the one from "latest".

Just Want to Use it to Subscribe, How?

Here is the case: I've got a capped collection with data flowing in, and I want to use mubsub to subscribe to it (like the code below).

var mongoClient = mubsub('mongodb://localhost:27017/escache');
var msgCnt = 0;

function run() {
    var cacheChannel = mongoClient.channel('cache');
    mongoClient.on('error', console.error);
    cacheChannel.on('error', console.error);
    cacheChannel.subscribe([], function (message) {
        msgCnt += 1;
        console.log(message);
        if (msgCnt % 10000 === 0) {
            console.log("recv Msg:", msgCnt);
        }
    });
    cacheChannel.on('message', function (message) {
        console.log(message);
    });
}

But no messages come in.
The mongostat info is below:
insert query update delete getmore command % dirty % used flushes vsize res qr|qw ar|aw netIn netOut conn time
8500 *0 *0 *0 0 1|0 1.4 7.1 0 4.8G 4.5G 0|0 1|0 11m 16k 143 16:32:46
11278 *0 *0 *0 0 1|0 1.4 6.9 0 4.8G 4.5G 0|0 1|1 15m 16k 143 16:32:47
9092 *0 *0 *0 0 18|0 1.6 7.1 0 4.8G 4.5G 0|1 1|0 12m 19k 143 16:32:48
8630 *0 *0 *0 0 1|0 1.8 7.3 0 4.8G 4.5G 0|0 1|0 13m 16k 143 16:32:49
11000 2 *0 *0 0 5|0 2.2 7.7 0 4.8G 4.5G 0|0 2|0 16m 19k 148 16:32:50
10301 *0 *0 *0 0 1|0 1.3 14.3 0 4.8G 4.5G 0|0 2|2 14m 16k 148 16:32:51
7699 *0 *0 *0 0 1|0 1.3 23.7 0 4.8G 4.5G 0|0 2|0 11m 16k 148 16:32:52
7500 *0 *0 *0 0 1|0 1.3 33.6 0 4.8G 4.5G 0|0 2|0 9m 16k 148 16:32:53
10545 *0 *0 *0 0 1|0 1.7 43.7 0 4.8G 4.5G 0|0 2|1 15m 16k 148 16:32:54
12137 *0 *0 *0 0 2|0 1.3 52.3 0 4.8G 4.5G 0|0 2|1 16m 17k 148 16:32:55

Must I use the publish method to insert data into Mongo, or can I format the data in some pattern so that I can get it out with mubsub?

Thanks in advance.

Missing events when publishing on one channel with 60 servers.

I am sorry, but this is a difficult problem to describe so I'll need to be long, but I'll try to make it as easy to follow as possible.

The setup
A customer asked us to be ready for a large number of visitors and for the first time we ran 60 servers for a few hours. Every single server is a publisher of events using mubsub.

We have a problem where our web app did not receive all published events.

I have a hypothesis related to this problem and it comes down to the tailable cursors that mubsub uses. Unfortunately, this is difficult to test, so I am relaying my hypothesis to you to see if you have ideas about it.

My understanding of tailable cursors
First, let me tell you my understanding of tailable cursors.

From what I understand, a tailable cursor will wait for a limited amount of time for documents to appear in the collection it monitors. Upon hitting its time limit, the cursor returns some kind of error or other message indicating that it stopped attempting to read any more document.

My understanding of mubsub
From what I can understand, mubsub will react to a tailable cursor timeout and recreate it (or maybe mongo does, but one way or another, mubsub can be used without anybody outside of it worrying about this).

My hypothesis about my problem
Here I get to the heart of the matter.

The cursor is created with this code inside mubsub:

.find(
    {_id: {$gt: latest._id}},
    {tailable: true, numberOfRetries: -1, tailableRetryInterval: self.options.retryInterval}
)
.sort({$natural: 1});

The problem is that the _ids are not guaranteed to be incremental. As far as I can find, an ObjectId is:

4-byte value representing the seconds since the Unix epoch,
3-byte machine identifier,
2-byte process id, and
3-byte counter, starting with a random value.

In the case where multiple machines publish at the same time, the first 4 bytes representing the time are the same on all machines, so the ordering comes down to the 3-byte machine identifier.

If you lose the cursor at time X on a particular machine with a high-value machine identifier, the `.find({_id: {$gt: latest._id}})` will start from a point potentially beyond IDs published by other machines while the cursor was being reconstructed.

So, to be clearer, I think my problem comes from the fact that the _ids are not sequential and that mubsub missed one when recreating a cursor.

Do you think that is possible?

Sorry for the long question, hopefully, you can make some sense of it.

find a bug

'cb' should probably be 'callback' in channel.js at line 208.

Maximum channels

You have a "WARNING: You should not create lots of channels because Mubsub will poll from the cursor position.", what do you mean by lots? Are you aware of an upper limit before this becomes a problem?

Channel "after all" hook: clear test failing

I'm trying to run the tests but one keeps failing. I realize the warning is related to emitter.setMaxListeners() and am not concerned about that. I am curious whether the test response times are in line with design limits and whether the "after all" hook should be failing. Did anyone else notice this? Should I just increase the timeout? Am I missing something?

Thank you! 🍻

Environment:
Debian 7.4 w/ kernel 3.2.0-4-amd64
MongoDB version 2.4.10 running on localhost
Node.js version 0.11.12

Log:

$ git clone https://github.com/scttnlsn/mubsub.git && cd mubsub/ && npm install && make test
Cloning into 'mubsub'...
remote: Reusing existing pack: 274, done.
remote: Total 274 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (274/274), 50.89 KiB | 29 KiB/s, done.
Resolving deltas: 100% (124/124), done.
npm http GET https://registry.npmjs.org/mongodb
npm http GET https://registry.npmjs.org/mocha
npm http 304 https://registry.npmjs.org/mocha
npm http 200 https://registry.npmjs.org/mongodb
npm http GET https://registry.npmjs.org/growl
npm http GET https://registry.npmjs.org/jade/0.26.3
npm http GET https://registry.npmjs.org/diff/1.0.7
npm http GET https://registry.npmjs.org/debug
npm http GET https://registry.npmjs.org/mkdirp/0.3.5
npm http GET https://registry.npmjs.org/glob/3.2.3
npm http GET https://registry.npmjs.org/commander/2.0.0
npm http GET https://registry.npmjs.org/bson/0.2.5
npm http GET https://registry.npmjs.org/kerberos/0.0.3
npm http 304 https://registry.npmjs.org/commander/2.0.0
npm http 304 https://registry.npmjs.org/jade/0.26.3
npm http 304 https://registry.npmjs.org/debug
npm http 304 https://registry.npmjs.org/glob/3.2.3
npm http 304 https://registry.npmjs.org/growl
npm http 304 https://registry.npmjs.org/diff/1.0.7
npm http 304 https://registry.npmjs.org/bson/0.2.5
npm http 304 https://registry.npmjs.org/kerberos/0.0.3

> [email protected] install /home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/kerberos
> (node-gyp rebuild 2> builderror.log) || (exit 0)

make: Entering directory `/home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/kerberos/build'
  SOLINK_MODULE(target) Release/obj.target/kerberos.node
  SOLINK_MODULE(target) Release/obj.target/kerberos.node: Finished
  COPY Release/kerberos.node
make: Leaving directory `/home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/kerberos/build'

> [email protected] install /home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/bson
> (node-gyp rebuild 2> builderror.log) || (exit 0)

make: Entering directory `/home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/bson/build'
  CXX(target) Release/obj.target/bson/ext/bson.o
  SOLINK_MODULE(target) Release/obj.target/bson.node
  SOLINK_MODULE(target) Release/obj.target/bson.node: Finished
  COPY Release/bson.node
make: Leaving directory `/home/<removed for privacy>/mubsub/node_modules/mongodb/node_modules/bson/build'
npm http 200 https://registry.npmjs.org/mkdirp/0.3.5
npm ERR! registry error parsing json
npm http GET https://registry.npmjs.org/graceful-fs
npm http GET https://registry.npmjs.org/minimatch
npm http GET https://registry.npmjs.org/inherits
npm http GET https://registry.npmjs.org/commander/0.6.1
npm http GET https://registry.npmjs.org/mkdirp/0.3.0
npm http 304 https://registry.npmjs.org/graceful-fs
npm http 304 https://registry.npmjs.org/minimatch
npm http 200 https://registry.npmjs.org/mkdirp/0.3.0
npm http 304 https://registry.npmjs.org/commander/0.6.1
npm http GET https://registry.npmjs.org/mkdirp/-/mkdirp-0.3.0.tgz
npm http 304 https://registry.npmjs.org/inherits
npm http GET https://registry.npmjs.org/lru-cache
npm http GET https://registry.npmjs.org/sigmund
npm http 200 https://registry.npmjs.org/mkdirp/-/mkdirp-0.3.0.tgz
npm http 304 https://registry.npmjs.org/sigmund
npm http 304 https://registry.npmjs.org/lru-cache
[email protected] node_modules/mongodb
โ”œโ”€โ”€ [email protected]
โ””โ”€โ”€ [email protected]

[email protected] node_modules/mocha
โ”œโ”€โ”€ [email protected]
โ”œโ”€โ”€ [email protected]
โ”œโ”€โ”€ [email protected]
โ”œโ”€โ”€ [email protected]
โ”œโ”€โ”€ [email protected]
โ”œโ”€โ”€ [email protected] ([email protected], [email protected], [email protected])
โ””โ”€โ”€ [email protected] ([email protected], [email protected])
./node_modules/.bin/mocha --reporter list

  โ€ค Channel unsubscribes properly: 501ms
  โ€ค Channel unsubscribes if channel is closed: 501ms
  โ€ค Channel unsubscribes if client is closed: 501ms
  โ€ค Channel can subscribe and publish different data: 38ms
    Channel gets lots of subscribed data fast enough: (node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Channel.EventEmitter.addListener (events.js:176:15)
    at Channel.EventEmitter.once (events.js:201:8)
    at Channel.ready (/home/<removed for privacy>/mubsub/lib/channel.js:247:14)
    at Channel.publish (/home/<removed for privacy>/mubsub/lib/channel.js:63:10)
    at Context.<anonymous> (/home/<removed for privacy>/mubsub/test/channel.js:134:21)
    at Test.Runnable.run (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runnable.js:196:15)
    at Runner.runTest (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:374:10)
    at /home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:452:12
    at next (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:299:14)
    at /home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:309:7
    at next (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:247:23)
    at /home/<removed for privacy>/mubsub/node_modules/mocha/lib/runner.js:271:7
    at done (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runnable.js:185:5)
    at /home/<removed for privacy>/mubsub/node_modules/mocha/lib/runnable.js:199:9
    at /home/<removed for privacy>/mubsub/test/channel.js:20:13
    at /home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/utils.js:152:38
    at /home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/db.js:1806:9
    at Server.Base._callHandler (/home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/connection/base.js:442:41)
    at /home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/connection/server.js:485:18
    at MongoReply.parseBody (/home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
    at null.<anonymous> (/home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/connection/server.js:443:20)
    at EventEmitter.emit (events.js:104:17)
    at null.<anonymous> (/home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:191:13)
    at EventEmitter.emit (events.js:107:17)
    at Socket.<anonymous> (/home/<removed for privacy>/mubsub/node_modules/mongodb/lib/mongodb/connection/connection.js:418:22)
    at Socket.EventEmitter.emit (events.js:104:17)
    at readableAddChunk (_stream_readable.js:156:16)
    at Socket.Readable.push (_stream_readable.js:123:10)
    at TCP.onread (net.js:520:20)
  โ€ค Channel gets lots of subscribed data fast enough: 2955ms
  1) Channel "after all" hook: clear
  โ€ค Connection emits "error" event: 4ms
  โ€ค Connection emits "connect" event: 8ms
  โ€ค Connection states are correct: 10ms

  8 passing (8s)
  1 failing

  1) Channel "after all" hook: clear:
     Error: timeout of 2000ms exceeded
      at null.<anonymous> (/home/<removed for privacy>/mubsub/node_modules/mocha/lib/runnable.js:139:19)
      at Timer.listOnTimeout (timers.js:133:15)



make: *** [test] Error 1

.subscribe performance

Hi

The current implementation is pretty low level. If a user subscribes, they start their own polling, etc., which is not really performant.

So either users should be warned about this and should not make lots of subscriptions, or mubsub should make a single subscription per channel and verify whether every newly received document matches the query.

The latter means we would need a kind of query matcher like MongoDB's, which could be tricky.

Another solution would be to create a higher-level module, which enables publishing and subscribing using an event name instead of a query and does just one polling cursor per client.

using _id on capped collection?

if you look in stdout of mongod, you will see:

warning: unindexed _id query on capped collection, performance will be poor collection: collection name

Any ideas?

Best,
Oleg

Subscriber Outages

Hi All, loving mubsub, so thanks :)

I couldn't find any discussion on what to do in the event of a subscriber outage.

A typical scenario might be a website taking signups, where a mubsub subscriber listens for new signups and sends a confirmation email. If this confirmation-email subscriber suffers some downtime, there will be signups whose "new signup" messages the subscriber never received.

How do others handle this scenario?

TypeError: Cannot read property '0' of undefined

TypeError: Cannot read property '0' of undefined
    at /node_modules/mubsub/lib/channel.js:198:39
    at /node_modules/mubsub/node_modules/mongodb/lib/mongodb/collection/core.js:210:9
    at __executeInsertCommand (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/db.js:1829:12)
    at Db._executeInsertCommand (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/db.js:1930:5)
    at insertAll (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/collection/core.js:205:13)
    at Collection.insert (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/collection/core.js:35:3)
    at /node_modules/mubsub/lib/channel.js:197:28
    at /node_modules/mubsub/node_modules/mongodb/lib/mongodb/cursor.js:738:35
    at Cursor.close (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/cursor.js:959:5)
    at Cursor.nextObject (/node_modules/mubsub/node_modules/mongodb/lib/mongodb/cursor.js:738:17) 
$npm ls --depth=0 | grep mubsub
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
โ”œโ”€โ”€ [email protected]

Code that generates this:

// mubsubExample.js
'use strict';

var mubsub = require('mubsub'),
  channelNames = ['printjobs'],
  channels = {},
  client;

var l = require('../log')('config:mubsub'), log = l.log, error = l.error;

function init() {
  client = mubsub('mongodb://localhost:27017/mubsub_example');
  client.on('error', function (err) {
    error('mubsub client error ', err);
  });

  channelNames.forEach(function (n) {
    channels[n] = client.channel(n);
    channels[n].on('error', function (err) {
      error('mubsub channel[%s] error ', n, err);
    });
  });
}

function close() {
  client.close();
}

module.exports = {
  init: init,
  close: close,
  mubsub: mubsub
};

Then just require('mubsubExample').init()

Any thoughts?

mubsub causes `retryInterval` error

I tried to integrate several socket.io to MongoDB adapters to my Node.js application, like:
https://www.npmjs.com/package/socket.io-adapter-mongo
https://www.npmjs.com/package/socket.io-mongodb
https://www.npmjs.com/package/socket.io-adapter-mongo-replica

All of them use the 'mubsub' module as a dependency. On connection, mubsub gives this error:

Caught exception: { MongoError: The field 'retryInterval' is not a valid collection option. Options: { capped: true, size: 5242880, retryInterval: 200, recreate: true }

Can you fix it, please?

process.nextTick vs. setImmediate

Hi,

I am using socket.io-mongo module in my application which relies on this module.

I have recently upgraded the application to 0.10.x from 0.8.8 and the application started to throw warnings and crash.

I debugged the issue and found that the fix involves changing channel.js to use setImmediate when Node is >= 0.10.x. It also crashed when we loaded too much data (10,000 records of a simple model).

Can you confirm whether the module can run on Node 0.10.x yet, or do you have a fix?

(node) warning: Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral.
(the warning above repeats many times)

Thanks!
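To illustrate the fix being suggested, here is a minimal sketch (an assumed shape of the deferral loop, not the actual mubsub code): recursive `process.nextTick` never yields to the event loop on Node >= 0.10 and triggers the warning above, while `setImmediate` re-enters the loop between iterations.

```javascript
// Drain a work queue one item per event-loop turn. Using
// process.nextTick here instead of setImmediate would recurse without
// ever yielding, which is exactly what the warning complains about.
function drain(items, onItem, done) {
  if (items.length === 0) return done();
  onItem(items.shift());
  setImmediate(function () {
    drain(items, onItem, done);
  });
}

drain([1, 2, 3], function (x) { console.log(x); }, function () {
  console.log('done');
});
```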

input to Mongo much faster than mubsub can read data out

Is it possible that, if writes to Mongo arrive faster than the mubsub channel can read and handle them, the capped collection eventually overwrites documents the tailable cursor has not yet read? At that point the cursor becomes invalid, the channel becomes invalid, and data flow stops. I read the code in lib/channel.js and assume this can happen. Please tell me if I'm wrong.
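If that is the failure mode, one hedged recovery sketch looks like the following (the `listen` factory is an assumed stand-in for the channel's tailable stream, not mubsub's actual API): remember the last `_id` seen and re-open the cursor from there whenever it dies, instead of letting the channel go silent.

```javascript
// Resumable tailing sketch: `listen(query)` is assumed to return a
// stream emitting 'data' and 'close'. When the cursor dies (e.g. the
// capped collection overwrote its position), reopen from the last _id.
function resumableListen(listen, onDoc) {
  var lastId = null;

  function open() {
    var stream = listen(lastId !== null ? { _id: { $gt: lastId } } : {});
    stream.on('data', function (doc) {
      lastId = doc._id;
      onDoc(doc);
    });
    stream.on('close', open); // cursor died: reopen from lastId
  }

  open();
}
```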

subscribe scalability question

I'm doing a subscribe when a request comes in, which looks something like this:

    var subscription = channel.subscribe(id, function (response) {
      subscription.unsubscribe();
      res.jsonp(response);
    });

Any thoughts as to the number of subscribes one can have concurrently? My concern is I'm going to eat up all the connections in the pool.
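For what it's worth (my reading of the code, so treat this as an assumption): a mubsub subscribe registers an in-process listener on the channel's single tailable cursor rather than opening a new MongoDB connection, so the practical limit is listener and memory overhead, not the connection pool. The same request/response pattern can be made explicit with one listener and a map of pending responders (names below are illustrative):

```javascript
// One shared map of pending responders keyed by message id; a single
// channel subscription would call dispatch() for every incoming message.
var pending = {};

function respondOnce(id, res) {
  pending[id] = res;
}

function dispatch(id, response) {
  var res = pending[id];
  if (!res) return;      // nobody waiting for this id (or already answered)
  delete pending[id];
  res(response);         // e.g. res.jsonp(response) in the snippet above
}
```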

auto_reconnect --- another issue

Sorry, me again.

I see in the constructor of Connection that you now do:

options.auto_reconnect != null || (options.auto_reconnect = true);
...
MongoClient.connect(uri, options, function(err, db) {

However, according to the MongoDB driver documentation, the auto_reconnect option should be declared like this:

options = {
   server: {
      auto_reconnect: true
   }
}

As you can see, the property is options.server.auto_reconnect and not simply options.auto_reconnect.

I am sorry for not seeing this earlier. I am trying to understand all this and figure out exactly how to set myself up to make this work the way I want, and I am stumbling along slowly...

Right now, I am trying to go inside the mongodb code and figure out where the auto_reconnect goes. Maybe your code is supposed to work because it is a legacy way of doing it. If I am wrong about this, you have my apologies for disturbing you again.
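For reference, the nesting the driver docs describe would look like the following sketch (whether mubsub forwards it unchanged is exactly the open question here):

```javascript
// Server-level options nested under `server`, per the driver docs quoted
// above; a top-level auto_reconnect may be silently ignored.
var options = {
  server: {
    auto_reconnect: true
  }
};

// MongoClient.connect(uri, options, callback) would then pick it up.
console.log(options.server.auto_reconnect); // true
```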
