vert-x3 / vertx-mongo-client


Mongo Client for Eclipse Vert.x

Home Page: http://vertx.io

License: Apache License 2.0

Languages: Java 100.00%
Topics: reactive, mongo, vertx, async, non-blocking, java

vertx-mongo-client's Introduction

Mongo Client

An asynchronous client for interacting with a MongoDB database

Please see the main documentation on the website for a full description.

The following docker command can be used to run the Mongo tests:

docker run --rm --name vertx-mongo -p 27017:27017 mongo

vertx-mongo-client's People

Contributors

bfreuden, cescoffier, etourdot, ha-shine, harshitpthk, jbtrystram, jo5ef, johnoliver, karianna, kostya05983, liuchong, llfbandit, mazizesa, meggarr, meshuga, myrenyang, nscavell, onebite, pmlopes, purplefox, ruslansennov, siwee, slnowak, sschmittbt, thiagogcm, tsegismont, vietj, zenios, zigzago, zyclonite

vertx-mongo-client's Issues

Cannot upsert document with save() (Vert.x 3.0.0)

Hi,

It looks like the save() method does not insert the document if the _id is populated and a document with that id does not exist in MongoDB.

Sample code to reproduce:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;

public class Server extends AbstractVerticle {

    public void start() {
        JsonObject config = new JsonObject();
        config.put("host", "127.0.0.1");
        config.put("port", 27017);
        config.put("db_name", "bug_test");

        MongoClient client = MongoClient.createShared(vertx, config);

        JsonObject document = new JsonObject().put("title", "The Hobbit");
        document.put("_id", 123456);

        // insert() works; save() does not insert the document
        client.save("bug_test", document, res -> {
            if (res.succeeded()) {
                String id = res.result();
                System.out.println("Inserted with id " + id);
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}

Support Write Concern Results

Starting with version 2.6, MongoDB returns write concern results for write operations. The Mongo driver returns things like UpdateResult and InsertResult; we should return something similar.

Support SSL

Currently SSL is configurable but not supported, as it requires a different StreamFactory (Netty). We could have version conflicts between the Netty version the Mongo driver uses and the one Vert.x uses.

Update doesn't work with ObjectId

Hi Team!
The useObjectId option is true.
We have a document with _id: ObjectId("56e417abab85b51b8b9743ce").
When we try to find it, it works fine:
mongo.find(collection, new JsonObject().put("_id", "56e417abab85b51b8b9743ce"), handler)
mongo.save works as well, but update does not wrap the ObjectId string into $oid and as a result performs an empty update.
We fixed it by manually wrapping the id in $oid before executing the query.

Is this expected behaviour?
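
For reference, a minimal sketch of the workaround described above, assuming useObjectId is true and that mongo is an io.vertx.ext.mongo.MongoClient; the updated field is made up for illustration:

// Workaround sketch (not the client's built-in behaviour): wrap the hex string
// in $oid so the query matches the stored ObjectId before calling update.
JsonObject query = new JsonObject()
    .put("_id", new JsonObject().put("$oid", "56e417abab85b51b8b9743ce"));
JsonObject update = new JsonObject()
    .put("$set", new JsonObject().put("status", "updated")); // hypothetical field

mongo.update(collection, query, update, res -> {
    if (res.succeeded()) {
        System.out.println("Updated");
    } else {
        res.cause().printStackTrace();
    }
});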

Cannot save document

I tried this, but I cannot save the document:
import io.vertx.groovy.ext.mongo.MongoClient

def options = [
    config: [
        "host": "127.0.0.1",
        "port": 27017,
        "pool_size": 15,
        "db_name": "test_bug"
    ]
]

def client = MongoClient.createShared(vertx, options.config)

def document = [
    _id: 1234,
    title: "Language"
]

client.insert("bug_test", document, { res ->
    if (res.succeeded()) {
        def id = res.result()
        println("Inserted book with id ${id}")
    } else {
        res.cause().printStackTrace()
    }
})

Need findBatch method to support operation end handler

The problem is that the MongoClient interface provides a findBatch method whose handler is executed for every document, but there is no way to know when all documents have been processed. The MongoDB async driver supports an end callback, but this information is lost in the findBatch method.

SingleResultCallback<Void> callbackWhenFinished = (result, throwable) -> {

    if (throwable != null) {
        resultHandler.handle(Future.failedFuture(throwable));
    }
    // the "all documents processed" notification is needed here,
    // but findBatch gives us no way to surface it
};

Save method does not insert a new document when the _id field is specified

When I try to insert a new document specifying the _id field, the document is not saved.

JsonObject document = new JsonObject()
    .put("title", "New document")
    .put("_id", "12345");

mongoClient.save("collectionName", document, callback);

The upsert default value is false, and mongoClient.save calls replaceOne because the id is not null; replaceOne creates an UpdateOptions object without arguments, so upsert stays false and the document is not created.
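
As a stop-gap, one hedged workaround is to call insert() instead of save() when the _id is known in advance, since insert() does create the document; this sketch reuses the collection and document above:

// Workaround sketch: insert() creates the document even when _id is provided,
// whereas save() currently falls through to a non-upsert replaceOne.
JsonObject document = new JsonObject()
    .put("title", "New document")
    .put("_id", "12345");

mongoClient.insert("collectionName", document, res -> {
    if (res.succeeded()) {
        System.out.println("Created document with id 12345");
    } else {
        res.cause().printStackTrace();
    }
});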

Thanks.

Performance concerns of Async

OP: http://stackoverflow.com/questions/38303739/mongodb-java-driver-sync-async

OP2:

As an example, I have taken the vertx-examples project, specifically this verticle: https://github.com/vert-x3/vertx-examples/blob/master/web-examples/src/main/java/io/vertx/example/web/mongo/Server.java

I have tried to implement another Server.java with the Mongo sync driver:

package io.vertx.example.web.mongo;

import com.mongodb.MongoClientOptions;
import io.vertx.core.*;
import io.vertx.core.http.HttpHeaders;
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;
import io.vertx.ext.web.handler.StaticHandler;
import io.vertx.ext.web.templ.JadeTemplateEngine;
import org.jongo.Jongo;
import org.jongo.MongoCollection;
import org.jongo.MongoCursor;

public class ServerSync extends AbstractVerticle {

    // Convenience method so you can run it in your IDE
    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(ServerSync.class.getName(), new DeploymentOptions().setInstances(8));
    }

    @Override
    public void start() throws Exception {

        // Create a mongo client using all defaults (connect to localhost and default port) using the database name "demo".
        final MongoClient mongo = MongoClient.createShared(vertx, new JsonObject().put("db_name", "demo"));

        Jongo jongo = new Jongo(new com.mongodb.MongoClient("localhost", new MongoClientOptions.Builder().connectionsPerHost(250).cursorFinalizerEnabled(false).build()).getDB("demo"));
        MongoCollection usersCollection = jongo.getCollection("users");

        // In order to use a JADE template we first need to create an engine
        final JadeTemplateEngine jade = JadeTemplateEngine.create();

        // To simplify the development of the web components we use a Router to route all HTTP requests
        // to organize our code in a reusable way.
        final Router router = Router.router(vertx);

        // Enable the body parser so we can get the form data and JSON documents in our context.
        router.route().handler(BodyHandler.create());

        // Entry point to the application, this will render a custom JADE template.
        router.get("/").handler(ctx -> {
            // we define a hardcoded title for our application
            ctx.put("title", "Vert.x Web");

            // and now delegate to the engine to render it.
            jade.render(ctx, "templates/index", res -> {
                if (res.succeeded()) {
                    ctx.response().putHeader(HttpHeaders.CONTENT_TYPE, "text/html").end(res.result());
                } else {
                    ctx.fail(res.cause());
                }
            });
        });

        // and now we mount the handlers in their appropriate routes

        // Read all users from the mongo collection.
        router.get("/users").handler(ctx -> {
            vertx.executeBlocking(event -> {
                // issue a find command to mongo to fetch all documents from the "users" collection
                MongoCursor<JsonObject> map = usersCollection.find().map(dbObject -> new JsonObject()
                        .put("username", (String) dbObject.get("username"))
                        .put("email", (String) dbObject.get("email"))
                        .put("fullname", (String) dbObject.get("fullname"))
                        .put("location", (String) dbObject.get("location"))
                        .put("age", (String) dbObject.get("age"))
                        .put("gender", (String) dbObject.get("gender")));
                event.complete(map);
            }, new Handler<AsyncResult<MongoCursor<JsonObject>>>() {
                @Override
                public void handle(AsyncResult<MongoCursor<JsonObject>> event) {
                    final JsonArray json = new JsonArray();
                    for (JsonObject entries : event.result()) {
                        json.add(entries);
                    }

                    // since we are producing json we should inform the browser of the correct content type.
                    ctx.response().putHeader(HttpHeaders.CONTENT_TYPE, "application/json");
                    // encode to json string
                    ctx.response().end(json.encode());
                }
            });
        });

        // Create a new document on mongo.
        router.post("/users").handler(ctx -> {
            // since jquery is sending data in multipart-form format to avoid preflight calls, we need to convert it to JSON.
            JsonObject user = new JsonObject()
                    .put("username", ctx.request().getFormAttribute("username"))
                    .put("email", ctx.request().getFormAttribute("email"))
                    .put("fullname", ctx.request().getFormAttribute("fullname"))
                    .put("location", ctx.request().getFormAttribute("location"))
                    .put("age", ctx.request().getFormAttribute("age"))
                    .put("gender", ctx.request().getFormAttribute("gender"));

            // insert into mongo
            mongo.insert("users", user, lookup -> {
                // error handling
                if (lookup.failed()) {
                    ctx.fail(lookup.cause());
                    return;
                }

                // inform that the document was created
                ctx.response().setStatusCode(201);
                ctx.response().end();
            });
        });

        // Remove a document from mongo.
        router.delete("/users/:id").handler(ctx -> {
            // catch the id to remove from the url /users/:id and transform it to a mongo query.
            mongo.removeOne("users", new JsonObject().put("_id", ctx.request().getParam("id")), lookup -> {
                // error handling
                if (lookup.failed()) {
                    ctx.fail(lookup.cause());
                    return;
                }

                // inform the browser that there is nothing to return.
                ctx.response().setStatusCode(204);
                ctx.response().end();
            });
        });

        // Serve the non private static pages
        router.route().handler(StaticHandler.create());

        // start a HTTP web server on port 8080
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
}

When I run some benchmarks against localhost:8080/users for both servers with the default single instance, I get far better results with the async driver. However, when I run the benchmarks with setInstances(8) on the verticle deployment in order to use all my laptop cores, I get basically the same results for both drivers, and even slightly better results for the sync one. I can also observe that the async driver uses many more connections (101 connections) whereas the sync driver does the job with only a few connections (17). And I am testing with just 2 users in my Mongo users collection. It is a bit surprising that we use so many connections for such light requests with the async driver.

So I am a bit worried that if I tested with heavier requests (with 1000 users, for example, or in a real-life project), the use of so many connections could be the reason for degraded results. What do you think?

Async driver

wrk -t16 -c1000 -d30s http://localhost:8080/users
Running 30s test @ http://localhost:8080/users
  16 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    71.06ms   26.88ms 468.36ms   74.78%
    Req/Sec   780.30    194.56     1.84k    80.17%
  364887 requests in 30.10s, 120.05MB read
  Socket errors: connect 0, read 975, write 0, timeout 0
Requests/sec:  12123.77
Transfer/sec:      3.99MB

Sync Driver

wrk -t16 -c1000 -d30s http://localhost:8080/users
Running 30s test @ http://localhost:8080/users
  16 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    59.28ms   15.91ms 265.25ms   73.07%
    Req/Sec     0.87k   248.13     1.71k    78.64%
  416229 requests in 30.09s, 110.75MB read
  Socket errors: connect 0, read 1011, write 0, timeout 0
Requests/sec:  13833.76
Transfer/sec:      3.68MB

Need MongoDB Client to support ReadPreference at the Operation level.

MongoDB supports the WriteConcern and ReadPreference at the DB, collection, and operation level.
The setting at the operation level overrides the setting at the collection level, which overrides the setting at the DB level.

The current Vert.x MongoDB client supports the WriteConcern at the DB, collection and operation level.

For write operations, one can use the *WithOptions() methods to specify the WriteConcern. This is the operation-level WriteConcern, which overrides the collection and DB settings of the MongoClient.

For query operations, one is limited to the readPreference setting given when the MongoClient is created. To use a different ReadPreference, one needs to create another MongoClient with a different readPreference.
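
A hedged illustration of the current limitation: to read with two different preferences you effectively need two clients, assuming the readPreference config key is honoured at client creation time:

// Sketch only: two clients with different read preferences, because the
// preference cannot currently be set per operation.
JsonObject primaryConfig = new JsonObject()
    .put("db_name", "demo")
    .put("readPreference", "primary");
JsonObject nearestConfig = new JsonObject()
    .put("db_name", "demo")
    .put("readPreference", "nearest");

MongoClient primaryClient = MongoClient.createShared(vertx, primaryConfig, "primary-pool");
MongoClient nearestClient = MongoClient.createShared(vertx, nearestConfig, "nearest-pool");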

Result handlers not run on calling vertx context

Hi,

it seems to me that the result handlers for the various methods in MongoClient.java / MongoClientImpl.java are not run on the context of the calling thread, i.e. the event loop.

Instead, the call to vertx.runOnContext creates a new context for the thread the result handler runs in (the Mongo client threads). This causes the result handlers to run on a different thread than the verticle's event loop and makes the handlers run in parallel.

Is this wanted behaviour? It caused me some race-condition headaches because I thought the result handlers would run on the same event loop (and thread).

A solution would be to save the context of the calling thread via vertx.getOrCreateContext() and use that context for runOnContext() in the various result handler wrappers.
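
A minimal sketch of that suggestion; the method and parameter names are illustrative, not the actual implementation:

// Sketch: capture the caller's context when the operation starts, and
// dispatch the result back onto it so the handler runs on the verticle's
// own event loop instead of a Mongo driver thread.
<T> void completeOnCallerContext(Vertx vertx, T result, Handler<AsyncResult<T>> resultHandler) {
    Context callerContext = vertx.getOrCreateContext(); // capture before handing off to the driver
    // ... the asynchronous Mongo driver call happens here; then, from the driver thread:
    callerContext.runOnContext(v -> resultHandler.handle(Future.succeededFuture(result)));
}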

support user defined document id

I want to persist domain objects to MongoDB, overriding the name of the ID_FIELD from "_id" to a domain-object-specific property (e.g. "customerId"). My current workaround looks like this:

    public Observable<Customer> addCustomer(Customer customer) {
        JsonObject document = customer.toJson().put("_id", customer.getId().toValue());
        return mongoService.insert(collection, document);
    }

Creating a new document with an _id field fails

The current logic of saveWithOptions says that if the document to save has an _id then the operation is changed to a replaceOne; if the db does not have an existing entry with that _id, no document is saved.

AbstractJsonCodec throws UnsupportedOperationException when reading timestamps

Using the vertx-mongo-client against a MongoDB replica set, some database operations throw a java.lang.UnsupportedOperationException.
The reason seems to be that the replica set returns a date (or timestamp) where a non-replicated setup doesn't. It seems that the Vert.x BSON support can't handle date types yet: the parser isn't complete and so throws a java.lang.UnsupportedOperationException.

Example stack trace:

java.lang.UnsupportedOperationException
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.readTimeStamp(AbstractJsonCodec.java:387)
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.readValue(AbstractJsonCodec.java:71)
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.readDocument(AbstractJsonCodec.java:230)
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.decode(AbstractJsonCodec.java:23)
at com.mongodb.connection.CommandResultCallback.callCallback(CommandResultCallback.java:54)
at com.mongodb.connection.CommandResultCallback.callCallback(CommandResultCallback.java:29)
at com.mongodb.connection.CommandResultBaseCallback.callCallback(CommandResultBaseCallback.java:39)
at com.mongodb.connection.ResponseCallback.onResult(ResponseCallback.java:48)
at com.mongodb.connection.ResponseCallback.onResult(ResponseCallback.java:23)
at com.mongodb.connection.DefaultConnectionPool$PooledConnection$2.onResult(DefaultConnectionPool.java:446)
at com.mongodb.connection.DefaultConnectionPool$PooledConnection$2.onResult(DefaultConnectionPool.java:440)

Save object with _id null should generate one automatically?

Hello,

I'm trying Vert.x for a real project, and it's my first time using MongoDB, so maybe I'm asking something noob, but anyway: I'm currently using Java in my project. On a save I first convert the request to a POJO, do some validations on the POJO, and if everything is OK I save it with mongoClient.save. The problem is that my POJO has a field _id that will be the Mongo id, but since it's null, and the code only checks whether the value is null rather than whether the key exists, it is sent to Mongo as null and then saved as null, instead of being omitted or generated automatically.

I noticed this behaviour comes from this piece of code in the client:

@Override
public io.vertx.ext.mongo.MongoClient saveWithOptions(String collection, JsonObject document, WriteOption writeOption, Handler<AsyncResult<String>> resultHandler) {
  requireNonNull(collection, "collection cannot be null");
  requireNonNull(document, "document cannot be null");
  requireNonNull(resultHandler, "resultHandler cannot be null");

  MongoCollection<JsonObject> coll = getCollection(collection, writeOption);
  Object id = document.getValue(ID_FIELD);
  if (id == null) {
    coll.insertOne(document, convertCallback(resultHandler, wr -> useObjectId ? document.getJsonObject(ID_FIELD).getString(JsonObjectCodec.OID_FIELD) : document.getString(ID_FIELD)));
  } else {
    JsonObject filter = new JsonObject();
    JsonObject encodedDocument = encodeKeyWhenUseObjectId(document);
    filter.put(ID_FIELD, encodedDocument.getValue(ID_FIELD));

    com.mongodb.client.model.UpdateOptions updateOptions = new com.mongodb.client.model.UpdateOptions()
        .upsert(true);

    coll.replaceOne(wrap(filter), encodedDocument, updateOptions, convertCallback(resultHandler, result -> null));
  }
  return this;
}

It does Object id = document.getValue(ID_FIELD); but doesn't check whether the ID_FIELD key exists. I know I can generate an ID and set it myself, but I think this should be handled out of the box by the mongoClient: check whether the key exists and then whether it's null; if the key exists and is null it should be removed or auto-generated.
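
For what it's worth, a minimal caller-side workaround sketch, assuming the POJO serializes _id as an explicit null; the collection name and mapping method are made up:

// Workaround sketch: drop an explicitly-null _id before saving so that the
// client takes the insert path and Mongo generates the id.
JsonObject document = customer.toJson(); // hypothetical POJO-to-JSON mapping
if (document.containsKey("_id") && document.getValue("_id") == null) {
    document.remove("_id");
}
mongoClient.save("customers", document, res -> {
    if (res.succeeded()) {
        System.out.println("Saved with generated id " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});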

What do you guys think? Am I doing something that I shouldn't? I could send a PR improving it if that is suitable, or if there's a simple alternative I would like to know about it.

Thanks in advance,
Kennedy Oliveira

ISO 8601 is used for $date, but it lacks milliseconds support

The latest solution for $date handling leverages java.time.format.DateTimeFormatter.ISO_OFFSET_DATE_TIME (ISO 8601), which doesn't support milliseconds.
While dates in the format '2011-12-03T10:15:30+01:00' are accepted, an attempt to send something like
'2015-06-08T12:10:16.148+0300' throws a parsing exception.
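
For illustration only, a small self-contained sketch (not the client's code) of a formatter that accepts both the colon-separated offset and the '+0300' style, with optional milliseconds:

import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class DateParseSketch {
    public static void main(String[] args) {
        DateTimeFormatter fmt =
            DateTimeFormatter.ofPattern("uuuu-MM-dd'T'HH:mm:ss[.SSS][XXX][XX]");

        // Both values parse with this pattern, whereas the strict
        // ISO_OFFSET_DATE_TIME formatter rejects the second one.
        System.out.println(OffsetDateTime.parse("2011-12-03T10:15:30+01:00", fmt));
        System.out.println(OffsetDateTime.parse("2015-06-08T12:10:16.148+0300", fmt));
    }
}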

Nested Objects and Arrays materialized in their Java form

The language bindings are intended to present JSON documents in their "native" form to each language. In the Groovy binding (and perhaps others) this is working for the root object, but not for any nested objects and/or arrays. These are instead presented as JsonObject and/or JsonArray instances.

See the following thread for more info https://groups.google.com/forum/#!topic/vertx-dev/StuQ8pv3kWQ.

This gist demonstrates the issue -- https://gist.github.com/sfitts/73d4a2e2229466f5ccd3

Cannot use aggregation framework

Currently it's not possible to use the aggregation framework with the service. According to the documentation (http://docs.mongodb.org/manual/reference/command/aggregate/#dbcmd.aggregate) the structure of the command is as follows:

{
  aggregate: "<collection>",
  pipeline: [ <stage>, <...> ],
  explain: <boolean>,
  allowDiskUse: <boolean>,
  cursor: <document>
}

The thing is that the first field of the JSON object must be the aggregate keyword, and MongoDB (3.0.2 tested) mandates this structure. Unfortunately, the aggregation request sent by the client ends up as follows:

{
  pipeline: [ <stage>, <...> ],
  ... other commands ...,
  aggregate: "<collection>"
}

Because of that MongoDB returns error:

io.vertx.core.eventbus.ReplyException: Command failed with error 59: 'no such command: pipeline' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such command: pipeline", "code" : 59, "bad cmd" : { "pipeline" : [], "aggregate" : "collection_name" } }

It doesn't matter in which order I put the fields into the proxy. I don't know the internals of core Vert.x, but from my research there is a problem in vertx-core that messes with the order of fields.

Make use of internal Vert.x EventLoopGroup

The Mongo client can make use of a Netty event loop group for async request processing, so it might be possible to pass the Vert.x group instance obtained from io.vertx.core.Vertx#nettyEventLoopGroup() to https://github.com/mongodb/mongo-java-driver/blob/master/driver-async/src/main/com/mongodb/async/client/MongoClientSettings.java#L272 as below:

builder.streamFactoryFactory(
  new NettyStreamFactoryFactory(vertx.nettyEventLoopGroup(), ByteBufAllocator.DEFAULT))
  .build();

According to http://normanmaurer.me/presentations/2014-facebook-eng-netty/slides.html#25.0 it's a good idea ;-)

Exception in MongoClient.update

When I try to update a document in Mongo using the method MongoClient.update( ... ), I get an exception like:

java.lang.IllegalArgumentException: Invalid BSON field name intValue
at org.bson.AbstractBsonWriter.writeName(AbstractBsonWriter.java:494)
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.lambda$writeDocument$1(AbstractJsonCodec.java:254)
at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec$$Lambda$35/951031848.accept(Unknown Source)
at io.vertx.ext.mongo.impl.codec.json.JsonObjectCodec.lambda$forEach$3(JsonObjectCodec.java:97)
at io.vertx.ext.mongo.impl.codec.json.JsonObjectCodec$$Lambda$36/1408482749.accept(Unknown Source)
at java.lang.Iterable.forEach(Iterable.java:75)
...

It fails on the first field inside the document; renaming the field doesn't matter.
If I switch to MongoClient.save( ... ) everything works fine.
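
That stack trace is consistent with update expecting a document made of update operators (the driver rejects plain top-level field names); a hedged sketch of a call that avoids the error, with made-up collection and field names:

// Update documents must start with operators such as $set; plain field names
// at the top level trigger the "Invalid BSON field name" validation.
JsonObject query = new JsonObject().put("_id", "some-id");
JsonObject update = new JsonObject()
    .put("$set", new JsonObject().put("intValue", 42));

mongoClient.update("myCollection", query, update, res -> {
    if (res.succeeded()) {
        System.out.println("Updated");
    } else {
        res.cause().printStackTrace();
    }
});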

Avoid mutating arguments passed into various methods

Currently we mutate JsonObjects that are passed in to some methods; most notably we add _id fields to objects when calling save. This happens even if the save fails, which I believe could potentially lead to dangerous behaviour.

My previous logic was that if a save failed I would retry; however, by that point an _id field had already been placed on the document, so the second call would have subtly different behaviour, as now I was saving an object that had an _id field on it.

I would propose making a copy of all arguments passed in to avoid mutating them.
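
Until something like that lands, a caller-side sketch that sidesteps the mutation, assuming Vert.x's JsonObject.copy():

// Defensive copy: pass a copy to save() so the original document is never
// mutated (e.g. no _id added) if the save fails and has to be retried.
JsonObject original = new JsonObject().put("title", "The Hobbit");
mongoClient.save("books", original.copy(), res -> {
    if (res.failed()) {
        // retry with another copy; 'original' is still exactly as we built it
        mongoClient.save("books", original.copy(), retry -> { /* ... */ });
    }
});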

Auth should probably set the "authSource" to match "db_name"

By default, unless an authSource is defined, it will authenticate against "admin". Encouraging apps to use admin db accounts as their application user is not great, and the fact that it did not authenticate against the application's own db was unexpected and took some debugging to find.

In my opinion, if no authSource is provided we should set it to the db the user is using, provided via "db_name".
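
For reference, a config sketch that sets the source explicitly today, assuming the authSource config key and using made-up credentials:

// Explicitly authenticate against the application's own database rather than "admin".
JsonObject config = new JsonObject()
    .put("host", "127.0.0.1")
    .put("port", 27017)
    .put("db_name", "myapp")
    .put("username", "appuser")       // made-up credentials
    .put("password", "secret")
    .put("authSource", "myapp");      // what this issue proposes to default from db_name

MongoClient client = MongoClient.createShared(vertx, config);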

Frequent GCs: com.mongodb.connection.AsynchronousSocketChannelStream$BasicCompletionHandler eats CPU

I found a bug using io.vertx.ext.mongo.MongoClient. If MongoDB is unavailable for a minute or less, the Vert.x instance runs into many small GCs and eats CPU. Even once MongoDB is back, only a restart helps.

[screenshot: mongoeatingcpu]

While profiling this problem locally I found com.mongodb.connection.AsynchronousSocketChannelStream$BasicCompletionHandler causing the GCs and using up to 40% CPU. Before the disconnection, CPU usage was near 0% because I only read periodically from an empty MongoDB.

Test case:

MongoClient client = MongoClient.createShared(vertx, this.mongoConfig);
// config is: {"mongo": {"host": "127.0.0.1", "port": 27017, "db_name": "notification"}}

The verticle start method contains

this.timerID = vertx.setPeriodic(TimeUtils.MINUTE_IN_MILLISECONDS, INotificationHandler.create(vertx, config()));

and the handler reads data using the client.

seems to be a problem with

Possible Solution

Groovy: update converts inner map/list to JsonObject/JsonArray

In the Mongo client API, update has the side effect of reserializing the updated data from the DB in place. This means that the interior property values are repopulated via the installed Codec, which in our case means they are recreated as JsonObject/JsonArray instances. In the case of select we explicitly convert the objects using InternalHelper.wrapObject, but that doesn't happen in the update case. Fixing this may be a bit tricky, as the conversion needs to occur after the operation has completed, but the update callback doesn't have access to the originally updated object. I think this will involve wrapping the supplied handler with one that has access to the original object and can perform the wrap call. However, I'm not sure the code generator is capable of expressing this (not to mention that this is specific to the way the Mongo API works, so we may not want to do it generally).

I'll work up a gist which demonstrates the issue.

Support Read Preference config value

The old Vert.x 2 MongoDB client supports the Read Preference config option, http://docs.mongodb.org/manual/core/read-preference/, which tells the Mongo client which replica set members it should send its requests to, for example the nearest one. Perhaps the Vert.x 3 MongoDB client should support it as well? We're using Vert.x 2 right now and make use of the Read Preference config option.

(Here's the related code in the Vert.x 2 module: https://github.com/vert-x/mod-mongo-persistor/blob/master/src/main/java/org/vertx/mods/MongoPersistor.java#L68 )
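
For comparison, a hedged sketch of what the Vert.x 3 config could look like if/when it supports the option, assuming a readPreference config key alongside the other connection options:

// Sketch: ask the driver to prefer the nearest replica set member.
JsonObject config = new JsonObject()
    .put("db_name", "demo")
    .put("replicaSet", "rs0")                 // made-up replica set name
    .put("readPreference", "nearest");

MongoClient client = MongoClient.createShared(vertx, config);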

Best regards, KajMagnus

Groovy code not included in sources jar.

It looks like it is supposed to be, but isn't. I took a look and tried to figure out why this might be, but I don't know enough about the build ordering to really know for sure what is going on.

Not sure if it is related, but I see some errors relating to building the groovydocs.

Config assumes that both host and port are set and gives an NPE otherwise

When setting only a hostname in the config, I would expect the port to fall back to the default. However, currently it either assumes localhost:27017 when neither host nor port is set, or both have to be set; otherwise you get an NPE, since the port is null when creating the ServerAddress object.

  @Test
  public void portDefault() {
    JsonObject config = new JsonObject();
    config.put("host", "127.0.0.1");
    Vertx vertx = Vertx.vertx();
    MongoClient mongoClient = MongoClient.createNonShared(vertx, config);
    assertNotNull(mongoClient);
  }

Nested Objects and Arrays materialized in their Java form

This is a re-opening of #12, since the changes that were targeted to fix it were reverted.

The language bindings are intended to present JSON documents in their "native" form to each language. In the Groovy binding (and perhaps others) this is working for the root object, but not for any nested objects and/or arrays. These are instead presented as JsonObject and/or JsonArray instances.

See the following thread for more info https://groups.google.com/forum/#!topic/vertx-dev/StuQ8pv3kWQ.

This gist demonstrates the issue -- https://gist.github.com/sfitts/73d4a2e2229466f5ccd3

Support for indexes

It would be great if the client would support managing indexes (getIndexes(), createIndex() and dropIndex(), and maybe even ensureIndex()). Any plan to implement this?
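
In the meantime, one hedged workaround is to go through runCommand with MongoDB's createIndexes command; the collection and field names here are made up:

// Workaround sketch: create an index through the generic runCommand API
// until dedicated createIndex()/dropIndex() methods exist.
JsonObject command = new JsonObject()
    .put("createIndexes", "users")
    .put("indexes", new JsonArray().add(new JsonObject()
        .put("key", new JsonObject().put("email", 1))
        .put("name", "email_1")));

mongoClient.runCommand("createIndexes", command, res -> {
    if (res.succeeded()) {
        System.out.println("Index created: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});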

useObjectId Doesn't work

Setting useObjectId to true in the configuration results in the _id being saved as a hex string rather than an ObjectId.

ObjectId support

ObjectId support still seems basic in the 3.0.0 implementation; I noticed that conversion in the codec is still a TODO. The behaviour is currently not completely consistent: a date value is (like with the Mongo tools) converted to a { "$date" : "dat-val" } value, while ObjectIds are converted to hex string values instead of a more consistent { "$oid" : "id-val" } value.

I made some small adjustments to enable ObjectId support, but it has an important issue: the result callbacks have to switch to a JsonObject return type instead of the more generic String, since the code generator will not allow a generic Object which you could check afterwards. But an important advantage is that ObjectIds can be properly used (certainly with existing Mongo databases).

QueryBuilder missing

You can build queries in JSON and use them, e.g.:

JsonObject query = new JsonObject("{\"$and\": [ " +
    " {\"points\": {\"$gt\": 30}}, " +
    " {\"points\": {\"$lt\": 60}} " +
    "]}");
mongoClient.find("books", query, res -> { ... });

This works fine. I can't find the QueryBuilder class mentioned here:
https://groups.google.com/forum/#!topic/vertx/VOtTa_cEDnM

It is gone in the official Mongo Client 3.3. Is there any other solution?
