azure / azure-cosmosdb-node

We recently announced the deprecation of the JS v1 SDK and this repo. Starting September 2020, Microsoft will no longer provide support for this library. Existing applications using the library will continue to work as-is. We strongly recommend upgrading to the @azure/cosmos library.

Home Page: https://github.com/Azure/azure-sdk-for-js

License: MIT License

JavaScript 100.00%

azure-cosmosdb-node's People

Contributors

1n50mn14, a8m, aliuy, ato9000, bchong95, bryant1410, bterlson, bursteg, dmitriykirakosyan, dwhieb, fkollmann, igorklopov, joaomoreno, kapil-ms, khdang, kirankumarkolli, lmaccherone, matiassingers, microsoft-github-policy-service[bot], moderakh, molant, nomiero, pacific202, rnagpal, ronaldewatts, ryancrawcour, shipunyc, southpolesteve, ytechie, ziyel


azure-cosmosdb-node's Issues

`Hash` directory should be renamed to `hash`

I'm getting the following error when running my tests on TravisCI.

Error: Cannot find module './hash/hashPartitionResolver'
    at Function.Module._resolveFilename (module.js:327:15)
    at Function.Module._load (module.js:278:25)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/node_modules/documentdb/lib/index.js:27:12)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/node_modules/documentdb/index.js:24:18)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/lib/index.js:2:18)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/index.js:4:13)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/test/utils.js:4:13)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/test/connection.test.js:1:15)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Module.require (module.js:355:17)
    at require (internal/module.js:13:17)
    at /home/travis/build/kenhowardpdx/documentdb-orm/node_modules/mocha/lib/mocha.js:216:27
    at Array.forEach (native)
    at Mocha.loadFiles (/home/travis/build/kenhowardpdx/documentdb-orm/node_modules/mocha/lib/mocha.js:213:14)
    at Mocha.run (/home/travis/build/kenhowardpdx/documentdb-orm/node_modules/mocha/lib/mocha.js:453:10)
    at Object.<anonymous> (/home/travis/build/kenhowardpdx/documentdb-orm/node_modules/mocha/bin/_mocha:393:18)
    at Module._compile (module.js:399:26)
    at Object.Module._extensions..js (module.js:406:10)
    at Module.load (module.js:345:32)
    at Function.Module._load (module.js:302:12)
    at Function.Module.runMain (module.js:431:10)
    at startup (node.js:141:18)
    at node.js:977:3

Here's a link to a failing build: https://travis-ci.org/cannonjs/cannon/builds/99786337

Documentation should show how to connect to an existing database

I'm working through the available Node.js DocumentDB documentation today. Creating a database is clearly demonstrated, but I can't find anywhere that describes connecting to an existing database.

There are mentions of needing the databaseLink property, but again, how to get this from Azure is not described.

Edit:

This is one solution that I've been able to piece together, though I am by no means certain this is the best way.

var ddbName = 'testDb';

// ddbUrl and ddbPrimaryKey are the account endpoint and primary key from the Azure portal.
var DocumentClientWrapper = require('documentdb-q-promises').DocumentClientWrapper;
var ddb = new DocumentClientWrapper(ddbUrl, {masterKey: ddbPrimaryKey});

var db = ddb.queryDatabases('SELECT * FROM root r WHERE r.id = "' + ddbName + '"');

db.nextItemAsync().then(function nextItem (result) {
  if (db.hasMoreResults() && (!result || result.id !== ddbName)) {
    db.nextItemAsync().then(nextItem);
  } else if (result && result.id === ddbName) {
    // found a match, do stuff
    console.log(result);
  }
});
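For what it's worth, later SDK versions accept id-based links, which avoids the lookup query entirely. A sketch under that assumption (the helper name is mine, and id-based routing must be supported by the SDK version in use):

```javascript
// Hypothetical helper: build an id-based database link instead of querying
// for the database by id. Assumes an SDK version with id-based routing.
function databaseLink(dbId) {
  // Links have the form 'dbs/<database id>'.
  return 'dbs/' + dbId;
}

// Usage against a live account would look like (not run here):
// var DocumentClient = require('documentdb').DocumentClient;
// var client = new DocumentClient(endpoint, { masterKey: key });
// client.readDatabase(databaseLink('testDb'), function (err, db) { /* ... */ });
console.log(databaseLink('testDb')); // 'dbs/testDb'
```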

Provide utility for query parameter escaping

As it sits today, this library requires you to do your own escaping when building SQL strings. A fluent dynamic SQL building API would be nice, but a simple escape utility would be all I'd need when building queries based on user input. All the samples seem to ignore escaping parameters. I'm not really sure what Bad Things users could do by manipulating queries, as this is pretty new, but I imagine there is something at least related to getting unauthorized access to data. 🎱
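The SDK does support parameterized queries via a query-spec object, which sidesteps manual escaping for value substitution. A sketch (the helper name is mine; the client and collection link setup are assumed):

```javascript
// Build a parameterized query spec: the server substitutes @id safely,
// so no manual string escaping is needed for the value.
function userByIdQuery(userId) {
  return {
    query: 'SELECT * FROM root r WHERE r.id = @id',
    parameters: [{ name: '@id', value: userId }]
  };
}

// Usage (not run here):
// client.queryDocuments(collectionLink, userByIdQuery(userInput)).toArray(callback);
```

This covers values only; identifiers (property names, collection names) interpolated into the query string still need validation by the caller.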

Casing of enums TriggerType and TriggerOperation

TriggerType and TriggerOperation are all lower case in this sdk (https://github.com/Azure/azure-documentdb-node/blob/master/source/lib/documents.js#L234)

However, if you actually compile a trigger referencing these enums, the Script Explorer in the Azure Portal doesn't show the compiled option:

var DocumentBase = require('documentdb').DocumentBase;

module.exports = {
    id: "upperCaseName",
    body: function () {
        var req = getContext().getRequest();
        var item = req.getBody();
        item.name = item.name.toUpperCase();
        req.setBody(item);
    },
    triggerType: DocumentBase.TriggerType.Pre,
    triggerOperation: DocumentBase.TriggerOperation.All,
};

(screenshot)

If you manually type them in with title case, it works:

    triggerType: "Pre",
    triggerOperation: "All",

(screenshot)

The trigger still works, though. I can't tell if this is an SDK glitch or a portal glitch. Which app defines the canonical casing? Should this SDK be corrected?

update documentation with details of .toArray()

I was concerned that the .toArray() method of the queryIterator wasn't pulling all the results, because the documentation made it sound as though you had to use .executeNext() to retrieve batches of results. Eventually I found this note in the sample code though:

https://github.com/Azure/azure-documentdb-node/blob/ea5df70635751f6dc8f4eeb7587f820a61ce9c67/samples/DocumentDB.Samples.IndexManagement/app.js#L322

It looks like .toArray() will always return all the results, so .executeNext() isn't needed in the default scenario unless you have some other reason for using it. Do I have this right? If so, could the documentation for the .toArray() method be updated to reflect this?
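Under that reading, draining the iterator manually with .executeNext() is equivalent to .toArray(). A sketch against a stand-in iterator (the real one comes from client.queryDocuments; the drain helper is mine):

```javascript
// Drain an iterator page by page; .toArray() effectively does this for you.
function drain(iterator, callback, acc) {
  acc = acc || [];
  iterator.executeNext(function (err, page) {
    if (err) return callback(err);
    if (!page || page.length === 0) return callback(null, acc);
    acc.push.apply(acc, page);
    if (iterator.hasMoreResults()) return drain(iterator, callback, acc);
    callback(null, acc);
  });
}

// Stand-in iterator for illustration only: serves fixed pages.
function fakeIterator(pages) {
  var i = 0;
  return {
    executeNext: function (cb) { cb(null, pages[i++] || []); },
    hasMoreResults: function () { return i < pages.length; }
  };
}
```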

Auto-retry after XXXms on 429

It would be great if there were an option to auto-retry when a 429 error happens. The documentation about this is a bit difficult to find, and AFAIK the C# SDK is capable of handling this.
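Until the SDK does this itself, the retry decision can at least be kept in one place. A sketch: the x-ms-retry-after-ms response header is what the service returns on 429, but the helper itself and its fallback backoff are my own assumptions, not SDK behavior.

```javascript
// Decide whether a throttled request should be retried, and after how long.
// Returns the delay in ms, or -1 when the request should not be retried.
function retryDelayMs(err, headers, attempt, maxAttempts) {
  if (!err || err.code !== 429 || attempt >= maxAttempts) return -1;
  var hinted = headers && Number(headers['x-ms-retry-after-ms']);
  // Fall back to exponential backoff when the server gives no hint.
  return hinted > 0 ? hinted : Math.pow(2, attempt) * 100;
}

// Usage (not run here): in the error callback, if retryDelayMs(...) >= 0,
// schedule the same request again with setTimeout.
```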

ECONNRESET error in Azure Continuous WebJob

Hi everyone!

I am writing a continuous WebJob in an Azure WebApp with DocumentDB access, but I get an ECONNRESET error 2 minutes after starting the WebJob, aborting its execution and forcing a restart. This happens on every run. This is a minimal example of this reproducible behavior.

const DocumentClient = require('documentdb').DocumentClient;
var documentClient = new DocumentClient("https://[COLLECTION_NAME].documents.azure.com:443/", {masterKey: [MASTER_KEY]});

documentClient.readDatabases().toArray((err, databases) => {
    if(err) return;
    console.log(databases);
});

runContinuousJobWithTimeouts();

function runContinuousJobWithTimeouts() {
    console.log("runContinuousJobWithTimeouts");
    setTimeout(runContinuousJobWithTimeouts, 10000);
}

WebJob output is:

[08/24/2016 10:55:55 > 00c0b0: SYS INFO] Status changed to Running
[08/24/2016 10:55:56 > 00c0b0: INFO] [ { id: '...',
[08/24/2016 10:55:56 > 00c0b0: INFO]     _rid: '...',
[08/24/2016 10:55:56 > 00c0b0: INFO]     _self: '...',
[08/24/2016 10:55:56 > 00c0b0: INFO]     _etag: '"..."',
[08/24/2016 10:55:56 > 00c0b0: INFO]     _ts: ...,
[08/24/2016 10:55:56 > 00c0b0: INFO]     _colls: '...',
[08/24/2016 10:55:56 > 00c0b0: INFO]     _users: '...' } ]
[08/24/2016 10:55:56 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:06 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:16 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:26 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:36 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:46 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:56:56 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:06 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:16 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:26 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:36 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:46 > 00c0b0: INFO] runContinuousJob
[08/24/2016 10:57:54 > 00c0b0: ERR ] events.js:141
[08/24/2016 10:57:54 > 00c0b0: ERR ]       throw er; // Unhandled 'error' event
[08/24/2016 10:57:54 > 00c0b0: ERR ]       ^
[08/24/2016 10:57:54 > 00c0b0: ERR ] 
[08/24/2016 10:57:54 > 00c0b0: ERR ] Error: read ECONNRESET
[08/24/2016 10:57:54 > 00c0b0: ERR ]     at exports._errnoException (util.js:874:11)
[08/24/2016 10:57:54 > 00c0b0: ERR ]     at TLSWrap.onread (net.js:544:26)
[08/24/2016 10:57:54 > 00c0b0: SYS ERR ] Job failed due to exit code 1
[08/24/2016 10:57:54 > 00c0b0: SYS INFO] Process went down, waiting for 60 seconds
[08/24/2016 10:57:54 > 00c0b0: SYS INFO] Status changed to PendingRestart

The Azure WebApp cloud runtime environment offers Node 4.2.3.

Could this be a bug?

Headers no longer available in 1.10.0

I just upgraded from 1.8.0 to 1.10.0 and the headers are no longer returned in this query:
client.queryDocuments(collectionLink, querySpec, options).toArray((err, items, headers) => {

I had a look in the code and don't understand the logic completely, but it seems to recursively iterate over the results and overwrite the headers in the second recursion:
https://github.com/Azure/azure-documentdb-node/blob/master/source/lib/queryIterator.js#L144

Added a quick patch to validate that headers are defined which solved the issue, but I don't think this is the ideal solution. Is there a reason to recurse over the results? Is it that the iteration may trigger additional server requests?

client.createDocumentAsync error?

I might just be very tired, but I noticed some unexpected behavior I thought I would share to check my sanity/lack of sleep.

If I call client.createDocumentAsync(collection._self, doc); the promise chain is broken and I get an error that automaticIdGeneration can't be read on undefined. I'm not passing in options, as you can see.

However, if I call: client.createDocumentAsync(collection._self, doc,{}); it works fine and continues.

The original call, client.createDocument(collection._self, doc, function (err,doc){}); does not return an error with the same params. From the tests in the core SDK I can't see that {disableAutomaticIdGeneration: false} is required.
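This looks like the options argument not being defaulted before it is dereferenced. A sketch of the defensive pattern (my own illustration, not the wrapper's actual code):

```javascript
// Default the options bag so option lookups never dereference undefined.
function withDefaultOptions(options) {
  options = options || {};
  // The id-generation check can now read the flag safely:
  var disableAutomaticIdGeneration = options.disableAutomaticIdGeneration || false;
  return { options: options, disableAutomaticIdGeneration: disableAutomaticIdGeneration };
}
```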

Anyway, great job, and thanks for wrapping up the client neatly. Thumbs up!

RequestEntityTooLarge

I'm trying to update a document in my DocumentDB, but I've got this error:

RequestEntityTooLarge

{"code":"RequestEntityTooLarge","message":"Message: {"Errors":["Request size is too large"]}\r\nActivityId: guid, Request URI: /apps/guid/services/guid/partitions/guid/replicas/id"}

How can I solve this?

Thanks!

base#getAttachmentIdFromMediaId base64 Conversion issue

This is a Node.js project. When uploading attachments, we occasionally get the following error when fetching the image:

{"code":"Unauthorized",
"message":"The input authorization token can't serve the request. 
    Please check that the expected payload is built as per the protocol, 
    and check the key being used. Server used the following payload to sign: 
    'get\nmedia\n6hl2aldwbqcxagaaaaaaac4-1vo=
    \nwed, 28 oct 2015 03:48:05 gmt\n\n'\r\nActivityId: 1160f842-e8b4-4e28-aa27-a080f4d76424"}

Which is strange, because the same master key is used throughout, and many other objects, including attachments, are fetched just fine. Looking deeper into base.js#getAttachmentIdFromMediaId, I noticed it is converting my mediaId of 6hl2ALdWbQCxAgAAAAAAAC4-1VoB to 6hl2ALdWbQCxAgAAAAAAAC4+1Vo=. What is of note here is the conversion of the "-" to a "+". Writing a line of code to replace this back solves the problem. I started looking at other SDKs for DocumentDB and noticed that this method is inconsistent across them. Python, for example, explicitly handles -, + and /; Java deals with just - and /. The Node and JS projects don't handle any of those (https://github.com/Azure/azure-documentdb-node/blob/master/source/lib/base.js#L299).
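The workaround described above amounts to mapping between the standard and URL-safe base64 alphabets (RFC 4648). A sketch of both directions (the helper names are mine, not the SDK's):

```javascript
// Standard base64 -> URL-safe: '+' becomes '-', '/' becomes '_', padding dropped.
function toUrlSafe(b64) {
  return b64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// URL-safe -> standard, restoring '=' padding to a multiple of 4.
function fromUrlSafe(s) {
  var b64 = s.replace(/-/g, '+').replace(/_/g, '/');
  while (b64.length % 4 !== 0) b64 += '=';
  return b64;
}
```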

Could somebody take a look at this? I can provide credentials to a live example if needed.

Support for Direct mode

It would be nice for DocumentDB Node.js SDK to support Direct mode via TCP to speed up communication with the DB. Thank you!

client.queryCollections returns HTTP 400

I am trying to get a collection using client.queryCollections, but I'm getting a bad request error. I think I'm doing it the same way as the tutorial. My client variable is already initialized:

    var client = new DocumentClient(host, { masterKey: masterKey });

And previously I'm able to create a database, but when I execute this code:

    client.queryCollections(databaseLink, querySpec).toArray(function (err, results) {
        if (err) {
            callback(err);
        } else {
            // ...
        }
    });

With

querySpec={
        "query": "SELECT * FROM root r WHERE r.id=@id",
        "parameters": [
            {
                "name": "@id",
                "value": "User0"
            }
        ]
    }

And

databaseLink = {
        "id": "Dungeons",
        "_rid": "MQsvAA==",
        "_ts": 1436202750,
        "_self": "dbs/MQsvAA==/",
        "_etag": "\"00000500-0000-0000-0000-559ab6fe0000\"",
        "_colls": "colls/",
        "_users": "users/"
    }

I get this error:

{"code":400,"body":"<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"\"http://www.w3.org/TR/html4/strict.dtd\">\r\n<HTML><HEAD><TITLE>Bad Request</TITLE>\r\n<META HTTP-EQUIV=\"Content-Type\" Content=\"text/html; charset=us-ascii\"></HEAD>\r\n<BODY><h2>Bad Request</h2>\r\n<hr><p>HTTP Error 400. The request is badly formed.</p>\r\n</BODY></HTML>\r\n"}

Am I doing something wrong?

MaxDegreeOfParallelism missing in node sdk

I have a partitioned collection, and I have to execute a query using Node.js that spans partitions. Using options I set EnableCrossPartitionQuery to true, but I see that the MaxDegreeOfParallelism property is missing from the Node.js SDK (I saw it in the .NET SDK). I want to query all my partitions in parallel. Any help? I am writing the code using Node.js.

current property conflicts with method current()

It is not possible to call the current() method of DocumentClient. If you try to do that, an exception is thrown saying that current is not a method. The current() method should be renamed, e.g. to currentElement().

Request path contains unescaped characters.

I'm getting this error when trying to query for my collection.

_http_client.js:50
    throw new TypeError('Request path contains unescaped characters.');
          ^
TypeError: Request path contains unescaped characters.
    at new ClientRequest (_http_client.js:50:11)
    at Object.exports.request (http.js:30:10)
    at Object.exports.request (https.js:115:15)
    at createRequestObject (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/request.js:36:30)
    at Object.RequestHandler.request (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/request.js:131:32)
    at DocumentClient.Base.defineClass.post (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/documentclient.js:1481:35)
    at DocumentClient.Base.defineClass.queryFeed (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/documentclient.js:1545:32)
    at null.fetchFunction (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/documentclient.js:697:32)
    at Base.defineClass._fetchMore (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/queryIterator.js:192:18)
    at Base.defineClass._toArrayImplementation (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/queryIterator.js:149:22)
    at Base.defineClass.toArray (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/queryIterator.js:114:18)
    at getOrCreateCollection (/home/ulrik/projects/stridhmedia/dist/server/modules/article/index.js:125:79)
    at /home/ulrik/projects/stridhmedia/dist/server/modules/article/index.js:25:9
    at /home/ulrik/projects/stridhmedia/dist/server/modules/article/index.js:111:28
    at Base.defineClass._toArrayImplementation (/home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/queryIterator.js:159:17)
    at /home/ulrik/projects/stridhmedia/node_modules/documentdb/lib/queryIterator.js:155:26

The code that generates the error is pretty much taken from the guide at https://azure.microsoft.com/sv-se/documentation/articles/documentdb-nodejs-application/

    function getOrCreateCollection(databaseLink, collectionId, callback) {
        var querySpec = {
            query: "SELECT * FROM root r WHERE r.id='articles'"
        };

        client.queryCollections(databaseLink, querySpec).toArray(function (err, results) {
            if (err) {
                callback(err);
            } else {        
                if (results.length === 0) {
                    var collectionSpec = {
                        id: collectionId
                    };

                    client.createCollection(databaseLink, collectionSpec, function (err, created) {
                        callback(null, created);
                    });

                } else {
                    callback(null, results[0]);
                }
            }
        });
    }

specifically it's this line:
client.queryCollections(databaseLink, querySpec).toArray(function (err, results) {
Am I doing something wrong or is this a bug?

NPM package : Include only runtime useful files

Right now, running npm install documentdb installs lots of useless files, which adds considerable size to the package:

8.0K    DocumentDB.Node.master.njsproj ----> USELESS
4.0K    DocumentDB.Node.master.sln ----> USELESS
4.0K    Gruntfile.js ----> USELESS
4.0K    changelog.md
4.0K    index.js
276K    lib
4.0K    package.json
4.0K    readme.html
340K    test ----> USELESS
4.0K    .editorconfig ----> USELESS
4.0K    .eslintrc ----> USELESS
4.0K    .npmignore

You should add the files field to package.json:

{
  "files" : ["lib"]
}

Can't run test suite

I followed the instructions in source/test/readme.md but only get this:

→ mocha -t 0 -R spec
module.js:327
    throw err;
    ^

Error: Cannot find module 'documentdb'
    at Function.Module._resolveFilename (module.js:325:15)
    at Function.Module._load (module.js:276:25)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/Users/joao/Work/azure-documentdb-node/source/test/test.js:26:12)
    at Module._compile (module.js:397:26)
    at Object.Module._extensions..js (module.js:404:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at /Users/joao/.nvm/versions/node/v5.4.1/lib/node_modules/mocha/lib/mocha.js:216:27
    at Array.forEach (native)
    at Mocha.loadFiles (/Users/joao/.nvm/versions/node/v5.4.1/lib/node_modules/mocha/lib/mocha.js:213:14)
    at Mocha.run (/Users/joao/.nvm/versions/node/v5.4.1/lib/node_modules/mocha/lib/mocha.js:453:10)
    at Object.<anonymous> (/Users/joao/.nvm/versions/node/v5.4.1/lib/node_modules/mocha/bin/_mocha:393:18)
    at Module._compile (module.js:397:26)
    at Object.Module._extensions..js (module.js:404:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:429:10)
    at startup (node.js:139:18)

Continuation token is not passed properly on query

I think DocumentDbClient does not properly send the continuation token on the request, even if it has been set on the queryOptions.continuation property. I'm having a problem querying the "next page" of items by passing a continuation token to documentDbClient.queryDocuments in the query options. I always get the same result set back, with iterator.continuation always reporting the same token after the call. I've tested this with the latest NPM package, v1.5.5.

I've created a full repro at https://gist.github.com/htuomola/620f13f572a2dfd4618a (apart from credentials and DB info). I also tried to debug it, but my JavaScript-fu is not strong enough to understand what's wrong. FWIW, based on debugging, the error seems to be in queryIterator._fetchMore(), where the continuation token somehow gets lost. I modified it so it logs this.options.continuation both before and after setting it (queryIterator.js, line 219), like this:

_fetchMore: function(callback){
   var that = this;
   console.log('continuation 1: '+this.options.continuation)
   this.options.continuation = this.continuation;
   console.log('continuation 2: '+this.options.continuation)
   [..]

The result is:

continuation 1: +RID:<redacted>#RT:1#TRC:1#RTD:<redacted>
continuation 2: null

PS: The contribution guidelines link in contributing.md is broken.

Unauthorized, The input authorization token can't serve the request

On localhost everything works fine, but when I deploy I get this error.

ActivityId: 39d58ef7-0df9-436d-a803-06991ee62424
{ code: 401,
body: '{"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'post\ndbs\n\nthu, 29 oct 2015 18:52:39 gmt\n\n'\r\nActivityId: 39d58ef7-0df9-436d-a803-06991ee62424"}' }

Any guesses?

Thanks!

Stream support ?

Hi,

Is there stream support? Is it planned?
It's really easy to get a "process out of memory" exception without it.

Why does the callback function of the forEach method of class QueryIterator take two arguments?

The callback function of the QueryIterator has two arguments: an error object and an element object. It does not make sense to pass an error object for each element. Either the query succeeded, in which case null or undefined would always be passed as the error object, or the query failed, in which case there are no elements to return. In the case of a failed query, forEach could throw an exception.

When first using the forEach method, I implemented a callback with only one parameter, which I expected to be the element. I wondered why I always received undefined. After stepping through the debugger I found that there are two arguments passed to the callback.

EventEmitter listener leak warning

(node) warning: possible EventEmitter memory leak detected. 11 timeout listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at TLSSocket.addListener (events.js:239:17)
    at TLSSocket.Readable.on (_stream_readable.js:665:33)
    at ClientRequest.<anonymous> (...node_modules/documentdb/lib/request.js:95:16)
    at emitOne (events.js:82:20)
    at ClientRequest.emit (events.js:169:7)
    at tickOnSocket (_http_client.js:496:7)
    at onSocketNT (_http_client.js:508:5)
    at nextTickCallbackWith2Args (node.js:474:9)
    at process._tickCallback (node.js:388:17)

Looking at the code it definitely looks like there is a problem. Since keepalive is used, the timeout event will be listened to several times on the same socket.

Include headers in error callback

Hi,

When a 429 response is received we need to check the 'RetryAfter' header. At the moment the headers are not included in the callback when an error occurs:

            this.post(urlConnection, path, params, headers, function (err, result, headers) {
                if (err) return callback(err);

                callback(undefined, result, headers);
            });
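A sketch of the change being requested (the wrapper name is mine; the real fix would live inside the SDK's request handling): pass the headers through on the error path too.

```javascript
// Wrap a node-style (err, result, headers) callback so headers survive errors,
// letting callers read e.g. the retry-after header on a 429 response.
function forwardWithHeaders(callback) {
  return function (err, result, headers) {
    if (err) return callback(err, undefined, headers); // headers now included
    callback(undefined, result, headers);
  };
}
```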

Make samples simpler by removing Self-Links

For someone like me who is learning DocumentDB, I think the samples should be updated, as they do not reflect the farewell to self-links.

Self-links are confusing and add a lot of overhead, especially for someone learning DocumentDB. I lost precious time figuring out how to deal with self-links, and ended up not using them.

Running stored procedure fails with "listener must be a function"

I am trying to run a stored procedure with the following code:

client.executeStoredProcedure(sprocLink, options, callback);

However, the app crashes with the error "listener must be a function". I am using SDK 1.5.6.

Here is the relevant stack trace:

DOMAIN ERROR CAUGHT TypeError: listener must be a function
    at ClientRequest.once (events.js:251:11)
    at createRequestObject (/home/murdockcrc/repos/hdv/node_modules/documentdb/lib/request.js:102:18)
    at Object.RequestHandler.request (/home/murdockcrc/repos/hdv/node_modules/documentdb/lib/request.js:154:32)
    at DocumentClient.Base.defineClass.post (/home/murdockcrc/repos/hdv/node_modules/documentdb/lib/documentclient.js:2031:35)
    at DocumentClient.Base.defineClass.executeStoredProcedure (/home/murdockcrc/repos/hdv/node_modules/documentdb/lib/documentclient.js:1821:18)
    at AzureDbHelper.executeStoredProcedure 

Weird partial encoding errors

I am using documentdb through a8m/doqmentdb and I am running into this weird encoding issue.

I have a document in which some properties have, as their value, a word containing umlauted characters (mainly ä and ö, which are quite common in the Finnish alphabet).

Occasionally (let's say 5% of the time), when I FIND a document, a certain letter, ö, is not correctly encoded, which results in this:

[ 
 {
 foo: [
   "Läht��laukaus"
  ]
 }
]

When the correct form should be:

[ 
 {
 foo: [
   "Lähtölaukaus"
  ]
 }
]

I couldn't replicate it with the Query Explorer on https://portal.azure.com/, and doqmentdb doesn't seem to modify the responses in any way, so something funky seems to be happening between the DocumentClient and the DocumentDB database itself. Using documentdb version 1.2.0.

C# client SDK and %%%% characters in resource id

We're using version 1.9.2.
If I create a document with an id like 'Tests-%%%%Device123' and then try to get the document like:

client.ReadDocumentAsync(
                    UriFactory.CreateDocumentUri(this.databaseName, this.collectionName, documentId),
                    DocumentDbExtensions.CreateRequestOptions(null, partitionid))

then I get an exception like:
The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get
docs
dbs/test-db/colls/1test-collection/docs/Tests-%%%Þvice123
sat, 10 sep 2016 01:12:04 gmt

'
ActivityId: dfc1a969-0063-4b44-871a-cd661778dede

FYI: UriFactory.CreateDocumentUri(this.databaseName, this.collectionName, documentId) escapes the documentId part

cannot access collection named "media"

Hi guys!

It seems a collection with "media" in its name is not accessible!

documentdb/lib/request.js line 53 is suspicious :)

var isMedia = ( requestOptions.path.indexOf("media") > -1 );

Bonus question: how do I name my media collections?
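A stricter check than substring matching would avoid the clash with collection names. A sketch of the idea (my own illustration, not the actual fix in request.js):

```javascript
// Only treat the request as a media request when the path's first segment
// is literally 'media', rather than matching 'media' anywhere in the path.
function isMediaRequest(path) {
  return path === '/media' || path.indexOf('/media/') === 0;
}
```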

hasMoreResults() gives wrong answer when (items mod maxItemCount) is 0

Let's say I have 4000 items in my database and I use readDocuments().executeNext with a maxItemCount of 1000. After I've gotten back all four pages, hasMoreResults() still returns true. Calling executeNext() one more time causes hasMoreResults() to return false so everything works, but it adds an unneeded round trip.
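Until that is fixed, stopping on an empty page as well as on hasMoreResults() guards the loop against the off-by-one. A sketch with stand-in pager functions (the helper is mine):

```javascript
// Read pages until the iterator reports it is done OR a page comes back
// empty, guarding against hasMoreResults() staying true after the last page.
function readAllPages(executeNext, hasMoreResults) {
  var all = [];
  var page;
  do {
    page = executeNext();
    all.push.apply(all, page);
  } while (page.length > 0 && hasMoreResults());
  return all;
}
```

Note this still makes the one extra round trip described above when the item count divides evenly by maxItemCount; the guard just keeps the loop from misbehaving on the empty final page.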

Typescript definitions

TypeScript definitions for this library would be really beneficial. Are these available?

Socket keep-alive and server reset

The HTTPS agent used to request DocumentDB is using Keep-Alive for its pool of sockets.

  • issue 1: after ~2 minutes of inactivity, the remote DocumentDB server closes the connection (although the socket is kept-alive), resulting in the socket throwing an ECONNRESET error
  • issue 2: no listener is set for socket’s error event, therefore the error is thrown

Result: after ~2 minutes of inactivity, the SDK throws an ECONNRESET error, although from an SDK client's point of view it is being used under normal conditions.

I would suggest handling socket errors at the SDK level, especially since it is the SDK's choice to keep sockets alive that produces this 2-minute "time-out".
(I'm referencing this part of the request code.)

A quick and dirty fix for this 2-minute issue could be to "mute" ECONNRESET errors:

socket.on('error', function(err) {
  if (err.code !== 'ECONNRESET') throw err;
});

But again, it would mean all other socket errors would still be thrown, which might not be great.

"Cannot read property 'partitionKey' of null" on createDocument

I get this error
"Cannot read property 'partitionKey' of null"
when trying to create a new document with...
this.client.createDocument(this.link, item, null, this.resultHandler(callback));

I had to pass an empty object for options like this...
this.client.createDocument(this.link, item, {}, this.resultHandler(callback));
to make it work.

api documentation does not include description of Indexing policy members

The createCollection method takes an object of type IndexingPolicy as the second parameter. The documentation links to a list of IndexingPolicy's members, but the members are not explained in detail. For example, the documentation says that IncludedPaths is an array of IndexPath. This structure has a Path member, described only as a string containing the path to be indexed, but there are several constraints on those strings.
A good explanation of those constraints can be found here: https://github.com/Azure/azure-content/blob/master/articles/documentdb-indexing-policies.md . This link should also be included in the API documentation.
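For illustration, here is a sketch of a collection spec with an indexing policy in the shape the v1 SDK expects. The path syntax (quoted property names, trailing `/?` for a scalar or `/*` for a subtree) follows the linked article; check that article for the full grammar, since the exact quoting rules are what the API docs leave unexplained:

```javascript
// Sketch: a collection definition with an explicit indexing policy,
// in the v1 path grammar from the linked indexing-policies article.
var collectionSpec = {
  id: 'myCollection',
  indexingPolicy: {
    automatic: true,
    indexingMode: 'Consistent',
    includedPaths: [
      { path: '/*' },                // index everything by default
      { path: '/"timestamp"/?' }     // the scalar value at /timestamp
    ],
    excludedPaths: [
      { path: '/"blob"/*' }          // skip the whole /blob subtree
    ]
  }
};

// Would then be passed as:
//   client.createCollection(databaseLink, collectionSpec, callback);
```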

Cannot iterate over large collections - server refuses to accept queries with continuation token after a very short while

I want to update my very large collection (close to 10 GB) with a server-side JavaScript stored procedure, for performance reasons. It's a very simple operation, so I decided to run it on the server instead of reading the whole collection to my client workstation, which would take a long time.

And here's the problem: after a very short while, queryDocuments returns NULL instead of an iterator. I understand this is because of the artificially imposed performance limits.

I tried very different page sizes, from 1000 down to 1, and even with 1 this does NOT work: after 70-90 queries my query is no longer accepted and I am kicked out.

I tried different workarounds, but I could not find a way to pause execution: the standard setTimeout function is NOT implemented server-side, so I cannot pause for a while. I am forced to ask for the next portion of documents in the callback immediately, and because of this I am kicked out.

The code:

function (source, continuationToken) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    var query = "SELECT * FROM c WHERE c.recType = 'xxx' AND NOT IS_DEFINED(c.to)";
    if (source) {
        query += " AND c.source = '" + source + "'";
    }

    // Parse a hex string (e.g. "0A000001") into its numeric IPv4 value.
    var hexStrToIPv4 = function (hex_ip) {
        return parseInt(hex_ip, 16);
    };

    var debug = function (msg) {
        if (typeof console !== "undefined" && console.debug) {
            console.debug(msg);
        }
    };

    var options = { pageSize: 1000 };
    if (continuationToken) {
        options.continuation = continuationToken;
    }

    var getDocs = function () {
        var accepted = collection.queryDocuments(collection.getSelfLink(), query, options,
            function (err, documents, responseOptions) {
                if (err)
                    throw new Error("Error " + err.message);

                if (!documents || !documents.length) {
                    response.setBody({ "message": "DONE" });
                } else {
                    documents.forEach(function (xxx) {
                        if (!xxx.from || !xxx.to || !xxx.len) {
                            xxx.from = hexStrToIPv4(xxx.start);
                            xxx.to = hexStrToIPv4(xxx.end);
                            xxx.len = xxx.to - xxx.from;
                            collection.replaceDocument(xxx._self, xxx, function (err, replaced) {
                                if (err) throw new Error("Error replacing document: " + err);
                            });
                        }
                    });

                    // Fetch and process the next page:
                    options.continuation = responseOptions.continuation;
                    continuationToken = responseOptions.continuation;
                    getDocs();
                }
            });

        if (!accepted) {
            // Execution limit reached; run the script again with the
            // continuationToken as a script parameter.
            response.setBody({
                "message": "Execution limit reached.",
                "continuationToken": continuationToken
            });
        }
    }; // getDocs

    // Entry point:
    getDocs();
}
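Since the server refuses to run forever, the practical pattern is to drive the stored procedure from the client: call it, and whenever the body reports "Execution limit reached." with a continuationToken, call it again with that token until it reports DONE. Here is a sketch of that driver loop; `executeSp` is a stand-in for however the SP is invoked (e.g. the SDK's executeStoredProcedure wrapped in a callback):

```javascript
// Sketch: re-invoke a bounded stored procedure until it completes,
// threading the continuation token through each call.
function runToCompletion(executeSp, done) {
  function step(token) {
    executeSp(token, function (err, body) {
      if (err) return done(err);
      if (body && body.continuationToken) {
        // The SP hit its execution limit; resume where it left off.
        return step(body.continuationToken);
      }
      done(null, body); // body.message === "DONE"
    });
  }
  step(null); // first call: no continuation token
}
```

This sidesteps the missing setTimeout entirely: the pause between batches happens on the client, between stored-procedure invocations.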

SyntaxError: Unexpected token u

Hi,

When returning undefined from a stored procedure as follows:

function sp() {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();
    response.setBody(undefined);
}

this gets translated to the string "undefined" in the HTTP response. The code in request.js then fails to parse it as JSON:

            var result;
            try {
                if (isMedia) {
                    result = data;
                } else {
                    result = data.length > 0 ? JSON.parse(data) : undefined;
                }
            } catch (exception) {
               callback(exception);
               return;
            }

The result is that an exception is reported when there was in fact no error.

The expected behaviour is for undefined to be returned via the callback.
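One possible fix in the response-parsing path would be to treat the literal body "undefined" the same way an empty body is already treated. A sketch, with `safeParseBody` as an illustrative helper name (not the SDK's actual code):

```javascript
// Sketch: tolerant body parsing — an empty body or the literal string
// "undefined" yields undefined instead of letting JSON.parse throw
// "Unexpected token u".
function safeParseBody(data) {
  if (!data || data.length === 0 || data === 'undefined') {
    return undefined;
  }
  return JSON.parse(data);
}
```

With this, `response.setBody(undefined)` in a stored procedure would surface as `undefined` through the callback, matching the expected behaviour above.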

Promises wrapper does not include headers with error

We're getting 429s from methods as varied as createDocument, queryCollections, and queryStoredProcedures. Until #39 is addressed, we would like to handle the retries ourselves, but we're using the promises flavor, which doesn't return the headers with the error. It would be great to have headers returned on error in the promises flavor. Currently, all of the deferred.reject() calls pass only the error and swallow the headers, so we don't get the x-ms-retry-after-ms header or any other headers. See e.g. https://github.com/Azure/azure-documentdb-node/blob/master/q_promises_sdk/documentclientwrapper.js#L16
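As a workaround until the wrapper is fixed, the callback-style API can be promisified by hand so the headers survive both paths. This is a sketch; `promisify` here is a hand-rolled helper, not part of the SDK:

```javascript
// Sketch: wrap an (args..., callback(err, resource, headers)) SDK
// method in a Promise, attaching the headers to the error on rejection
// so x-ms-retry-after-ms stays reachable for manual 429 retries.
function promisify(fn, thisArg) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      args.push(function (err, resource, headers) {
        if (err) {
          err.headers = headers; // keep headers instead of swallowing them
          return reject(err);
        }
        resolve({ resource: resource, headers: headers });
      });
      fn.apply(thisArg, args);
    });
  };
}

// Usage:
//   var createDocumentAsync = promisify(client.createDocument, client);
//   createDocumentAsync(link, item, {}).catch(function (err) {
//     var delay = err.headers && err.headers['x-ms-retry-after-ms'];
//     // schedule a retry after `delay` ms...
//   });
```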
