CouchDB documentation was moved to the main repository
apache / couchdb-documentation
Apache CouchDB Documentation
Home Page: http://docs.couchdb.org/en/latest
License: Apache License 2.0
Url: http://docs.couchdb.org/en/2.2.0/api/database/find.html
Where: Request JSON object
The configuration option [couchdb] enable_database_recovery is not documented, although it is used in the CouchDB source.
The configuration option, when set to true, causes database deletion to rename the database file rather than actually delete it, allowing an admin to restore the database from the backup file.
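For reference, a minimal sketch of how this setting would appear in the server config, assuming the standard ini format used by default.ini/local.ini (the option name comes from the report above; the comment paraphrases the described behavior):

```ini
[couchdb]
; When true, deleting a database renames its file on disk instead of
; removing it, so an admin can later restore the database from the
; renamed backup file.
enable_database_recovery = true
```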
The Mango Condition Operators documentation does not specify how equality matches are performed with respect to argument type. Is any coercion (between e.g. string and number, or number and boolean) ever performed?
Testing this in practice with a document {"_id": "test", "value": 1}:
- value: {$eq: 1} matches
- value: {$eq: "1"} does not match
So the current behavior appears to be strict, which may surprise developers coming from languages like JS/PHP that include looser comparison operations.
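The observed behavior can be sketched as a small simulation. This is an illustration of the strict semantics reported above, not CouchDB's actual implementation; the helper name mango_eq is hypothetical:

```python
def mango_eq(field_value, arg):
    """Simulate Mango's apparent $eq semantics: strict comparison,
    with no coercion between types (mirrors observed behavior only,
    not the real CouchDB code)."""
    # Guard against Python's own looseness: 1 == True is True in
    # Python, but JSON booleans and numbers are distinct types.
    if isinstance(field_value, bool) != isinstance(arg, bool):
        return False
    return field_value == arg

doc = {"_id": "test", "value": 1}
assert mango_eq(doc["value"], 1) is True      # {"value": {"$eq": 1}} matches
assert mango_eq(doc["value"], "1") is False   # {"value": {"$eq": "1"}} does not
assert mango_eq(True, 1) is False             # no bool/number coercion
```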
In the Mango selector documentation, the Condition Operators section has a table for each of the operators. I think some clarification of these is warranted.
For example, the purpose of $ne
is currently listed as:
The field is not equal to the argument.
However, it appears that this definition is incomplete. The way it behaves in practice is more like:
The field exists, and is not equal to the argument.
See for example this Stack Overflow question on CouchDB/PouchDB selector for deleted docs where the OP was surprised when _deleted: {$ne: true}
did not match any documents, even though _deleted: {$exists: false}
does.
The documentation should make the behavior clearer, since naïvely I too would have assumed $ne works like != or !== in JavaScript, where doc._deleted !== true would match documents where the _deleted field is missing.
I suspect this currently-implicit "field exists, and…" behavior may apply to all the inequality operators, but I have only tested $ne specifically.
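The reported behavior can be sketched as follows. This is a simulation of the semantics described in the issue, not CouchDB's implementation; mango_ne is a hypothetical helper:

```python
_MISSING = object()  # sentinel for "field not present"

def mango_ne(doc, field, arg):
    """Simulate the observed $ne behavior: the field must exist
    AND be unequal to the argument."""
    value = doc.get(field, _MISSING)
    if value is _MISSING:
        return False  # a missing field never matches $ne
    return value != arg

live_doc = {"_id": "a"}                      # no _deleted field
tombstone = {"_id": "b", "_deleted": True}

assert mango_ne(live_doc, "_deleted", True) is False   # the surprise: no match
assert mango_ne(tombstone, "_deleted", True) is False
assert mango_ne({"_deleted": False}, "_deleted", True) is True
```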
apache/couchdb#1454 states:
The documentation should contain a description of the $text operator, or the operator should be removed from the examples if it is not supported.
The documentation for the selector syntax uses $text in the examples, but the operator is never described. This operator depends on Lucene and isn't available in base CouchDB. The examples should be reworked to not include the $text operator.
http://docs.couchdb.org/en/2.1.1/experimental.html?highlight=query%20server#nodejs-query-server
This is reported not to work.
If anyone has any documentation for how to use this, I would be interested in testing it.
A Node.js server behind a proxy can be used in combination with rewrites as a function, once apache/couchdb#1407 is fixed.
See apache/couchdb#1032 and apache/couchdb#820 for details. There are new parameters for:
/{db}/_all_docs
/{db}/_design_docs
/{db}/_local_docs
Wait until apache/couchdb#1032 lands before merging any PR on this, please.
The Mozilla SpiderMonkey (1.8.5) dependency link in src/install/unix.rst should point to its documentation, but it currently points to a "404 page not found". Instead of "https://www.mozilla.org/en-US/js/spidermonkey/" it should point to "https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey".
I came across this while trying to find out about CouchDB's dependencies.
Hello, the POST _bulk_get endpoint has been implemented by CouchDB since version 2.0.0 (at least, that is what the release notes say), but I can't find any documentation about it in the API reference. It would be nice to document it.
I think it follows the same format as described in https://github.com/couchbase/sync_gateway/wiki/Bulk-GET.
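A sketch of what a request body would look like, assuming the endpoint follows the format described on the Sync Gateway wiki linked above ("id" required per entry, "rev" optional); the helper name is hypothetical:

```python
import json

def bulk_get_payload(doc_refs):
    """Build a _bulk_get request body: {"docs": [{"id": ..., "rev": ...}]}.
    Assumes the Sync Gateway wiki format; CouchDB's own docs for this
    endpoint are the thing being requested here."""
    docs = []
    for ref in doc_refs:
        entry = {"id": ref["id"]}
        if "rev" in ref:
            entry["rev"] = ref["rev"]
        docs.append(entry)
    return json.dumps({"docs": docs})

payload = bulk_get_payload([
    {"id": "doc1", "rev": "1-967a00dff5e02add41819138abb3284d"},
    {"id": "doc2"},
])
parsed = json.loads(payload)
assert [d["id"] for d in parsed["docs"]] == ["doc1", "doc2"]
assert "rev" not in parsed["docs"][1]
```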
C.f. apache/couchdb#1556, details are there
http://docs.couchdb.org/en/2.1.1/api/ddoc/rewrites.html#rewrite-section-a-is-stringified-function
This page says 'Rewrite section a is stringified function'. Is that supposed to be 'Rewrite section is a stringified function'?
I'm also not sure what a 'section' is. Is that just the value of the rewrite field?
In section 2.8.3.1.1. Overview there is a step "copy the .couch files from your 1.x installation". It might be helpful to also mention that the copied files have to be writeable by the "couchdb" user.
From apache/couchdb#1809 👍
According to the docs, purge_seq is still reported as a "number". In reality, it (and probably other fields as well) is now a "string". This is probably related to the "clustered purge" feature. IBM's cloudant-java client is currently the only maintained Java client for CouchDB, and it cannot properly handle db-info queries (judging by the stack trace, this also affects the LightCouch driver, meaning both of these drivers will fail to work).
This leaves the couchdb-java ecosystem in a problematic state for 2.3 and future releases.
While I opened this issue for the Java driver, it would be best to update the documentation for the 2.3 and later releases, and perhaps cover additional breaking changes.
/cc @jiangphcn
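A defensive client-side pattern for this kind of change: treat sequence values as opaque strings rather than parsing them as integers. This is a hypothetical helper illustrating the compatibility issue, not code from any of the drivers mentioned:

```python
import json

def seq_as_opaque(value):
    """Accept a sequence value whether the server reports it as a
    number (older releases) or an opaque string (post clustered
    purge), and never parse it arithmetically."""
    return str(value)

# Example values only; the string form below is made up for illustration.
old_info = json.loads('{"purge_seq": 0, "update_seq": 42}')
new_info = json.loads('{"purge_seq": "0-g1AAAA", "update_seq": "42-g1AAAA"}')

assert seq_as_opaque(old_info["purge_seq"]) == "0"
assert seq_as_opaque(new_info["purge_seq"]) == "0-g1AAAA"
```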
In the Mango documentation, neither the Condition Operators section nor the Sort Syntax section specify the treatment of strings as far as comparison and sorting behave.
For example, is any normalization performed on the strings when comparing equality? Does string sorting follow the Unicode Collation Algorithm, or simply a binary sort on either code units or code points, or something else (e.g. byte-by-byte sort on a UTF-8 encoding)?
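To illustrate why the question matters, the same two strings can sort differently under code-point, UTF-16 code-unit, and UTF-8 byte ordering (this demonstrates the ambiguity, not what CouchDB actually does):

```python
# "\uFB00" is the ligature ff (U+FB00); "\U0001F600" is an emoji (U+1F600).
s1, s2 = "\uFB00", "\U0001F600"

# Code-point order (Python's default string comparison): U+FB00 < U+1F600.
by_code_point = sorted([s2, s1])
assert by_code_point == [s1, s2]

# UTF-16 code-unit order: the emoji's high surrogate 0xD83D sorts
# before 0xFB00, so the order flips.
by_utf16_units = sorted([s2, s1], key=lambda s: s.encode("utf-16-be"))
assert by_utf16_units == [s2, s1]

# Byte-by-byte order on UTF-8 happens to agree with code-point order.
by_utf8_bytes = sorted([s2, s1], key=lambda s: s.encode("utf-8"))
assert by_utf8_bytes == [s1, s2]
```
A Unicode Collation Algorithm sort would be different again (it ignores code-point order in favor of collation weights), which is exactly why the docs should state which strategy Mango uses.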
The link to couch_replicator_utils.erl at Generate Replication ID is pointing to https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=blob;f=src/couch_replicator/src/couch_replicator_utils.erl;h=d7778db;hb=HEAD that is currently broken.
The new URL seems to be https://github.com/apache/couchdb/blob/master/src/couch_replicator/src/couch_replicator_ids.erl
The GET and POST /db/_local_docs endpoints are implemented in CouchDB 2.x and provide a mechanism for retrieving the list of local docs created in a DB.
The description of local docs at Local (non-replicating) Documents is not consistent with the existence of this operation.
It states that "You cannot obtain a list of local documents from the database", but this is possible via the /db/_local_docs operation.
The endpoint is referenced in the 2.0.0 release notes, where it is described as similar to the _all_docs endpoint.
I've checked the CouchDB code, and the implementation is based on the _all_docs handler.
As I'm finding myself contributing more and more PRs to this documentation, I wish I had some guidelines, other than the existing docs, for what's acceptable. For example: what belongs in the whatsnew documentation? And I'm sure many more questions will come up.
Are these things documented somewhere? I'd be happy to turn them into a CONTRIBUTING.md document and submit a PR. :)
In Chapter 11.1.2, which explains how to use the Cluster Setup Wizard, you can find the following lines:
When you click “setup cluster” you are asked for admin credentials again and then to add nodes by IP address. To get more nodes, go through the same install procedure on other machines.
I am unsure what the term "same install procedure" means exactly. During my cluster setup I noticed that I had to run the wizard on only one of the 2 cluster nodes. Could you please explain/document exactly which steps should be performed on each of the other machines?
The current docs for 2.0 onwards show that it is possible to get to the _stats via /_stats, but it actually requires /_node//_stats.
As of 2.1.1 we also support /_node/_local/_stats, which was documented in the release notes only.
Documentation says: In version 2.2, the session plugin is considered experimental and is not enabled by default.
As explained in couchdb#1550 and couchdb#1561, this is not the case; the session plugin is actually enabled by default.
A suggested workaround is require_valid_user until a fix is found.

I am currently writing a little script that helps me set up replication between two CouchDBs. For that, I want to read the current replication jobs and check whether my desired job is already there. If not, I set up a replication. If there are jobs that are no longer required (that don't match my criteria), I want to remove those.
While doing so I fell into some traps:
In the Replication section of the documentation there are both _replicate and _replicator. It took me a while to realize that those two are actually not the same name.
At first I sent my "replication requests" to _replicate instead of _replicator. It took several minutes before something showed up in _scheduler/jobs, and I was confused that there was no way to see my configuration immediately. What is the expected behaviour when you post a replication to _replicate, and what is the expected behaviour when you write it to _replicator?
I would like the documentation to include:
- a note that these two exist, with an explanation of which is used for which use case (I don't know it myself, or I would write a PR);
- an explanation of the expected behaviour along the lines of: when you do X, you can see Y after Z happens. (Something like: when you write to _replicate, it takes at least X seconds and then you can find the results in _scheduler/jobs - not sure if that is correct.)
If you could provide me some information I will gladly write a PR. :-)
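A sketch of the two request shapes under discussion. Endpoint names follow the CouchDB replication docs: POST /_replicate starts a transient replication, while a document written to the _replicator database is a persistent job picked up by the scheduler. The document ID and database names below are made up:

```python
import json

# A shared replication description.
body = {
    "source": "http://localhost:5984/a",
    "target": "http://localhost:5984/b",
    "continuous": True,
}

# Transient replication: posted to the /_replicate endpoint,
# not persisted as a document.
transient = {"method": "POST", "path": "/_replicate", "body": body}

# Persistent replication: a document in the _replicator database,
# visible and editable like any other doc.
persistent = {
    "method": "PUT",
    "path": "/_replicator/my-repl-doc",
    "body": dict(body, _id="my-repl-doc"),
}

assert transient["path"] == "/_replicate"
assert persistent["path"].startswith("/_replicator/")
assert json.loads(json.dumps(persistent["body"]))["continuous"] is True
```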
Documentation should explain how _design/auth works on a database.
As far as I can tell, _design/auth isn't mentioned anywhere in our docs - a clear oversight.
$ cd src/docs/
$ grep -r _design/auth *
$
Should go in the Design Documents chapter since it's a special design document.
I've seen some interest in different CouchDB related projects in the use of seq_interval parameter for changes retrieval optimization.
Cloudant Sync - cloudant/sync-android#534
PouchDB - pouchdb/pouchdb#6643
I can't find a description of this parameter in the documentation.
@willholley describes this parameter in this way:
"The seq_interval parameter instructs Cloudant / CouchDB to skip generation of sequence numbers - i.e. return a seq for every N results. last_seq is always populated. When fetching changes in a batch, it should be possible to effectively set seq_interval= to improve performance and reduce load on the remote database."
In src/api/database/bulk-api.rst:
CouchDB supports two different modes for updating (or inserting) documents using the bulk documentation system. Each mode affects both the state of the documents in the event of system failure, and the level of conflict checking performed on each document. The two modes are:
Yet there is only one mode in 2.x.
I've fielded quite a few questions on this recently in IRC/Slack. Here's what I want to do:
We have a 1.6.x branch for legacy 1.6 documentation, which occasionally gets corrections (at least from me). Now that 2.1.x is nearing release, and 2.0.x is no longer getting new features, we will have a similar situation with the 2.0.x documentation.
My recent PR (#148) appears to touch new features which don't exist in 2.0.x, but it would be nice to update the 2.0.x documentation to reflect the correction made in this PR.
Is creating a 2.0.x documentation branch the best way to do this? And is this a good time to do it?
Currently http://docs.couchdb.org/en/latest/maintenance/compaction.html shows automatic compaction as the last option on the page. Since 2.1.1 it is enabled by default, and since 2.2.0 it even runs smoothly, so the compaction page should start with an introduction explaining that compaction is a thing, but that normal users don't have to do anything about it.
The rest of the content can be left as-is, but framed as "advanced usage".
On http://docs.couchdb.org/en/stable/install/unix.html, RHEL needs a different command from CentOS when installing epel-release: https://access.redhat.com/discussions/3140721
Checking the docs, I've noticed that all links to content in the old wiki seem to be broken.
Currently, there are the following references to the wiki:
src/ddocs/ddocs.rst: <http://wiki.apache.org/couchdb/Formatting_with_Show_and_List#Showing_Documents>`_
src/ddocs/ddocs.rst: <http://wiki.apache.org/couchdb/Formatting_with_Show_and_List#Listing_Views_with_CouchDB_0.10_and_later>`_
src/ddocs/ddocs.rst: <http://wiki.apache.org/couchdb/Document_Update_Handlers>`_
src/ddocs/ddocs.rst: <http://wiki.apache.org/couchdb/Replication#Filtered_Replication>`_
src/ddocs/ddocs.rst: <http://wiki.apache.org/couchdb/Document_Update_Validation>`
src/install/unix.rst: * `Installing CouchDB <https://cwiki.apache.org/confluence/display/COUCHDB/Installing+CouchDB>`_
src/replication/protocol.rst:* `CouchDB documentation <http://wiki.apache.org/couchdb/Replication>`_
In this situation it might be better to remove them from the docs. Is there any reason for keeping them?
I've been advised that all nodes in a cluster should be given the same UUID, but the description of the uuid setting in the docs leads me to believe that each node should have a unique UUID:
Unique identifier for this CouchDB server instance.
It would also be helpful to have some information in the docs about how the UUID is used.
There are no docs for this new 2.1.0 endpoint available.
The instructions for contributing to the CouchDB documentation (I'm reading this here: http://docs.couchdb.org/en/2.0.0/contributing.html) reference the main repo but the docs are now in (this!) separate repo. The source here has already been updated, so would it be possible to regenerate the documentation to reflect this and any other recent changes?
Oops, we left it in for 2.x, as reported at apache/couchdb#1047.
(Technically, you can still reach this endpoint on port 5986 for a single node, but we should remove it from the documentation entirely.)
The GET and POST /db/_design_docs endpoints are implemented in CouchDB 2.x and provide a mechanism for retrieving the list of design documents created in a DB.
The endpoint is referenced in the 2.0.0 release notes, where it is described as similar to the _all_docs endpoint.
I've checked the CouchDB code, and the implementation is based on the _all_docs handler.
The new create_target_params attribute for the replication payload needs to be documented.
Related issues:
There are two links pointing to the Apache Git CouchDB repo that are broken.
at src/query-server/javascript.rst
Broken link - https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=blob;f=share/server/json2.js
New link - https://github.com/apache/couchdb/blob/master/share/server/json2.js
at src/ddocs/views/collation.rst
Broken link - https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=blob;f=share/www/script/test/view_collation.js;hb=HEAD
New link - https://github.com/apache/couchdb/blob/master/test/javascript/tests/view_collation.js
Should be in the HTTP API reference. It's not.
http://docs.couchdb.org/en/2.1.1/whatsnew/2.1.html This is kind of unreadable. We need newline breaks between each bullet point. Is it possible to do it while still keeping our dependency on the upstream?
The current description of the parameters that control the replication scheduler is a bit obscure, especially when it comes to explaining the difference between max_jobs and max_churn and how those parameters interact every "interval".
I'm sending an improvement proposal in a few minutes.
Moved from apache/couchdb#965 to here.
The _purge API documentation for 2.x is not correct: if called, the return code is 501 Not Implemented.
It seems purge was left out intentionally according to https://issues.apache.org/jira/browse/COUCHDB-2851 and will be implemented for 2.x in a future release: https://issues.apache.org/jira/browse/COUCHDB-3326
As well as this documentation being incorrect, there is also no mention of purge not being implemented in the breaking changes or known issues for 2.0 or 2.1.
#177 mentions in the 'whatsnew' doc that max_document_size has been renamed to max_http_request_size, but the main documentation still refers to the old name.
I was preparing to update that in #180, so I looked at the default.ini as suggested in the whatsnew doc, and became more confused. The default.ini does indeed reference the max_http_request_size option, but does not mention single_max_doc_size, and does still mention the (old?) max_document_size option.
I'd like to make this all consistent (before the 2.1.1 release if possible). If I need to UTSL to find the canonical answer, I will, but it's my hope that somebody can enlighten me as to the current state of things. So:
- Is default.ini correct? (In which case whatsnew needs to be updated not to refer to single_max_doc_size.)
- Is whatsnew correct? (In which case the default.ini and main docs need to be updated.)
- Is the story more complex, and full of additional nuance? :)
As of 2.1+ the compactor is enabled by default, but the docs still indicate it is disabled by default.
Instead, they should indicate that the compactor is enabled by default.
The text indicates the compactor is disabled by default.
I have a PR to resolve this issue that I will submit right after I finish this.
Just guiding some folks through using the compaction daemon for automatic compacting.
This is an issue with the text, not a particular browser or environment, so I've left this section more or less empty.
Hi! I'm trying to set the n parameter at database creation, as described in the "12.4.1.1. Shards and Replicas" article.
These settings can be modified to set sharding defaults for all databases, or they can be set on a per-database basis by specifying the q and n query parameters when the database is created. For example:
$ curl -X PUT "$COUCH_URL:5984/database-name?q=4&n=2"
That creates a database that is split into 4 shards and 2 replicas, yielding 8 shard replicas distributed throughout the cluster.
But the n parameter was not changed (it did not override the local.ini setting). After executing such a request, only the q parameter was changed.
Is this a mistake or incomplete information in the documentation?
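The arithmetic behind the quoted example is simply q shard ranges times n copies of each range:

```python
def shard_replicas(q, n):
    """Total shard files across the cluster: q shard ranges,
    each stored n times."""
    return q * n

assert shard_replicas(4, 2) == 8   # the ?q=4&n=2 example above
assert shard_replicas(8, 3) == 24  # the classic 2.x defaults
```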
@cl-parsons writes in apache/couchdb#1759:
On this page of the documentation there is no information about the keys we should use to specify these functions.
In my case I was looking for how to customize the data submitted to the view by adding a timestamp to the object. I found how to do this on the "best practice" page, but I think this should also be specified in the API reference.
I agree, this is a shortcoming. If you'd like to help us, we'd ❤️ a pull request from a passionate user.
I've just been through the process of setting up a CouchDB development cluster for the first time and got stuck for some time before realising that node names must be FQDNs.
The cluster setup documentation does indeed specify "IP or FQDN" in many of the examples, but to my mind, some references de-emphasise the need to use FQDNs for node names.
For example these blocks give the impression that simple hostnames might be sufficient in a local network:
couchdb-documentation/src/setup/cluster.rst
Lines 261 to 272 in 8655ee6
couchdb-documentation/src/setup/cluster.rst
Lines 346 to 357 in 8655ee6
In my case all nodes were connected to a Docker network, and ping-able by hostname within the network.
It was only on reading an issue comment on the main GitHub project by @wohali that I realised my issue was with using the simple hostnames.
In particular, points 4 and 5 clarified the issue, so it might be good to emphasise these in the documentation:
- If DNS is configured, use the FQDN only: -name [email protected]
- Tricks with /etc/hosts don't work as Erlang bypasses libresolv.
Comes from apache/couchdb#1740:
I am having some trouble with use_index and partial indexes in CouchDB 2.2.0. This part of the _find documentation is misleading:
Technically, we don’t need to include the filter on the "status" field in the query selector - the partial index ensures this is always true - but including it makes the intent of the selector clearer and will make it easier to take advantage of future improvements to query planning (e.g. automatic selection of partial indexes).
The documentation should add that an index with fields will never be used unless the selector includes all the fields indexed ("fields": [] works around this problem). This is different from a typical SQL database, which would use an index for queries against any prefix of the list of indexed fields.
I was also confused by using the index's name in use_index. Fauxton should put ddoc in its example index definitions to make it very clear that the index name cannot be used in use_index unless it is accompanied by ddoc.
Given a design doc like this one,
{"ddoc": "_design/obsolete-created", "type": "json", "name": "foo-json-index", "def": {"fields": [{"meta.state": "asc"}, {"meta.created": "asc"}], "partial_filter_selector": {"$and": [{"type": "form"}, {"meta.state": "obsolete"}]}}}
In [172]: db.explain({"use_index":"obsolete-created", "selector":{"meta.state":"obsolete"}})
Out[172]:
{u'dbname': u'db',
u'fields': u'all_fields',
u'index': {u'ddoc': None,
u'def': {u'fields': [{u'_id': u'asc'}]},
u'name': u'_all_docs',
u'type': u'special'},
u'limit': 25,
u'mrargs': {u'conflicts': u'undefined',
u'direction': u'fwd',
u'end_key': u'<MAX>',
u'include_docs': True,
u'reduce': False,
u'stable': False,
u'start_key': None,
u'update': True,
u'view_type': u'map'},
u'opts': {u'bookmark': u'nil',
u'conflicts': False,
u'execution_stats': False,
u'fields': u'all_fields',
u'limit': 25,
u'r': [49],
u'skip': 0,
u'sort': {},
u'stable': False,
u'stale': False,
u'update': True,
u'use_index': [u'obsolete-created']},
u'selector': {u'meta.state': {u'$eq': u'obsolete'}},
u'skip': 0}
In [175]: db.explain({"use_index":"obsolete-created", "selector":{"meta.state":"obsolete", "meta.created": "foobar"}})
Out[175]:
{u'dbname': u'db',
u'fields': u'all_fields',
u'index': {u'ddoc': u'_design/obsolete-created',
u'def': {u'fields': [{u'meta.state': u'asc'}, {u'meta.created': u'asc'}],
u'partial_filter_selector': {u'meta.state': {u'$eq': u'obsolete'}}},
u'name': u'obsolete-created',
u'type': u'json'},
u'limit': 25,
u'mrargs': {u'conflicts': u'undefined',
u'direction': u'fwd',
u'end_key': [u'obsolete', u'foobar', u'<MAX>'],
u'include_docs': True,
u'reduce': False,
u'stable': False,
u'start_key': [u'obsolete', u'foobar'],
u'update': True,
u'view_type': u'map'},
u'opts': {u'bookmark': u'nil',
u'conflicts': False,
u'execution_stats': False,
u'fields': u'all_fields',
u'limit': 25,
u'r': [49],
u'skip': 0,
u'sort': {},
u'stable': False,
u'stale': False,
u'update': True,
u'use_index': [u'obsolete-created']},
u'selector': {u'$and': [{u'meta.created': {u'$eq': u'foobar'}},
{u'meta.state': {u'$eq': u'obsolete'}}]},
u'skip': 0}
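The unambiguous way to name the index above is the two-element form of use_index, [design-doc, index-name], per the _find documentation. A sketch of such a request body, using the names from the design doc shown earlier:

```python
import json

# use_index as ["<ddoc name without _design/>", "<index name>"], and a
# selector that touches every indexed field so the partial index can
# be chosen (per the behavior described above).
find_body = {
    "selector": {
        "meta.state": "obsolete",
        "meta.created": {"$gt": None},
    },
    "use_index": ["obsolete-created", "foo-json-index"],
}

payload = json.loads(json.dumps(find_body))
assert payload["use_index"] == ["obsolete-created", "foo-json-index"]
```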
Joan writes:
@dholth Some of these rules have changed with checkins since 2.2.0 was released: apache/couchdb@8e28fd2
I agree the documentation could be improved, and am moving this issue to our couchdb-documentation repository.
We'd love a pull request if you are interested in helping out! 😉
We need this for our docs.couchdb.org site which is built off of tags on this repo.
We need to ensure the docs are updated with any fixes that went into 1.7.x.
A git bisect or similar on the main apache/couchdb repo will tell you a lot.
Needs a few hints for after installation, with deltas for installing by package (and, maybe, for using the Docker images, though this is still not recommended).
In the [couch_peruser] config section, a new config field q was introduced for peruser-created databases. The change was reflected in default.ini and local.ini in PR apache/couchdb#1030; however, it also needs to be documented.
See also apache/couchdb#875.