couchdb-couch-replicator.git
22 months agoPrevent replicator manager change feeds from getting stuck master 63/head
Nick Vatamaniuc [Fri, 10 Mar 2017 06:15:47 +0000 (01:15 -0500)] 
Prevent replicator manager change feeds from getting stuck

Switch them from `longpoll` to `normal`

This prevents them from getting stuck, which could happen if more than one
`resume_scan` message arrives for the same shard. The first time, a longpoll
changes feed would finish and the end sequence would be checkpointed. But if
another `resume_scan` arrives and the database hasn't changed, the longpoll
changes feed would hang until the db is updated.

There can be multiple `resume_scan` messages because of a race condition between
the db update handler and the scanner component. They are both started
asynchronously at roughly the same time. The scanner finds new shards while the
db update handler notices changes for those shards. If shards are modified quickly
after they are discovered by the scanner, both of those components will issue
a `resume_scan`.

The effect of this is more pronounced when there is a large number of
_replicator shards and constant db creation/deletion/updates.
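
For reference, a minimal sketch of the idea, assuming the standard `#changes_args` record (the surrounding manager code is not shown):
```erlang
-include_lib("couch/include/couch_db.hrl").

%% A "normal" feed returns everything up to the current end sequence and then
%% terminates, so a duplicate resume_scan can no longer leave the feed hanging
%% on an unchanged db the way "longpoll" could.
changes_args(Since) ->
    #changes_args{feed = "normal", since = Since}.
```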

COUCHDB-2964

22 months agoRevert "Don't scan empty replicator databases"
Nick Vatamaniuc [Fri, 10 Mar 2017 06:13:28 +0000 (01:13 -0500)] 
Revert "Don't scan empty replicator databases"

This reverts commit 46aa27fa674a4c1e590aeecd76123e4f91d78fd5.

22 months agoDon't scan empty replicator databases 62/head
Robert Newson [Thu, 9 Mar 2017 18:03:14 +0000 (18:03 +0000)] 
Don't scan empty replicator databases

Every account gets a _replicator database created by default, so the
burden of scanning them all is considerable.

Don't start a changes reader if the database is empty (excluding the
injected _design/replicator design document)
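
A rough sketch of such a check, with the helper name assumed rather than taken from this commit:
```erlang
%% Only scan dbs that contain more than the injected design document.
should_start_changes_reader(Db) ->
    {ok, Info} = couch_db:get_db_info(Db),
    couch_util:get_value(doc_count, Info, 0) > 1.
```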

BugzID: 84311

22 months agoMerge branch '64229-add-new-request-parameter'
Nick Vatamaniuc [Wed, 8 Mar 2017 00:17:26 +0000 (19:17 -0500)] 
Merge branch '64229-add-new-request-parameter'

Closes #61

22 months agoFix unit test after renaming max_document_size config parameter
Nick Vatamaniuc [Tue, 7 Mar 2017 21:42:43 +0000 (16:42 -0500)] 
Fix unit test after renaming max_document_size config parameter

`couchdb.max_document_size` was renamed to `httpd.max_http_request_size`

The unit test was testing how the replicator behaves when faced with a reduced
request size configuration on the target.

COUCHDB-2992

22 months agoMerge remote-tracking branch 'cloudant/couchdb-2992-remove-dead-code'
Nick Vatamaniuc [Tue, 7 Mar 2017 19:45:59 +0000 (14:45 -0500)] 
Merge remote-tracking branch 'cloudant/couchdb-2992-remove-dead-code'

Closes #60

22 months agoRemove unused mp_parse_doc function from replicator 60/head
Nick Vatamaniuc [Tue, 7 Mar 2017 19:38:29 +0000 (14:38 -0500)] 
Remove unused mp_parse_doc function from replicator

It was accidentally left behind when merging Cloudant's dbcore work.

COUCHDB-2992

22 months agoMerge remote-tracking branch 'cloudant/couchdb-3316'
Nick Vatamaniuc [Fri, 3 Mar 2017 15:49:54 +0000 (10:49 -0500)] 
Merge remote-tracking branch 'cloudant/couchdb-3316'

Closes #59

22 months agoMake sure to log db as well as doc in replicator logs. 59/head
Nick Vatamaniuc [Fri, 3 Mar 2017 00:12:47 +0000 (19:12 -0500)] 
Make sure to log db as well as doc in replicator logs.

COUCHDB-3316

22 months agofix crashes when replicator db is deleted 58/head
Robert Newson [Wed, 1 Mar 2017 11:14:02 +0000 (11:14 +0000)] 
fix crashes when replicator db is deleted

BugzID: 83663

22 months agoRevert "Restore adding some jitter-ed sleep to shard scanning code."
Robert Newson [Wed, 1 Mar 2017 11:15:55 +0000 (11:15 +0000)] 
Revert "Restore adding some jitter-ed sleep to shard scanning code."

This reverts commit 45d739af3fcf8b4f8e3ccca152cb3c2d781dc2fc.

22 months agoRestore adding some jitter-ed sleep to shard scanning code. 57/head
Nick Vatamaniuc [Tue, 28 Feb 2017 19:00:22 +0000 (14:00 -0500)] 
Restore adding some jitter-ed sleep to shard scanning code.

Otherwise a large cluster will flood the replicator manager with potentially
hundreds of thousands of `{resume, Shard}` messages. For each one, it
would try to open a changes feed, which can add significant load and has
been seen in production to hit various system limits.

This brings back the change from before the switch to using mem3 shards
for replicator db scans.

Also adds a few tests.

COUCHDB-3311

23 months agoMerge branch 'couchdb-3291-use-infinity'
Nick Vatamaniuc [Wed, 8 Feb 2017 18:11:47 +0000 (13:11 -0500)] 
Merge branch 'couchdb-3291-use-infinity'

Closes #55

23 months agoMerge branch 'couchdb-3291-better-formatting'
Nick Vatamaniuc [Wed, 8 Feb 2017 17:50:36 +0000 (12:50 -0500)] 
Merge branch 'couchdb-3291-better-formatting'

Closes #56

23 months agoUse string formatting to shorten document ID during logging. 56/head
Nick Vatamaniuc [Wed, 8 Feb 2017 17:02:34 +0000 (12:02 -0500)] 
Use string formatting to shorten document ID during logging.

Previously an explicit lists:sublist call was used, but the value was never
used anywhere besides the log message.
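
A minimal illustration of the approach; the exact format string and length are assumptions, not code from this commit:
```erlang
%% Truncate the doc id inside the log call itself rather than pre-truncating
%% with lists:sublist; "~.50s" prints at most 50 characters.
couch_log:error("Replicator: document id `~.50s...` is too long, ignoring.", [DocId]).
```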

COUCHDB-3291

23 months agoSwitch replicator max_document_id_length config to use infinity 55/head
Nick Vatamaniuc [Wed, 8 Feb 2017 16:46:13 +0000 (11:46 -0500)] 
Switch replicator max_document_id_length config to use infinity

The default value is now `infinity` instead of 0

COUCHDB-3291

23 months agocloses #54
Nick Vatamaniuc [Tue, 7 Feb 2017 00:46:26 +0000 (19:46 -0500)] 
closes #54

Merge branch 'couchdb-3291-limit-doc-id-size-in-replicator'

23 months agoAllow configuring maximum document ID length during replication 54/head
Nick Vatamaniuc [Sat, 4 Feb 2017 01:49:32 +0000 (20:49 -0500)] 
Allow configuring maximum document ID length during replication

Currently, due to a bug in the http parser and the lack of document ID length
enforcement, large document IDs will break replication jobs. Large IDs
will pass through the _changes feed and revs diffs, but then fail
during the open_revs GET request. The open_revs request will keep retrying until
it gives up after a long enough time, then the replication task crashes and
restarts with the same pattern. The current effective limit is
around 8k or so (the default buffer size is 8192, and if the first line
of the request is larger than that, the request will fail).

(See http://erlang.org/pipermail/erlang-questions/2011-June/059567.html
for more information about the possible failure mechanism).

Bypassing the parser bug by increasing the recbuf size will allow replication
to finish; however, that means simply spreading the abnormal document through
the rest of the system, which might not always be desirable.

Also, once long document IDs have been inserted in the source DB, simply deleting
them doesn't work, as they'd still appear in the changes feed. They'd have to
be purged or somehow skipped during the replication step. This commit helps
do the latter.

Operators can configure maximum length via this setting:
```
  replicator.max_document_id_length=0
```

The default value is 0, which means there is no maximum enforced; this is the
backwards-compatible behavior.

During replication, if a document exceeds the maximum, that document is skipped
and an error is written to the log:

```
Replicator: document id `aaaaaaaaaaaaaaaaaaaaa...` from source db  `http://.../cdyno-0000001/` is too long, ignoring.
```

and `"doc_write_failures"` statistic is bumped.

COUCHDB-3291

23 months agoFix shards db name typo from previous commit 53/head
Nick Vatamaniuc [Wed, 25 Jan 2017 04:17:26 +0000 (23:17 -0500)] 
Fix shards db name typo from previous commit

The previous commit, which switched to using mem3 for replicator shard
discovery, introduced a typo.

 `config:get("mem3", "shard_db", "dbs")`

should be:

 `config:get("mem3", "shards_db", "_dbs")`

COUCHDB-3277

23 months agoUse mem3 to discover all _replicator shards in replicator manager 52/head
Nick Vatamaniuc [Tue, 24 Jan 2017 14:31:39 +0000 (09:31 -0500)] 
Use mem3 to discover all _replicator shards in replicator manager

Previously this was done via recursive db directory traversal, looking for
shard names ending in `_replicator`. However, if there are orphaned shard
files (not associated with a clustered db), the replicator manager crashes. It
restarts eventually, but as long as the orphaned shard file
without an entry in the dbs db is present on the file system, the replicator manager
will keep crashing and never reach some replication documents in shards which
would be traversed after the problematic shard. The user-visible effect of this
is that some replication documents are never triggered.

To fix this, use mem3 to traverse and discover `_replicator` shards. This approach
was used in Cloudant's production code for many years; it is battle-tested and
doesn't suffer from file system vs mem3 inconsistency.
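
Conceptually the discovery step now looks roughly like this (the caller, the helper name, and the message shape are assumptions, not the actual manager code):
```erlang
-include_lib("mem3/include/mem3.hrl").

%% Resolve a clustered *_replicator db to its local shards via mem3 and
%% signal a scan for each, instead of walking the data directory.
maybe_scan(Server, DbName) ->
    case couch_db:dbname_suffix(DbName) =:= <<"_replicator">> of
        true ->
            [gen_server:cast(Server, {resume_scan, ShardName})
                || #shard{name = ShardName} <- mem3:local_shards(DbName)];
        false ->
            ok
    end.
```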

The local `_replicator` db is a special case. Since it is not clustered, it will
not appear in the clustered db list. However, it is already handled as a special
case in `init(_)`, so that behavior is not affected by this change.

COUCHDB-3277

2 years agoLet "error" replication document updates bypass the VDU function. couchdb-3199 50/head
Nick Vatamaniuc [Fri, 14 Oct 2016 19:13:47 +0000 (15:13 -0400)] 
Let "error" replication document updates bypass the VDU function.

This is necessary in the case where an software upgrade happens with
a more restrictive VDU function. Replicator db might end up having documents
which would not pass validation anymore, leading to a replicator manager
crash when it updates the document with an "error" state.

So, in the case of an "error" state, allow the malformed document so the user can
see the error.

COUCHDB-3199

2 years agoMerge branch '3010-port-429' into apache
Tony Sun [Wed, 5 Oct 2016 17:12:55 +0000 (10:12 -0700)] 
Merge branch '3010-port-429' into apache

COUCHDB-3010

2 years agoMake backoff macros configurable 48/head
Tony Sun [Mon, 3 Oct 2016 17:29:14 +0000 (10:29 -0700)] 
Make backoff macros configurable

COUCHDB-3010

2 years agoAdd tests which check small values of max_document_size setting on the target 49/head apache/master
Nick Vatamaniuc [Mon, 3 Oct 2016 21:01:52 +0000 (17:01 -0400)] 
Add tests which check small values of max_document_size setting on the target

A low max_document_size setting on the target will interact with the replicator;
this commit adds a few tests to check that interaction.

There are 3 test scenarios:

 * A basic test checks the case where individual document sizes are smaller than
  max_document_size yet, when batched together by the replicator, they exceed
  the maximum size. The replicator in that case should split document batches into
  halves, down to individual documents, so that the replication succeeds.

 * The one_large_one_small test checks that a single large document is
  skipped so that it doesn't end up on the target and doesn't crash the
  replication job (so the small document should still reach the target).

 * The third test is currently disabled because of COUCHDB-3174. Once that
  issue is fixed, it will test a corner case in the replicator when it
  switches from using batches and POST-ing to _bulk_docs to using individual
  PUTs with a multipart/mixed Content-Type. Those PUT requests can also return
  a 413 error code, so this tests that explicitly.

Jira: COUCHDB-3168

2 years agoFix handling of 413 responses for single document PUT requests
Nick Vatamaniuc [Tue, 4 Oct 2016 04:18:25 +0000 (00:18 -0400)] 
Fix handling of 413 responses for single document PUT requests

When the replicator finds a document which has an attachment size greater than 64k,
or has more than 8 attachments, it switches to a non-batching mode and posts
each document separately using a PUT request with a multipart/related
Content-Type.

Explicitly handle the case when the response to the PUT request is a 413. Skip
the document and bump the `doc_write_failures` count, just like in the case of the
413 response for a _bulk_docs POST request.
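
A hedged sketch of the added handling; clause shapes and return terms are assumptions rather than the actual couch_replicator code:
```erlang
%% Treat a 413 on a single-document PUT like a 413 from _bulk_docs: report it
%% so the caller can skip the doc and bump doc_write_failures instead of
%% retrying until the job crashes.
handle_put_response(413, _Headers, _Body) ->
    {error, request_body_too_large};
handle_put_response(Code, _Headers, _Body) when Code >= 200, Code < 300 ->
    ok.
```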

Jira: COUCHDB-3168

2 years agoFix replicator handling of max_document_size when posting to _bulk_docs
Nick Vatamaniuc [Mon, 3 Oct 2016 19:30:23 +0000 (15:30 -0400)] 
Fix replicator handling of max_document_size when posting to _bulk_docs

Currently the `max_document_size` setting is a misnomer; it actually configures the
maximum request body size. For single-document requests it is a good enough
approximation. However, _bulk_docs updates could fail the total request size
check even if individual documents stay below the maximum limit.

Before this fix, during replication a `_bulk_docs` request would crash, which
eventually leads to an infinite cycle of crashes and restarts (with a
potentially large state being dumped to the logs), without the replication job
making progress.

The fix is to do a binary split on the batch size until either all documents
fit under the max_document_size limit, or some documents fail to replicate.

If documents fail to replicate, they bump the `doc_write_failures` count.
Effectively `max_document_size` acts as an implicit replication filter in this
case.
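
A rough sketch of the bisection, with helper names and return values assumed (not the actual couch_replicator_worker code):
```erlang
%% Returns the number of documents that had to be skipped (doc_write_failures).
flush_docs(Target, [Doc]) ->
    case post_bulk_docs(Target, [Doc]) of
        {error, request_body_too_large} ->
            couch_log:error("Doc ~p too large for the target, skipping", [Doc]),
            1;
        ok ->
            0
    end;
flush_docs(Target, Batch) ->
    case post_bulk_docs(Target, Batch) of
        {error, request_body_too_large} ->
            %% Split the batch in half and retry each half separately.
            {Left, Right} = lists:split(length(Batch) div 2, Batch),
            flush_docs(Target, Left) + flush_docs(Target, Right);
        ok ->
            0
    end.
```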

Jira: COUCHDB-3168

2 years agoFix timeout clause in backoff retry
Tony Sun [Thu, 21 Jul 2016 04:55:26 +0000 (21:55 -0700)] 
Fix timeout clause in backoff retry

The second clause for a timeout will never be reached because the first
will always match before the second clause. Swap the clauses to
fix this.

BugzId:70400
COUCHDB-3010

2 years agoRetry when connection_closed is received during a streamed response
Tony Sun [Thu, 21 Jul 2016 04:45:14 +0000 (21:45 -0700)] 
Retry when connection_closed is received during a streamed response

The changes_reader uses a streamed response. During the stream, it's
possible to receive a connection_closed error due to timeouts or
network issues. We simply retry the request because for streamed
responses a connection must be established first in order for the
stream to begin. So if the network is truly down, the initial request
will fail and the code path will go through the normal retry clause
which decrements the number of retries. This way we won't be stuck
forever if it's an actual network issue.

BugzId: 70400
COUCHDB-3010

2 years agoHandle 429
Tony Sun [Thu, 23 Jun 2016 17:26:57 +0000 (10:26 -0700)] 
Handle 429

When we encounter a 429, we retry with a different set of retries and a
different timeout. This will theoretically reduce client replication overload.
When the 429s have stopped, it's possible that a 500 error could occur.
In that case the retry mechanism should go back to the original behavior for
backwards compatibility.
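
Illustratively, the 429 path gets its own exponential backoff budget along these lines (macro names and values are assumptions):
```erlang
-define(BACKOFF_RETRIES, 10).
-define(BACKOFF_BASE_WAIT, 250).  % milliseconds

%% Exponentially growing wait between attempts after a 429.
backoff_wait(Attempt) ->
    ?BACKOFF_BASE_WAIT * (1 bsl Attempt).

pause_before_retry(Attempt) when Attempt =< ?BACKOFF_RETRIES ->
    timer:sleep(backoff_wait(Attempt)),
    retry;
pause_before_retry(_Attempt) ->
    give_up.
```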

BugzId:60007
COUCHDB-3010

2 years agoValidate boolean parameters in /_replicate payload 3118-validate-_replicate-payload 47/head
Eric Avdey [Thu, 1 Sep 2016 18:59:13 +0000 (15:59 -0300)] 
Validate boolean parameters in /_replicate payload

2 years agoMerge remote branch 'cloudant:3102-fix-config_subscription'
ILYA Khlopotov [Tue, 23 Aug 2016 21:59:43 +0000 (14:59 -0700)] 
Merge remote branch 'cloudant:3102-fix-config_subscription'

This closes #46

Signed-off-by: ILYA Khlopotov <iilyak@ca.ibm.com>
2 years agoUpdate handle_config_terminate API 46/head
ILYA Khlopotov [Fri, 19 Aug 2016 23:10:24 +0000 (16:10 -0700)] 
Update handle_config_terminate API

COUCHDB-3102

2 years agoFix passing epoch in correctly with rep_db_checkpoint message. 3104-fix-replicator-manager-changes-feed-checkpoint 45/head
Nick Vatamaniuc [Mon, 15 Aug 2016 07:20:44 +0000 (03:20 -0400)] 
Fix passing epoch in correctly with rep_db_checkpoint message.

This bug was previously hidden because this code never ran, due to
another bug in handling the stop callback message from the changes feed.

Jira: COUCHDB-3104

2 years agoFix replicator manager `stop` change feed callback
Nick Vatamaniuc [Mon, 15 Aug 2016 06:50:56 +0000 (02:50 -0400)] 
Fix replicator manager `stop` change feed callback

```
changes_reader_cb({stop, EndSeq, _Pending}, ...) ->
   ...
```

at one point used to handle changes from `fabric:changes`. It was later
optimized to use shard change feeds, but shard change feed callbacks don't get
pending info with the `stop` message.

As a result, the replicator manager would always rescan all the changes in a shard
on any new change.

For reference, where `couch_changes.erl` calls the callback:
 https://github.com/apache/couchdb-couch/blob/master/src/couch_changes.erl#L654

Jira: COUCHDB-3104

2 years agoMerge remote branch 'cloudant:69914-insert-random-delays'
ILYA Khlopotov [Mon, 1 Aug 2016 18:04:31 +0000 (11:04 -0700)] 
Merge remote branch 'cloudant:69914-insert-random-delays'

This closes #44

Signed-off-by: ILYA Khlopotov <iilyak@ca.ibm.com>
2 years agoInject random delays in scan_all_dbs 44/head
ILYA Khlopotov [Fri, 29 Jul 2016 21:32:02 +0000 (14:32 -0700)] 
Inject random delays in scan_all_dbs

couch_replication_server scans the filesystem to find all _replicator
databases. For every database found it does

    gen_server:cast(Server, {resume_scan, DbName})

Extract an independent process that does the gen_server:cast after a random delay.
This effectively removes the stampede and randomizes the order in which we
process _replicator databases.
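
A minimal sketch of the delayed cast; the helper name and the upper bound on the jitter are assumptions:
```erlang
resume_scan_after_jitter(Server, DbName) ->
    spawn(fun() ->
        %% Sleep a random amount before signalling the scan so that
        %% thousands of shards don't all hit the manager at once.
        timer:sleep(rand:uniform(60000)),
        gen_server:cast(Server, {resume_scan, DbName})
    end).
```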

COUCHDB-3088

2 years agoReplication manager's rep_start_pids now contains only {Tag, Pid} items 3082-replication-manager-rep_start_pids-fix 43/head
Nick Vatamaniuc [Tue, 26 Jul 2016 19:56:42 +0000 (15:56 -0400)] 
Replication manager's rep_start_pids now contains only {Tag, Pid} items

Previously the local change feed was added to rep_start_pids as a bare Pid. So if
the replication manager stopped and terminate/2 was called before that change
feed died, then

```
foreach(fun({_Tag, Pid}) -> ... end, [StartPids])
```

would crash with a function clause error.

Make sure to add the replicator db name to the changes feed.

Jira: COUCHDB-3082

2 years agoReplace hard-coded instances of <<"_replicator">> dbs with a macro
Nick Vatamaniuc [Tue, 26 Jul 2016 19:22:52 +0000 (15:22 -0400)] 
Replace hard-coded instances of <<"_replicator">> dbs with a macro

There are 3 of those. For now, replace only the ones which refer to db names, not
the role.

Jira: COUCHDB-3082

2 years agoCheck if worker is alive for clean_mailbox
Tony Sun [Tue, 28 Jun 2016 01:54:43 +0000 (18:54 -0700)] 
Check if worker is alive for clean_mailbox

When a connection:close header is sent from the server, we handle it
by calling ibrowse:stop on the worker and releasing it from the worker
pool. But our clean_mailbox tries to clean the mailbox of this worker
when it's already dead, leading to a timeout that crashes the
changes_reader process and subsequently the replicator process. So
we check to ensure that the Worker is still alive before we call
ibrowse:stream_next.
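
The guard is conceptually this simple (a sketch, not the exact clean_mailbox code; the helper name is assumed):
```erlang
%% Only ask ibrowse for the next chunk if the worker is still alive.
maybe_stream_next(Worker, ReqId) ->
    case is_process_alive(Worker) of
        true -> ibrowse:stream_next(ReqId);
        false -> ok
    end.
```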

BugzId:69053

2 years agoChange process_response clause
Tony Sun [Tue, 28 Jun 2016 01:43:57 +0000 (18:43 -0700)] 
Change process_response clause

An older version of ibrowse would throw an {error, {'EXIT', Reason}}
when a connection:close header was received. In the newer version
of ibrowse, it throws {error, req_timedout} instead. This leads
to a maybe_retry function call because we do not have a clause
that handles this error, which inevitably leads to the replication
process dying once it exhausts the retry limit. So we change
the process_response clause to address this bug. However, this also
means we could end up retrying forever for real timeouts.

BugzId:69053

2 years agoEnsure _design/_replicator VDU is updated
Robert Newson [Thu, 9 Jun 2016 14:39:04 +0000 (15:39 +0100)] 
Ensure _design/_replicator VDU is updated

2 years agoMerge remote branch 'cloudant:fix-some-type-errors'
ILYA Khlopotov [Wed, 25 May 2016 02:00:34 +0000 (19:00 -0700)] 
Merge remote branch 'cloudant:fix-some-type-errors'

This closes #39

Signed-off-by: ILYA Khlopotov <iilyak@ca.ibm.com>
2 years agogen_event: handle_call suppose to return `{ok, Reply, State}` 39/head
ILYA Khlopotov [Wed, 25 May 2016 01:39:23 +0000 (18:39 -0700)] 
gen_event: handle_call is supposed to return `{ok, Reply, State}`

2 years agoAdd jittered delay during replication error handling 37/head
Nick Vatamaniuc [Wed, 27 Apr 2016 19:21:14 +0000 (15:21 -0400)] 
Add jittered delay during replication error handling

For one-to-many replications, when the source fails, it can create a stampede
effect. A jittered delay is used to avoid that. The delay is random, in a range
proportional to the current number of replications, with a maximum of 1 minute.

Seed the random number generator within each replication process with a
non-deterministic value; otherwise the same sequence of delays is generated
for all replications.
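
For illustration, seeding and sleeping might look like this (the seed terms and the helper name are assumptions):
```erlang
jittered_error_delay(MaxDelayMsec) ->
    %% Seed per process so all replication jobs don't share one delay sequence.
    rand:seed(exsplus, {erlang:phash2(self()),
                        erlang:monotonic_time(),
                        erlang:unique_integer()}),
    timer:sleep(rand:uniform(MaxDelayMsec)).
```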

Jira: COUCHDB-3006

2 years agoMerge remote branch 'github/pr/35'
ILYA Khlopotov [Tue, 19 Apr 2016 21:49:32 +0000 (14:49 -0700)] 
Merge remote branch 'github/pr/35'

This closes #35

Signed-off-by: ILYA Khlopotov <iilyak@ca.ibm.com>
2 years agoUse couch_db:dbname_suffix in is_replicator_db 2983-use-dbname_suffix 35/head
ILYA Khlopotov [Fri, 8 Apr 2016 18:17:14 +0000 (11:17 -0700)] 
Use couch_db:dbname_suffix in is_replicator_db

couch_db:dbname_suffix takes the shard's suffix into account.

COUCHDB-2983

2 years agoImplement Mango selectors for replication 36/head
Nick Vatamaniuc [Fri, 15 Apr 2016 22:04:17 +0000 (18:04 -0400)] 
Implement Mango selectors for replication

The replication document should have a
"selector" field with a Mango selector JSON
object as the value.

For example:
```
{
    "_id": "r",
    "continuous": true,
    "selector": {
        "_id": {
            "$gte": "d2"
        }
    },
    "source": "http://adm:pass@localhost:15984/a",
    "target": "http://adm:pass@localhost:15984/b"
}
```

Under the hood, this feature uses the _changes
feed's Mango selector capability.

The replicator docs' JS validation function has
been updated to return an error if it notices the
user has specified both `doc_ids` and `selector`,
or `filter` together with either of the other
two.

Replication options parsing also checks for those
mutually exclusive fields, as replications can be
started from the `_replicate` endpoint not just
via the docs in `*_replicator` dbs.

When generating a replication id, the Mango selector
object is normalized and sorted (JSON fields
are sorted inside objects only). That is done in order
to reduce the chance of creating two different
replication checkpoints for the same Mango selector.

Jira: COUCHDB-2988

2 years agoReduce checkpoint frequency from 5 to 30 seconds 34/head
Nick Vatamaniuc [Thu, 31 Mar 2016 15:50:53 +0000 (11:50 -0400)] 
Reduce checkpoint frequency from 5 to 30 seconds

Use a macro to avoid hard-coding the magic number
in two places.

COUCHDB-2979

2 years agoRevert "Merge remote-tracking branch 'cloudant/2975-restart-replications-on-crash'"
Robert Newson [Fri, 25 Mar 2016 15:46:40 +0000 (15:46 +0000)] 
Revert "Merge remote-tracking branch 'cloudant/2975-restart-replications-on-crash'"

This reverts commit 89d57cd10d36eb9a5b300568bad037d99998e241, reversing
changes made to 197950631b8a73a8c36b744fc9eb00debc15ac03.

2 years agoMerge remote-tracking branch 'cloudant/2975-restart-replications-on-crash'
Robert Newson [Thu, 24 Mar 2016 18:31:28 +0000 (18:31 +0000)] 
Merge remote-tracking branch 'cloudant/2975-restart-replications-on-crash'

2 years agoReduce likelihood of a bad replication job taking down the job supervisor 33/head
Robert Newson [Thu, 24 Mar 2016 13:40:14 +0000 (13:40 +0000)] 
Reduce likelihood of a bad replication job taking down the job supervisor

While we can't disable max_restart_intensity, we can make it unlikely
to happen. Ordinarily, we would want this behaviour, but replication
jobs involve human input. A bad password, a malformed url, etc., can
cause repeated and fast crashing.

For now, we require ten crashes within one second before we would
bounce the job supervisor. In future, we should manage replication
jobs with greater care.
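
In supervisor flag terms the description corresponds to something like the following (illustrative, not necessarily the exact couch_replicator_job_sup values):
```erlang
%% Allow up to 10 restarts within 1 second before the supervisor itself exits.
init([]) ->
    {ok, {{one_for_one, 10, 1}, []}}.
```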

COUCHDB-2975

2 years agoUse transient restart type for all replications
Robert Newson [Thu, 24 Mar 2016 11:29:10 +0000 (11:29 +0000)] 
Use transient restart type for all replications

We want replication tasks to be restarted automatically if they crash
abnormally. Replication tasks that complete or are cancelled (by
deleting the backing _replicator doc or issuing a "cancel":true for
non-persistent jobs) should still exit, should not be restarted, and
should not have their child spec linger in the supervisor.
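
A hedged sketch of what such a child spec looks like; the ids, module names, and shutdown value are assumptions:
```erlang
start_job(RepId, Rep) ->
    ChildSpec = {RepId,
                 {couch_replicator, start_link, [Rep]},
                 transient,    % restart only after an abnormal exit
                 250,          % shutdown timeout, in milliseconds
                 worker,
                 [couch_replicator]},
    supervisor:start_child(couch_replicator_job_sup, ChildSpec).
```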

COUCHDB-2975

2 years agoRemove obsoleted R14-era code
Robert Newson [Thu, 24 Mar 2016 11:26:08 +0000 (11:26 +0000)] 
Remove obsoleted R14-era code

We no longer support R14 so we're dropping R14-specific complications
in the codebase.

COUCHDB-2975

2 years agoMerge remote branch 'github/pr/32'
Eric Avdey [Fri, 11 Mar 2016 21:04:55 +0000 (17:04 -0400)] 
Merge remote branch 'github/pr/32'

This closes #32

Signed-off-by: Eric Avdey <eiri@eiri.ca>
2 years agoFix flaky replicator tests. 32/head
Nick Vatamaniuc [Wed, 9 Mar 2016 20:05:46 +0000 (15:05 -0500)] 
Fix flaky replicator tests.

The replication+compaction test periodically times out
when running under CI. Adjust the writer timeout from 3 to 9 sec.

Also clean up the confusing / unused TIMEOUT_STOP constant.

2 years agoAfter a rescan prevent checkpoints from a previous epoch 31/head
Nick Vatamaniuc [Wed, 9 Mar 2016 00:37:17 +0000 (19:37 -0500)] 
After a rescan prevent checkpoints from a previous epoch

Fix a race condition which happens on rescan: the rescan
function resets all checkpoints for replicator databases.
However, before new change feeds start processing all
documents from sequence 0, a checkpoint could
happen from an existing change feed, which would
effectively result in a range of documents being
skipped over.

Add an `epoch` ref to State. On rescan, update
the epoch. Thread the epoch through the change feed process
and callbacks, then only allow checkpoints from the current
epoch.
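
A sketch of the resulting guard; the state record, message shape, and helper are assumptions:
```erlang
%% Only checkpoint when the message's epoch matches the current one; a stale
%% epoch means a rescan has already reset this db back to sequence 0.
handle_cast({rep_db_checkpoint, DbName, EndSeq, Epoch},
            #state{epoch = Epoch} = State) ->
    {noreply, save_checkpoint(DbName, EndSeq, State)};
handle_cast({rep_db_checkpoint, _DbName, _EndSeq, _StaleEpoch}, State) ->
    {noreply, State}.
```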

JIRA: COUCHDB-2965

2 years agoMerge remote-tracking branch 'cloudant/couchdb-2963'
Benjamin Bastian [Fri, 4 Mar 2016 23:52:11 +0000 (15:52 -0800)] 
Merge remote-tracking branch 'cloudant/couchdb-2963'

2 years agoSwitch replicator manager change feeds to "longpoll" 30/head
Nick Vatamaniuc [Fri, 4 Mar 2016 20:06:41 +0000 (15:06 -0500)] 
Switch replicator manager change feeds to "longpoll"

Fixes replication manager rescans on cluster membership
change.

The replication manager resets all replication db
sequence checkpoints and starts a new replicator db
background scanner. Each replicator database is signaled
to rescan from sequence 0. However, previous change feeds
for each db have to exit first. If they never exit, because
they are "continuous", new change feeds will never start.

Putting change feeds in "longpoll" mode ensures they will
eventually exit.

JIRA: COUCHDB-2963

2 years agoAdjust minimum number of http connections to 2 29/head
Nick Vatamaniuc [Fri, 4 Mar 2016 04:38:20 +0000 (23:38 -0500)] 
Adjust minimum number of http connections to 2

The replication changes feed and the main replicator process could
end up waiting for the http connection to become available, while also
waiting on each other in a gen_server call. So set the minimum
number of http connections to 2 to avoid the deadlock.

JIRA: COUCHDB-2959

2 years ago Merge remote branch 'github/pr/27'
ILYA Khlopotov [Wed, 2 Mar 2016 20:36:54 +0000 (12:36 -0800)] 
Merge remote branch 'github/pr/27'

    - https://github.com/apache/couchdb-couch-replicator/pull/27

    This closes #27

Signed-off-by: ILYA Khlopotov <iilyak@ca.ibm.com>
2 years agoRemove configurable replicator db name 27/head
Nick Vatamaniuc [Fri, 26 Feb 2016 21:10:27 +0000 (16:10 -0500)] 
Remove configurable replicator db name

JIRA: COUCHDB-2954

2 years agoDo not crash in couch_replicator:terminate/2 if a local dbname is used.
Nick Vatamaniuc [Mon, 29 Feb 2016 23:45:58 +0000 (18:45 -0500)] 
Do not crash in couch_replicator:terminate/2 if a local dbname is used.

Even though local source or target database names are not valid
for replication in CouchDB 2.0, do not crash when trying to
strip credentials. The replicator process has to terminate properly
in order to report the error in the replication document for
user feedback.

JIRA: COUCHDB-2949

This closes #28

Signed-off-by: Mike Wallace <mikewallace@apache.org>
2 years agoFix view filtered replication 26/head
Eric Avdey [Tue, 23 Feb 2016 14:13:06 +0000 (10:13 -0400)] 
Fix view filtered replication

The output of the get_view_info function has been normalized
and the URL for the views' info got fixed.

The ddoc's update_seq is not applicable to a database changes feed
used for filtering by non-seq-indexed views, so we'll use the database
update_seq instead.

2 years agoAvoid logging creds on couch_replicator termination 25/head
Mike Wallace [Wed, 10 Feb 2016 14:59:50 +0000 (14:59 +0000)] 
Avoid logging creds on couch_replicator termination

When couch_replicator terminates with an error we log the #rep
record which can contain credentials for the source or target
of a replication, either in the url directly or in an Authorization
header.

This commit adds a function to strip credentials from the #httpdb
records in the #rep record and replaces them with ****.

Specifically this concerns the url and headers fields of the
 #rep.source and #rep.target #httpdb records.

We also add the format_status/2 callback and strip creds from the
 #rep_state record in the gen_server state to prevent the creds
in the state from getting logged in the event of a crash.
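
A sketch of the callback; record and helper names are assumptions:
```erlang
%% Scrub credentials from the state that OTP would otherwise log on a crash.
format_status(_Opt, [_PDict, #rep_state{rep_details = Rep} = State]) ->
    Scrubbed = State#rep_state{rep_details = strip_creds(Rep)},
    [{data, [{"State", Scrubbed}]}].
```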

Closes COUCHDB-2949

This closes #25

2 years agoAdd filtered with query replication test 24/head
Eric Avdey [Fri, 5 Feb 2016 17:15:33 +0000 (13:15 -0400)] 
Add filtered with query replication test

2 years agoFix filtered replication test
Eric Avdey [Fri, 5 Feb 2016 15:28:49 +0000 (11:28 -0400)] 
Fix filtered replication test

2 years agoMerge branch 'github/pr/15'
Alexander Shorin [Thu, 11 Feb 2016 12:18:39 +0000 (15:18 +0300)] 
Merge branch 'github/pr/15'

2 years agoUpdate Travis config 15/head
Alexander Shorin [Mon, 9 Nov 2015 23:43:52 +0000 (02:43 +0300)] 
Update Travis config

- Add license header
- Clone CouchDB faster
- Test against Erlang 18.1 and 18.2
- Drop R14B04 support
- Use new better way to run specific app tests
- Use containers

2 years agoIntegrate with Travis CI
Alexander Shorin [Sun, 23 Aug 2015 10:28:18 +0000 (13:28 +0300)] 
Integrate with Travis CI

3 years agoFix couch_replicator_manager rescans 23/head
Paul J. Davis [Thu, 5 Nov 2015 00:48:07 +0000 (18:48 -0600)] 
Fix couch_replicator_manager rescans

When couch_replicator_manager starts it scans every _replicator database
looking for replications to start. When it starts the replication it
modifies a document in the _replicator database. This change ends up
sending a message back to couch_replicator_manager to rescan the
database. This rescan message had no protection against duplicates,
which would result in many processes re-scanning the same
database over and over.

To fix this, we track the DbName for every scanning process so that if we
get a change for a database we can ignore it, because a scanner
pid is already running. However, we also have to track whether we need to
restart the scanning pid when it finishes, so that we ensure that we
process any changes that occurred during the scan.

COUCHDB-2878

3 years agoThrow bad request when doc_ids parameter is not an array (or null)
Jay Doane [Tue, 14 Jul 2015 04:41:18 +0000 (21:41 -0700)] 
Throw bad request when doc_ids parameter is not an array (or null)

BugzID: 48602

Signed-off-by: Alexander Shorin <kxepal@apache.org>
3 years agorevert cdf8949 (couch_util:rfc1123_date)
Robert Kowalski [Fri, 26 Jun 2015 21:44:03 +0000 (23:44 +0200)] 
revert cdf8949 (couch_util:rfc1123_date)

this got fixed in R14B02 as OTP-9087

COUCHDB-627

3 years agoAdd a test case for filtered replication
ILYA Khlopotov [Fri, 22 May 2015 13:04:39 +0000 (06:04 -0700)] 
Add a test case for filtered replication

This closes #10

Signed-off-by: Alexander Shorin <kxepal@apache.org>
3 years agoRaise eunit tests timeout up to 100s
Alexander Shorin [Fri, 16 Oct 2015 20:47:37 +0000 (23:47 +0300)] 
Raise eunit tests timeout up to 100s

Since we don't use delayed_commits anymore, our tests started to do
more intensive disk I/O work and became slower.

The value is picked as a pessimistic case for slow HDD users who run
some background I/O operations.

3 years agoFix race condition in waiting for compactor in eunit test. 22/head
Nick Vatamaniuc [Fri, 16 Oct 2015 18:05:40 +0000 (14:05 -0400)] 
Fix race condition in waiting for compactor in eunit test.

The monitor waiting for the replicator will sometimes fail with a noproc
error, because there is a race condition between a running
compactor process and setting up its monitor and waiting on it.

This error appears about once or twice in 100 runs.
It can be made to appear more often by tweaking the 50 and 5 values in:
```
 should_populate_and_compact(RepPid, Source, Target, 50, 5),
```
to something like 1, 20.

This commit fixes the race condition by handling noproc.

3 years agoMerge remote-tracking branch 'cloudant/2833-fix-race-condition-during-worker-termination'
Robert Newson [Fri, 16 Oct 2015 09:49:36 +0000 (10:49 +0100)] 
Merge remote-tracking branch 'cloudant/2833-fix-race-condition-during-worker-termination'

3 years agoFix new couch_httpd_multipart:abort_multipart_stream API call
Alexander Shorin [Thu, 15 Oct 2015 19:47:13 +0000 (22:47 +0300)] 
Fix new couch_httpd_multipart:abort_multipart_stream API call

3 years agoFix race condition in worker release on connection_closing state. 21/head
Nick Vatamaniuc [Thu, 15 Oct 2015 17:54:10 +0000 (13:54 -0400)] 
Fix race condition in worker release on connection_closing state.

This is exposed in the replicator large attachments test case,
replicating from local to remote. In the current test configuration
it appears about once in 20-40 runs. The failure manifests
as an {error, req_timedout} exception in the logs from one of the
PUT methods, during push replication. Then the database comparison fails
because not all documents made it to the target.

Gory details:

After ibrowse receives a Connection: Close header it will go into the
'connection_closing' shutdown state.

couch_replicator_httpc handles that state by trying to close
the socket and retrying, hoping that it would pick up a new worker from
the pool on next retry in couch_replicator_httpc.erl:

```
process_response({error, connection_closing}, Worker, HttpDb, Params, _Cb) ->
    ...
```

But it did not directly have a way to ensure the socket was really closed;
instead it called ibrowse_http_client:stop(Worker). That didn't wait for the
worker to die, and the worker was also returned to the pool asynchronously,
in the 'after' clause in couch_replicator_httpc:send_req/3.

This worker, which could still be alive but in a dying process,
could have been picked up immediately during the retry.
ibrowse in ibrowse:do_send_req/7 will handle a dead worker
process as {error, req_timedout}, which is what the intermittent
test failure showed in the log.

The fix:

 * Make sure the worker is really stopped after calling stop.

 * Make sure the worker is returned to the pool synchronously, so that
   on retry a worker in a known good state is picked up (sketched below).
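
A rough sketch of the synchronous stop-and-release, with the helper and pool call names assumed:
```erlang
stop_and_release_worker(Pool, Worker) ->
    Ref = erlang:monitor(process, Worker),
    ibrowse_http_client:stop(Worker),
    %% Wait until the worker is actually dead before it can be reused.
    receive
        {'DOWN', Ref, process, Worker, _Reason} -> ok
    after 1000 ->
        exit(Worker, kill),
        receive {'DOWN', Ref, process, Worker, _} -> ok end
    end,
    couch_replicator_httpc_pool:release_worker_sync(Pool, Worker).
```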

COUCHDB-2833

3 years agoMerge remote-tracking branch 'github/pr/4'
Alexander Shorin [Thu, 15 Oct 2015 16:37:26 +0000 (19:37 +0300)] 
Merge remote-tracking branch 'github/pr/4'

3 years agoHandle un-expected closing of pipelined connections better. 19/head
Nick Vatamaniuc [Fri, 2 Oct 2015 13:55:42 +0000 (09:55 -0400)] 
Handle unexpected closing of pipelined connections better.

If, during a pipelined connection, the server closes its socket
but the http client has more requests to send, ibrowse will
detect that when it sends the next request and throw
{error, connection_closing}.

Handle that error better by closing the socket explicitly
and retrying the pipelined request that failed.

COUCHDB-2833

3 years agoFix crypto deprecations 17/head
Robert Newson [Wed, 23 Sep 2015 18:28:10 +0000 (19:28 +0100)] 
Fix crypto deprecations

COUCHDB-2825

3 years agoInclude originating database 16/head
Robert Newson [Sat, 12 Sep 2015 16:47:13 +0000 (17:47 +0100)] 
Include originating database

Any database whose path ends in /_replicator is considered a
replicator database. Show this name in the active tasks output so that
the doc_id can be easily found in all cases.

3 years agoFix changes worker timeout cleanup
Paul J. Davis [Tue, 9 Jun 2015 16:34:57 +0000 (11:34 -0500)] 
Fix changes worker timeout cleanup

Previously if we timed out waiting for the next message the changes
reader would end up just exiting with an error. Unfortunately the
ibrowse worker doesn't bother noticing that its streaming target has
died and will wait in perpetuity. If the main replication process
happens to be waiting on this HTTP worker it'll block indefinitely and
never make progress in the replication.

This change just ensures that the ibrowse worker is killed which will
cause the main replication pid to restart.

This particular bug has been observed on the Oculus clusters at a fairly
low rate so the cost of restarting a replication shouldn't be an issue.

BugzId: 47971

3 years agoFix stuck changes reader in clean_mailbox
Paul J. Davis [Tue, 21 Jul 2015 21:01:46 +0000 (16:01 -0500)] 
Fix stuck changes reader in clean_mailbox

Due to unfortunate timing issues it was possible for a changes reader to
get stuck in clean_mailbox reading an entire changes feed before
exiting. If the ibrowse call timed out right before ibrowse starts
sending messages then we would see clean_mailbox loop until the changes
feed terminated on the source.

This caps the number of messages that can be cleaned up to a maximum of
sixteen. This limit is rather arbitrary. The cleanup was intended for when
only a couple of messages were lingering; sixteen is much larger than that
without being insanely large.
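
Conceptually the cap works like this (a sketch; the real clean_mailbox also advances the stream, which is omitted here):
```erlang
-define(MAX_DISCARDED_MESSAGES, 16).

clean_mailbox(ReqId) ->
    clean_mailbox(ReqId, ?MAX_DISCARDED_MESSAGES).

clean_mailbox(_ReqId, 0) ->
    ok;
clean_mailbox(ReqId, Count) when Count > 0 ->
    receive
        {ibrowse_async_response, ReqId, _Data} ->
            clean_mailbox(ReqId, Count - 1);
        {ibrowse_async_response_end, ReqId} ->
            clean_mailbox(ReqId, Count - 1)
    after 0 ->
        ok
    end.
```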

BugzId: 49717

3 years agoAdd LICENSE file
Alexander Shorin [Sun, 23 Aug 2015 10:26:41 +0000 (13:26 +0300)] 
Add LICENSE file

3 years agoReturn `{error, {illegal_database_name, Name}}` 14/head
ILYA Khlopotov [Fri, 31 Jul 2015 18:37:44 +0000 (11:37 -0700)] 
Return `{error, {illegal_database_name, Name}}`

3 years agoDistinct User-Agent for the replicator
Robert Newson [Thu, 25 Jun 2015 13:17:39 +0000 (14:17 +0100)] 
Distinct User-Agent for the replicator

closes COUCHDB-2728

3 years agoInfinity timeout, just like all the others :( 2707-merge-couch_replicator-fixes-from-cloudant-fork 11/head
Robert Newson [Tue, 26 May 2015 15:04:24 +0000 (16:04 +0100)] 
Infinity timeout, just like all the others :(

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/e947b392db1eb2de22aac4a4fa12da118fe114b3

3 years agocontinue jobs that aren't _replicator docs
Robert Newson [Thu, 21 May 2015 13:08:42 +0000 (14:08 +0100)] 
continue jobs that aren't _replicator docs

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/ff1ab1b840019601c3e3e04a1d931db6f2ccd2d1

3 years agoFix stream cleanup timeouts
Paul J. Davis [Thu, 21 May 2015 15:02:53 +0000 (10:02 -0500)] 
Fix stream cleanup timeouts

The first part of this is adding the `after 0` clause. The issue here is
that ibrowse sends the `ibrowse_async_response_end` message without
waiting for a call to `ibrowse:stream_next/1`. This means that the
continuous changes feed may or may not get this message in
`couch_replicator_httpc:accumulate_messages/3`. If it does, then we would
end up in an infinite timeout waiting for it. This was an oversight in the
original patch: I meant to include it but forgot.

The second timeout is so that we don't end up halted waiting for a
changes request to finish. If it takes longer than 30s we just crash the
replication and let the manager restart things.

BugzId: 47306

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/2caf39040e4e50c38a7758d4d09e7a8b22ea92d4

3 years agoBe more explicit on values of ?STREAM_STATUS
Paul J. Davis [Wed, 20 May 2015 22:26:59 +0000 (17:26 -0500)] 
Be more explicit on values of ?STREAM_STATUS

Also I should add a note about how the changes ending due to a throw
when processing the last_seq leads to the un-consumed stream messages.

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/20d11c7d342ea77ffd5384d75e9cd570cbcbf5ba

3 years agoEnsure that ibrowse streams are ended properly
Paul J. Davis [Wed, 20 May 2015 22:04:31 +0000 (17:04 -0500)] 
Ensure that ibrowse streams are ended properly

I found a situation where we had livelock in a running application due
to an ibrowse request that hadn't been properly terminated. This
manifested as a cessation of updates to the _active_tasks information.
Debugging this led me to see that the main couch_replicator pid was
stuck on a call to get_pending_changes. This call was stuck because the
ibrowse_http_client process being used was stuck waiting for a changes
request to complete.

This changes request as it turns out had been abandoned by the
couch_replicator_changes_reader. The changes reader was then stuck
trying to do a gen_server:call/2 back to the main couch_replicator
process with the report_seq_done message.

Given all this, it became apparent that the changes feed improperly
ending its ibrowse streams was the underlying culprit. Issuing a call to
ibrowse:stream_next/1 with the abandoned ibrowse stream id resulted in
the replication resuming.

This bug was introduced in this commit:
bfa020b43be20c54ab166c51f5c6e55c34d844c2

BugzId: 47306

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/f9db37a9b293f5f078681e7539fd35a92eb3adec

3 years agoCleanly stop replication at checkpoint time if no longer owner
Robert Newson [Fri, 15 May 2015 15:31:24 +0000 (16:31 +0100)] 
Cleanly stop replication at checkpoint time if no longer owner

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/e5ef7c8a0ee2566b9cd4c02397ee94883d015fa0

3 years agoLog when node up/down events occur
Robert Newson [Fri, 15 May 2015 15:30:55 +0000 (16:30 +0100)] 
Log when node up/down events occur

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/d40bb6f2e603a7c81f777cc1c4c200ad34c3db42

3 years agoReturn owner to improve logging output
Robert Newson [Fri, 15 May 2015 15:30:34 +0000 (16:30 +0100)] 
Return owner to improve logging output

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/3287da36a24b5d061a64cc93814f2f4580fdd4f9

Conflicts:
src/couch_replicator_manager.erl

3 years agoEnsure Live node set is consistent with up/down messages
Robert Newson [Wed, 13 May 2015 18:40:55 +0000 (19:40 +0100)] 
Ensure Live node set is consistent with up/down messages

BugzID: 46617

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/2418c26b0fa7cffb97c2d8348654c42d6a0f1a06

Conflicts:
src/couch_replicator_manager.erl

3 years agodelay and splay replication starts
Robert Newson [Wed, 3 Dec 2014 12:14:10 +0000 (12:14 +0000)] 
delay and splay replication starts

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/d279150d959cfd46cbd77c5dd17f14d6dc3d0291

3 years agoVerify that url really points to a database
Robert Newson [Wed, 3 Dec 2014 11:41:48 +0000 (11:41 +0000)] 
Verify that url really points to a database

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/e73714196936c345d54158e674ab36cab20beeec

3 years agoRemove anonymous fun when starting replications
Robert Newson [Wed, 3 Dec 2014 11:30:51 +0000 (11:30 +0000)] 
Remove anonymous fun when starting replications

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/faa28a6e7f5b460b1d3ca2f77b00ab7d5371021d

3 years agoUse randomized, truncated exponential backoff in event of conflict
Robert Newson [Mon, 1 Dec 2014 11:11:00 +0000 (11:11 +0000)] 
Use randomized, truncated exponential backoff in event of conflict

BugzID: 42053

This is a cherry-pick of:

https://github.com/cloudant/couch_replicator/commit/6e8fbb6a3f2622c14ae605c18ec54cbad7d389f3

Conflicts:
src/couch_replicator_manager.erl