Fixed #383, #380, #379, #378 #384
Conversation
Sorry for piling up changes for 4 tickets, but they were very closely related. I can provide a sample ml-config to test this.
+1
@rlouapre You can pull this locally using upgrade:
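For reference, a hedged sketch of what that could look like with Roxy's built-in upgrade command (the branch name dev is an assumption here; use whichever branch carries these changes):

    ./ml upgrade --branch=dev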
@rlouapre, it hasn't been merged because neither @paxtonhare nor I have had a chance to test it. Have you tried out these changes?
0b3f884 to 0d939da
You can test this and #366 as well relatively easily by following these steps:
<assignments xmlns="http://marklogic.com/xdmp/assignments" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://marklogic.com/xdmp/assignments assignments.xsd">
  <!-- content forest with a local replica -->
  <assignment>
    <forest-name>@ml.content-db</forest-name>
    <replica-names>
      <replica-name>@ml.content-db-rep1</replica-name>
    </replica-names>
    @ml.forest-data-dir-xml
  </assignment>
  <assignment>
    <forest-name>@ml.content-db-rep1</forest-name>
    @ml.forest-data-dir-xml
  </assignment>
  @ml.test-content-db-assignment
  @ml.test-modules-db-assignment
  @ml.rest-modules-db-assignment
  <!-- modules forest with a local replica -->
  <assignment>
    <forest-name>@ml.modules-db</forest-name>
    <replica-names>
      <replica-name>@ml.modules-db-rep1</replica-name>
    </replica-names>
  </assignment>
  <assignment>
    <forest-name>@ml.modules-db-rep1</forest-name>
  </assignment>
  @ml.schemas-assignment
  @ml.triggers-assignment
</assignments>
The end result should be that all forests have a local replica for failover.
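To apply a config like this, a minimal sketch (assuming a standard Roxy project and a "local" environment; adjust the environment name to your setup):

    ./ml local bootstrap

Bootstrap should then create the replica forests alongside the master forests and attach them, so failover kicks in if a host goes down.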
Just saw this PR has been merged. Tested it successfully on a simple 3-node cluster. A couple of questions, e.g. can more than one replica be defined per forest, like this:
  <forest-name>@ml.content-db</forest-name>
  <replica-names>
    <replica-name>@ml.content-db-rep1</replica-name>
    <replica-name>@ml.content-db-rep2</replica-name>
  </replica-names>
@rlouapre Keep in mind that for databases with a forests-per-host setting, a forest is created on each host of the target group, and a replica would be created for each of those forests. It doesn't make sense to create n replicas for each forest; that would completely kill the performance of the cluster, since each replica adds 100% I/O for that forest. Yes, Roxy bootstrap is designed to keep things in place as much as possible and only add changes. If you started with 3 nodes, did a bootstrap, and then added a 4th node, you can rerun bootstrap to add more forests. I haven't tried how well that works with replica forests; they might get redistributed, I'm not sure. If this still leaves questions or issues, maybe open new tickets.
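For context, a hedged sketch of how forests-per-host is usually driven from a Roxy project's deploy/build.properties (the property name below is based on the default Roxy properties file; verify it against your own build.properties):

    # number of content forests Roxy creates on each host of the target group;
    # with replica support, each of these forests gets its own local replica
    content-forests-per-host=1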
Previously this only worked when <forests-per-host> was used; now it also works without it.
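For reference, a sketch of what that element might look like in ml-config; the placement shown here (as a child of a <database> element) is an assumption, so verify it against the sample ml-config that ships with Roxy:

    <database>
      <database-name>@ml.content-db</database-name>
      <!-- assumption: forests-per-host as a direct child of the database element -->
      <forests-per-host>2</forests-per-host>
      <!-- other database settings omitted -->
    </database>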