Yes, if a majority of the replica set instances are in DC1, then the loss of
DC2 and DC3 will not affect your primary in DC1, since it can still contact
a majority. However, if DC1 goes down, you will have to do a manual
reconfiguration to move the primary to DC2 or DC3.
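That manual step is usually a forced reconfig from the mongo shell on a surviving member. As a sketch, you could filter the unreachable members out of the config and apply it with `rs.reconfig(cfg, {force: true})`; the `stripMembers` helper and host names below are mine, not part of MongoDB:

```javascript
// Sketch: build a forced-reconfig document that drops unreachable members.
// In the mongo shell you would call:
//   rs.reconfig(stripMembers(rs.conf(), ["dc1a:27017", "dc1b:27017"]), {force: true})
function stripMembers(cfg, unreachableHosts) {
  return {
    _id: cfg._id,
    version: cfg.version + 1, // a forced reconfig still needs a bumped version
    members: cfg.members.filter(m => !unreachableHosts.includes(m.host))
  };
}
```

Be careful with forced reconfigs: if the "down" data center is actually just partitioned away, you can end up with two primaries.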
In short, there is no way to guarantee that a majority of the instances will
always be available if enough of your nodes go down; all you can do is add
diversity (multiple servers, racks, power supplies, routers, ISPs, data
centers, ...) to reduce the chance of losing contact with a majority of
the replicas. But one node in each of three geographically diverse data
centers gets you 95% of the way there.
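The majority arithmetic behind that recommendation is simple enough to sketch (the threshold is strictly more than half the voting members; the helper below is mine, not a MongoDB API):

```javascript
// Sketch: can the surviving nodes elect a primary?
// A replica set needs a strict majority of voting members.
function hasMajority(surviving, total) {
  return surviving > Math.floor(total / 2);
}

// One node in each of 3 DCs: losing any one DC leaves 2 of 3 - a majority.
hasMajority(2, 3); // true

// 5 nodes split 3/2 across only two DCs: losing the 3-node DC leaves 2 of 5.
hasMajority(2, 5); // false
```

That's why spreading across three locations matters: with only two, whichever site holds the majority is a single point of failure for elections.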
I am currently building a sharded cluster made up of 5-member replica sets:
two members at the main data center, two at our alternate location, and an
arbiter in a third location. Both members at the main site have priority 9
and the members at the alternate site have priority 1. This makes the main
site members the preferred primaries; I do that because our analytic batch
jobs run at the main site, so I want the primary to be there if possible. If
you have a good enough network (we don't), it shouldn't matter where the
primary is, but you will find it usually does.
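As a sketch, the config for one of those replica sets might look like this (host names are hypothetical placeholders; you would pass the document to `rs.initiate()` or `rs.reconfig()` in the mongo shell):

```javascript
// Sketch of a 5-member set: 2 high-priority members at the main site,
// 2 low-priority members at the alternate site, and an arbiter elsewhere.
const cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "main1.example.com:27017", priority: 9 },
    { _id: 1, host: "main2.example.com:27017", priority: 9 },
    { _id: 2, host: "alt1.example.com:27017",  priority: 1 },
    { _id: 3, host: "alt2.example.com:27017",  priority: 1 },
    { _id: 4, host: "arb1.example.com:27017",  arbiterOnly: true }
  ]
};
```

The priority gap means an alternate-site member only becomes primary when neither main-site member is electable.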
With this setup, I can take down either member at the main site and be
assured the primary will remain local. If both main-site nodes (or the
site itself) go down, the replicas at the alternate site consult the
arbiter to see whether one of them should become primary. If I lose a node
during maintenance, the replica set remains writable. It's as much
availability as I can afford.
You are correct to worry about a failure during maintenance. Our operations
group tends to do all kinds of updates in short outage windows, so a
firewall reconfiguration at the same time as an OS patch/reboot, at the
same time as I'm upgrading a database, is entirely possible.
Hope this helps
You received this message because you are subscribed to the Google Groups "mongodb-user"