2016-08-03 06:06 AM
Suppose I were to set up a 4-node cluster in a large datacenter and put a certain distance between the two pairs of nodes: different racks, different rooms/areas.
If I put the pair of CN1610 cluster switches in the middle, what is the maximum supported distance between nodes and switches?
Solved!
2016-08-03 10:44 AM
NetApp sells SFP+ modules that are 10GBase-SR and cables up to 30 m... but I believe the standard supports up to 300 m with the right fiber. Here's the blurb from Cisco on their SFP+ specs:
The Cisco 10GBASE-SR module supports a link length of 26 meters on standard Fiber Distributed Data Interface (FDDI)-grade multimode fiber (MMF). Using 2000 MHz*km MMF (OM3), up to 300-meter link lengths are possible. Using 4700 MHz*km MMF (OM4), up to 400 meter link lengths are possible.
I'd be careful to check with your SE before going over the 30 m limit, though, as the longer distances may not be officially supported.
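To keep the quoted reach figures straight, here's a minimal sketch that encodes them in a lookup table. The names and function are purely illustrative, not part of any NetApp or Cisco tool, and the numbers come straight from the Cisco blurb above:

```python
# Quoted 10GBASE-SR reach limits by multimode fiber grade
# (illustrative only; always confirm supported limits with your SE).
SR_REACH_METERS = {
    "FDDI-grade": 26,   # standard FDDI-grade MMF
    "OM3": 300,         # 2000 MHz*km MMF
    "OM4": 400,         # 4700 MHz*km MMF
}

def link_supported(fiber_grade, distance_m):
    """Return True if the run fits within the quoted reach for that fiber grade."""
    return distance_m <= SR_REACH_METERS.get(fiber_grade, 0)

print(link_supported("OM3", 150))         # True
print(link_supported("FDDI-grade", 100))  # False
```

The point is simply that the reach depends on the fiber grade, not the SFP+ module alone.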
2016-08-04 12:54 AM
Your words confirm what I knew about SFP+ types and cables. OM3 is the type normally used among the MM ones.
I'll do another check, but I'm quite sure it should be possible to set up a cluster this way.
2016-08-04 01:08 AM
Keep in mind that longer distance increases the likelihood of a partial cluster. It is far easier to lose half of your nodes if they are in another building than if they are in another rack. In that case, if you lose quorum, your cluster will be in a severely restricted mode, and if the outage is prolonged you will have a problem.
That is the reason why stretch MetroCluster exists. You have to weigh all the pros and cons and understand the impact of a cluster split.
This is not a decision that can be made based on simple math and cable data sheets.
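A rough illustration of why an even 2+2 split is risky: quorum needs a simple majority of nodes, and in ONTAP an even tie is broken by the node holding epsilon. This sketch is a simplified model of that voting logic, not actual ONTAP code:

```python
def has_quorum(nodes_up, total_nodes, holds_epsilon=False):
    """Simplified quorum model: strict majority wins; epsilon breaks an even tie."""
    if nodes_up * 2 > total_nodes:
        return True  # strict majority of nodes is reachable
    if nodes_up * 2 == total_nodes and holds_epsilon:
        return True  # even split: only the partition holding epsilon keeps quorum
    return False

# 4-node cluster split evenly across two rooms:
print(has_quorum(2, 4))                      # False: the half without epsilon
print(has_quorum(2, 4, holds_epsilon=True))  # True: only the epsilon half survives
```

So in a symmetric split of a 4-node cluster, at most one half can keep serving, which is exactly the scenario the distance makes more likely.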