fs_size_fixed refers to the filesystem size, not the volume size, and the filesystem size is changed automatically to match the source the next time a SnapMirror update completes. After that the volume size can be reduced to match the filesystem size.
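As a rough sketch, assuming a 7-Mode destination volume ("dst_vol" and the target size are placeholders, and the option can normally only be changed once the volume is no longer a mirror destination):

    vol status -v dst_vol                     # check whether fs_size_fixed is on
    vol options dst_vol fs_size_fixed off     # only after the mirror is broken/released
    vol size dst_vol 500g                     # shrink the volume to match the filesystem size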
See https://mysupport.netapp.com/info/communications/ECMP1147237.html for a description of the 4321 firmware problem. The information I have says the drives are not field upgradeable and must be replaced.
- if we ignore NetApp side for a moment, on that picture above there would be no link redundancy between ESX host and switches? (just two NICs on two different subnets)

Well, I can't speak for the TR author, but the picture is titled "Storage side ..." so the ESX side should not be considered an authoritative guideline. Also, different logical subnets do not by themselves imply different physical broadcast domains (VLANs). I view it more as a conceptual outline. But I agree that it makes things confusing. You have access to fieldportal, right? Go to the TR and submit a comment ...
It depends on your objective. It is easier to build a redundant, failure-tolerant and load-balanced connection to storage using SAN than NFS. OTOH, NFS is easier to integrate with NetApp features (you have one layer less).
It depends on how thoroughly you want the data to be wiped. The simplest approach is to boot into the special boot menu and select option 4 - this will zero all disks assigned to the controller and create a new configuration, similar to what you get from the factory. Your old configuration data will be lost.
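Roughly, the procedure looks like this (the exact menu wording varies between ONTAP releases, so treat this as a sketch):

    1. Connect to the console and reboot the controller.
    2. Press Ctrl-C when prompted to enter the special boot menu.
    3. Select option 4 ("Clean configuration and initialize all disks").
    4. Confirm; zeroing runs before initial setup and can take many hours with large drives.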
DNS is evaluated during the mount request only, so it does not help when connectivity to a datastore is lost. Even if ESX could transparently remount, it would still mean that anything running off this datastore had crashed. Not to mention that the DNS server has no idea of interface connectivity on ESX, so it can return the same non-working address. DNS is for load balancing, not for failover.
I do not see anything about failover in this post, sorry. What they say is that you may need to explicitly configure a route to the datastore to overcome the single default route. As long as you use a single interface per link, this configuration is not highly available from the point of view of a single ESXi server. But nothing prevents you from adding more interfaces to each subnet and pooling them.
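For illustration, such a static route can be added on ESXi like this (the subnet and gateway addresses are made up, substitute your own):

    esxcli network ip route ipv4 add -n 192.168.20.0/24 -g 192.168.10.1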
No. An NFS mount is associated with an IP address; if this address is unavailable, you cannot access the NFS server. The only, rather crude, workaround is to fail over when a port loses connectivity (negotiated failover, NFO). Or switch to cDOT, which implements transparent IP address failover between multiple physical ports.
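A minimal 7-Mode NFO sketch, assuming interface e0a carries the NFS traffic (persist the ifconfig line in /etc/rc to survive reboots):

    ifconfig e0a nfo                                      # mark the interface for negotiated failover
    options cf.takeover.on_network_interface_failure on   # allow HA takeover when the NIC fails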
No, that's not true. The only difference is that if you use SAN in a cluster, the supported number of nodes is 8. But those nodes can serve both SAN and NAS.
Yes, the same considerations apply to each HA pair. You can either use “system node halt -inhibit-takeover” for each node or globally disable takeover using “storage failover modify”. I guess a cluster is never expected to be switched off completely, so nobody thought about documenting it … ☺
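For illustration, either variant could look like this (node names are placeholders; verify against the administration guide for your release):

    storage failover modify -node * -enabled false        # variant 1: disable takeover cluster-wide, then halt the nodes
    system node halt -node node01 -inhibit-takeover true  # variant 2: halt each node with takeover inhibited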
You do not even need to set a label - you can simply do “snapmirror update -source-snapshot”. The downside is that it makes scripts more complicated - create a snapshot, start the transfer, monitor for successful transfer, delete the snapshot.
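A minimal sketch of that sequence in clustershell terms (the vserver, volume and snapshot names are placeholders):

    volume snapshot create -vserver svm1 -volume vol1 -snapshot xfer1
    snapmirror update -destination-path dst_svm:vol1_dst -source-snapshot xfer1
    snapmirror show -destination-path dst_svm:vol1_dst -fields state,status,last-transfer-error
    volume snapshot delete -vserver svm1 -volume vol1 -snapshot xfer1   # only after the transfer succeeded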
OK, I apologize - I apparently misunderstood how cDOT SnapVault works. As there is just a single schedule that transfers everything since the last base snapshot, my idea won’t really work. I was a bit confused by an example in the documentation that suggests it is somehow possible to transfer only some of the snapshots. I do not see how that is possible.
Oh, I thought they belonged to your company. If not, unfortunately, you need to either find someone who owned them or try to ask NetApp or the original reseller. Sorry for the confusion.
License information is available on the support site; even if no current maintenance agreement exists, you should still be able to look up the licenses for your systems.
A rebuild onto a non-zeroed spare starts immediately. There is no need to zero a drive that is used as a replacement for a failed one - the replacement drive is going to be rebuilt and completely rewritten anyway.
Re non-zeroed spares - that's not quite correct. Zeroed spares are relevant only when adding disks to an aggregate (or creating one). Replacement of a failed drive starts immediately; it does not try to zero the disk first. So non-zeroed disks are still perfectly useful as spares.
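If you want spares pre-zeroed anyway, so that adding them to an aggregate does not have to wait for zeroing, on 7-Mode:

    aggr status -s      # list spares and whether they are zeroed
    disk zero spares    # zero all non-zeroed spares in the background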
Unfortunately there is very little hope of recovering the RAID now, after the drives were physically replaced. As long as the original drives remained in place, there was hope of trying to unfail them; once they are rewritten, the data on them is gone. If you still have the originals, putting them back and attempting an unfail is the last resort.
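On 7-Mode that last-resort attempt would look roughly like this (the disk name is a placeholder; this is an advanced-privilege command, so involve support before trying it on real data):

    priv set advanced
    disk unfail -s 0a.17    # -s returns the disk to the spare pool after unfailing
    priv set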
I think it would be helpful if you explained why the standard SnapVault configuration is not suitable. So far it appears to do everything you need; the only downside is the extra snapshot on the source, but you need just one of them and can create it as often as needed, so it does not grow much.
You should make sure all disks on the same site are in the same pool. This is to ensure proper hot spare selection. If you do not have any aggregates on these disks, you can simply assign them to the correct pool. Otherwise you would have to do it from maintenance mode.
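For example, on 7-Mode (the disk name and pool number are placeholders; reassigning a disk that already has an owner may need -f, and disks in use by an aggregate have to be handled from maintenance mode as noted above):

    disk show -v             # list disks with their owner and pool
    disk assign 0a.16 -p 1   # put disk 0a.16 into pool 1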