This all sounds quite sane, but in the end it depends on your goal. Local snapshots give you protection against logical errors or operator faults. If all of a VM's virtual disks are located on the same volume, a volume snapshot gives you a crash-consistent backup. For “not so important” VMs that could be enough. To have more confidence in recoverability you could use something like SMVI to ensure data consistency before the snapshots are created. If you need protection against physical data loss, you will need to create a secondary data copy on tape or on another storage system. In the case of NetApp the obvious method is SnapVault, which is the D2D backup tool for NetApp environments. Incidentally, Commvault has very good support for NetApp, including snapshots of virtual clients, SnapVault and D2D(2T), and is perfectly capable of doing everything you described. What is your goal? Do you intend to free some CV licenses?
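Purely as an illustration of the SnapVault route (hostnames, volume and qtree names are made up; check the SnapVault documentation for the exact procedure on your release), the 7-Mode baseline transfer is started roughly like this, after snapvault.enable and snapvault.access have been set on the primary:

secondary> options snapvault.enable on
secondary> snapvault start -S primary:/vol/vmdata/q_vm1 secondary:/vol/sv_backup/q_vm1

After the baseline, snapvault snap sched on both systems controls how often updates are transferred and how many backup snapshots are kept.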
If you did not enable root= for your client, your client's root does not browse files as root, but as user "nobody" on the filer. So you at least have to check the WAFL credentials cache for this user, not for root. Do you have an entry in /etc/usermap.cfg?
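Just as an illustration (the hostname and domain below are examples, not a recommendation), the relevant pieces could look roughly like this:

/etc/exports:     /vol/data  -sec=sys,rw,root=adminhost
/etc/usermap.cfg: MYDOMAIN\administrator == root
filer> exportfs -a

exportfs -a re-reads /etc/exports; the wcc command can then be used to inspect or flush the WAFL credential cache so that the old "nobody" mapping is not reused.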
but we are unable to do that due to PDU power and other obstructions at the back of the rack. That's bad, and I would try to change it. First, cooling is front to rear; if something obstructs the filer at the rear, you are risking increased temperatures. Second, as you have seen, it makes non-disruptive maintenance impossible.
I think it should work, but I suggest opening a case with NetApp and asking them to provide an action plan and to confirm that it can be done non-disruptively.
If the system is under support, open a case with NetApp. Reset the CFE variables using set-defaults at the CFE prompt and see if it helps. Try running diagnostics using boot_diags. If that fails as well, there is a small chance that the boot device is corrupted; try to netboot. If even that fails, I am afraid you have a hardware issue; the FAS270 is a single FRU as a whole.
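A rough sketch of that sequence at the console (treat it as an outline only; the exact steps and the netboot URL depend on your environment):

CFE> set-defaults
CFE> bye                      (reboot and see if the problem persists)
CFE> boot_diags               (boot the diagnostics image)
CFE> netboot http://<webserver>/<path-to-netboot-kernel>   (only after configuring the interface with ifconfig at the CFE prompt)

If none of these gets the system up, it is most likely the controller itself, and on the FAS270 that means replacing the whole module.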
500 m is the “official” distance at which FC at 1 Gb/s over OM2 cable (the one used at the time FC was initially deployed) should work. The real limit is determined not by distance as such, but by signal attenuation, which is a factor of cable quality, number of connectors, SFP/GBIC sensitivity etc. In real life the achievable distance may be better or worse. You could try to use FC and monitor for errors. Even better would be to hire somebody to measure the signal loss. Here is a NetApp document describing some considerations about FC distance: http://now.netapp.com/NOW/knowledge/docs/san/guides/CFO_cable/cabledistance.shtml
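Purely to illustrate the arithmetic (the figures below are generic textbook values, not taken from the NetApp document – measure or look up the real numbers for your components):

  fiber loss:     0.5 km x ~3.5 dB/km (OM2 at 850 nm)  ≈ 1.8 dB
  connector loss: 4 connectors x ~0.5 dB each          ≈ 2.0 dB
  total loss                                           ≈ 3.8 dB

The link should work if this total, plus some safety margin, stays below the optical power budget of the SFP/GBIC pair (minimum transmit power minus receiver sensitivity).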
As I just answered on the support forums ☺ The X274 came in two variants; both are OK for the mk2, but the X274A is 1Gb/s only. You need to check based on the model number.
For how long, and how many users need access to the “foreign” domain? The following would work: create a “shadow user” in Domain A for every user in Domain B that needs access to shares in it, and let the users in Domain B connect to the shares as their shadow counterparts. This is a lot of manual administrative work to maintain the user list, and there is a potential problem with changing passwords for the shadow users; but if you need it only for a short transitional period, it could work. Another possibility is to create a third domain C and let it trust both A and B. But one way only ☺
I thought I had it, but I cannot find it. This was mentioned in various discussions several times as well. I may be wrong about “best practice”, because I cannot find any reference to it. I apologize for the confusion.
As is, the two disks will be added to the RG that already has 14 disks. First set the RG size to 14 and then add the disks – they will go into a new RG (see the sketch below). Setting the RG size does not affect existing raid groups.
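A rough sketch of the sequence in 7-Mode (the aggregate name and disk count are examples):

filer> aggr options aggr1 raidsize 14
filer> aggr add aggr1 2

If you want to be explicit about placement, aggr add aggr1 -g new 2 forces the new disks into a new raid group.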
Well … I tend to think that this is supported, because KB article https://kb.netapp.com/support/index?page=content&id=3011253 says that an optical connection is fully supported up to the limits discussed in the cable distance document (http://now.netapp.com/NOW/knowledge/docs/san/guides/CFO_cable/cabledistance.shtml). The latter clearly shows an example with an intermediate patch panel. For mid-range distances in a stretch MetroCluster you may again need to go via a patch panel, which is apparently supported as well. I do not see any difference between a standard HA pair and a stretch configuration with regard to shelf connection. Of course, I'd love it if someone from NetApp confirmed this. Keep in mind that it makes loop troubleshooting a bit more difficult – you will have to prove that the problem is not in the part between the patch panels ☺
Yes. First unassign them on the current owner (disk assign -s unowned). It has to be done for each disk separately. Then just assign them on the partner. Be sure to disable disk.auto_assign while doing this, otherwise the disks may be magically assigned back.
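A minimal sketch of the flow (the disk name is an example; some Data ONTAP releases require priv set advanced for the unassign step):

filer1> options disk.auto_assign off
filer2> options disk.auto_assign off
filer1> disk assign 0a.23 -s unowned -f
filer2> disk assign 0a.23

Re-enable disk.auto_assign afterwards if you normally rely on it.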
Actually, I cannot understand why compressed data is not cached in PAM. It could potentially save more disk reads compared with uncompressed data. I can understand why transient decompressed data is not entered into PAM; that is basically a CPU vs. space decision. But not for compressed blocks ...
Sorry for not being clear. What I had in mind was something about the lifecycle of the data.

1. The client writes data to the NetApp. The data comes in as uncompressed blocks; they were called "A" in my example.
2. The NetApp compresses the incoming data. This results in new blocks, blocks "B" in my example. "B" is written to disk.
3. The client makes a read request for data in blocks "B". "B" is fetched from disk into memory and gets uncompressed, resulting in blocks C (or A').
4. Some blocks from C are sent to the client as requested.

Hope that clarifies it. So you are saying that in the above, "C" will never enter PAM. Is that right? But what about "B"? I would expect those blocks to be retained for future reads.
I realized there is some confusion here. Maybe you could clarify this. When we speak about compression we have the original blocks (let's call them A), blocks that contain compressed data and are physically stored on disk (let's call them B), and transient blocks that contain the data uncompressed from B during a read (let's call them C). So – are all of A, B and C not cached in PAM, or only some of them?
As far as I understand, it should work by creating an additional auxiliary copy for the existing storage policy and selecting the SnapVault copy as the source.
“Adding a second chassis to an existing HA system”: http://now.netapp.com/NOW/knowledge/docs/hardware/filer/215-06156_A0.pdf

---
With best regards
Andrey Borzenkov
Senior system engineer
Service operations
In the meantime, here is the answer from the same FAQ:

3.4 WHAT FIELD UPGRADES ARE AVAILABLE WITH THE FAS/V3200?
A FAS/V3210 standalone controller can have a second controller added to become a FAS/V3210A.
A FAS/V3240E or FAS/V3270E single-chassis standalone controller can have a second chassis with a controller and IOXM added to become a FAS/V3240AE or FAS/V3270AE.
A FAS/V3240A or FAS/V3270A single-chassis standalone controller can have a second chassis and IOXM added to convert to a FAS/V3240AE or FAS/V3270AE (September 2011 quote tool update and conversion procedure posted on the NOW site).
I doubt it will work, for two reasons:

1. If you have a 16TB deduplicated volume and copy it to a 64-bit aggregate, effectively undoing deduplication, you are likely to end up with a volume exceeding 16TB in the first place, so no further dedup will be possible.
2. Turning dedup off (“sis off”) still leaves the volume deduplicated; it just prevents new data from being subject to fingerprint computation. So the 16TB limit still applies (see the example below).
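Point 2 is easy to see on the system itself (the volume name is an example):

filer> sis off /vol/bigvol
filer> df -s /vol/bigvol

df -s still reports the space saved by deduplication even though sis is off – the existing shared blocks stay shared.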
Sorry, I overlooked “dedupe disabled”, but the answer is the same – any possible method of moving data from 32-bit to 64-bit will effectively undo deduplication in 8.0.x.
I would tentatively say “yes” for Data ONTAP 8.1. There you can use VSM from 32-bit to 64-bit, which effectively converts the target to a 64-bit volume while retaining deduplication. Otherwise you would have to do QSM (or similar), which first un-deduplicates the data, so you would need to deduplicate the target again. As long as we are talking about 8.1, an easier route could be to simply grow the source aggregate beyond the 16TB limit, thus getting a 64-bit deduplicated volume in place.
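A minimal sketch of the VSM route in 8.1 (system, aggregate and volume names are examples; SnapMirror licensing and snapmirror.access setup are omitted):

dst> vol create vol64 aggr64 16t
dst> vol restrict vol64
dst> snapmirror initialize -S srcfiler:vol32 dstfiler:vol64

Once the mirror is in sync, snapmirror break makes the 64-bit copy writable, with deduplication intact.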