I'm sure that this is not supported and will not work, though I can't find any documents at the moment which state this explicitly. -Michael
Try "df -sg" (or "priv set diag; sis stat; priv set"). You could also check "aggr show_space -g", which shows the total used blocks in the aggregate (after deduplication); see the sketch below. -Michael
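A minimal sketch of the checks on the filer console, with placeholder volume and aggregate names (the exact output columns vary by Data ONTAP release):
df -sg /vol/<volname>            (per-volume used and saved space, in GB)
priv set diag
sis stat                         (per-volume dedup statistics; needs diag privilege)
priv set
aggr show_space -g <aggrname>    (space usage in the aggregate after dedup)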
If your volume is quite full (>90%), you can check your fragmentation ratio by running "reallocate measure". We had a customer with a volume that was filled to 96% over a few months and he had a fragmentation ratio of around 25 (1 is optimal). A short sketch of the check is below. -Michael
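A minimal sketch, assuming a 7-Mode release and a placeholder volume name (check the reallocate man page for your Data ONTAP version):
reallocate measure -o /vol/<volname>     (one-shot measurement instead of a recurring scan)
reallocate status -v /vol/<volname>      (shows the measured optimization value once the scan has finished)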
You certainly don't *need* thin provisioning for A-SIS on LUNs. If the VMDKs inside the VMFS are correctly aligned, the A-SIS savings are exactly the same, especially once your VMFS datastore has been in use for some time (blocks getting allocated and freed again but not zeroed). -Michael
If you're running it locally (on a workstation, for example) you need to communicate with the simulator from another host. Pinging outward should work, though; I don't know what might cause this. Maybe your network card doesn't support promiscuous mode, or the switch discards all packets that it receives from a "forged" MAC address (because that is what the switch thinks it sees). A quick check for the promiscuous-mode theory is sketched below. -Michael
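A minimal sketch, assuming the simulator host is Linux and the bridged interface is eth0 (both are assumptions; adjust for your setup):
ip link set eth0 promisc on     (force promiscuous mode; an error here points at the driver/NIC)
ip link show eth0               (the flags line should now include PROMISC)
If the host is itself a VM, the virtual switch or port group usually also has to allow promiscuous mode.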
We're using RoboCopy for this sort of migration.
robocopy "<sourcepath>\." "<destinationpath>\." /B /S /E /PURGE /COPY:DATSO /XD "System Volume Information"
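For reference, the switches used here: /B runs RoboCopy in backup mode, /S and /E copy subdirectories including empty ones, /PURGE deletes files from the destination that no longer exist in the source, /COPY:DATSO copies data, attributes, timestamps, NTFS security and owner information, and /XD excludes the listed directory.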
-Michael
Hi, I'm working for a NetApp partner and we would like to implement a solution for a customer that could probably be built very nicely on top of the full FPolicy API. Is there any way for partners to obtain the full FPolicy API? Thanks -Michael
I'm looking for the remote FPolicy interface. It seems (from some posts in the communities) that what I need is fprequest.idl and/or fpcompletion.idl. However, I can't find these files in either the NM SDK 4.0 or the Manage ONTAP SDK 3.5.1. Where can I get these APIs? Also, are there any samples available? Regards -Michael
Be warned that the "mbralign" tool from 5.2 currently has a bug that will trash your vmdks if you try to align them. You should use the mbralign binary from the 5.1 release. This is noted on the download page but it can easily be missed. -Michael
If you want dedup, you can't create a single drive on the NetApp that is >1 TB, because the maximum volume size for dedup is 1 TB, as you already know. You can, however, map 4 or 5 LUNs to the Windows machine and use Windows software-based RAID0 (striping) to concatenate these disks into one. Note, however, that to do this you need to convert the disks to dynamic disks, and using dynamic disks with SnapDrive is not really supported. It works, though. I did have some small problems in the past with such a setup: with software iSCSI LUNs, the striped volume would always show up as "offline" after a reboot and had to be set "online" manually (probably a dependency problem between the dynamic disk driver and the iSCSI service). And you most certainly can't take any snapshots with SnapDrive on this volume because it spans multiple LUNs. The striping step is sketched below. -Michael
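A rough diskpart sketch of the striping step, assuming the new LUNs show up as disks 1-4 (placeholder numbers; verify them with "list disk" first):
diskpart
list disk
select disk 1
convert dynamic
select disk 2
convert dynamic
select disk 3
convert dynamic
select disk 4
convert dynamic
create volume stripe disk=1,2,3,4
format fs=ntfs quick
assign letter=E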
You don't need to update; however, 7.3.1+ is strongly recommended for dedup. It has to do with the dedup metadata, which was stored in the volume in 7.2 and therefore took up space in every snapshot you made, drastically reducing the effective dedup rates. Since 7.3.x (I think it is 7.3.1) the dedup metadata is stored in the aggregate and doesn't get caught in any volume snapshots. You need a bit more space in the aggregate, though. I would also suggest upgrading to 7.3.x generally, because 7.2.4L1 was one of the first OS releases for the 2020 (all *L releases are kinda "special") and a lot has changed/improved since then. -Michael
This is a workaround for that bug that we received today from NetApp for one of our customers' cases. I haven't checked yet whether it works, but you might want to try it (remember to back up the relevant registry keys first!)
- Go to the registry in the VM: HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi
- There you will see: Scsi Port 0, Scsi Port 1, Scsi Port 2, etc.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi\Scsi Port 0]
"DMAEnabled"=dword:00000000
"Driver"="atapi"
[HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi\Scsi Port 1]
"DMAEnabled"=dword:00000000
"Driver"="atapi"
- You should delete the "Scsi Port <n>" keys which do NOT contain the subkeys Scsi Bus, Initiator Id and Target Id. First do an export to your hard drive before deleting the keys. Don't do anything to the Scsi Port keys that contain subkeys.
- You will need to restart the SDW (SnapDrive for Windows) service on the VM.
- You should then be able to see all initiators (iSCSI and FC) in SnapDrive (6.2 and 6.3) on the VM with 4.1.
- Note that you will have to re-delete these keys each time the VM is restarted... this is therefore just a workaround, and a fix is being worked on.
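For what it's worth, a minimal sketch of the same steps from the command line, assuming "Scsi Port 2" is one of the ports without the subkeys and using a placeholder for the SnapDrive service name (both are assumptions; check with regedit and services.msc first):
reg query "HKLM\HARDWARE\DEVICEMAP\Scsi\Scsi Port 2" /s     (verify there are no Scsi Bus/Initiator Id/Target Id subkeys)
reg export "HKLM\HARDWARE\DEVICEMAP\Scsi\Scsi Port 2" C:\scsiport2-backup.reg
reg delete "HKLM\HARDWARE\DEVICEMAP\Scsi\Scsi Port 2" /f
net stop "<SnapDrive service name>" & net start "<SnapDrive service name>"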
CHAP is used for authentication. IPsec is used for encryption. Two different things. I guess if you use IPsec then the authentication itself is already encrypted, but since we never had the need for IPsec (VLANs provide enough isolation for us and don't need any CPU resources) I cannot tell you 100%. -Michael
The EFH module is not ESH or LRC. It may be based on one but it is not the same. Also, the text you quoted explicitly mentioned "LRC storage I/O modules" and "ESH storage I/O modules" (emphasis mine). Since the EFH module is not a storage I/O module you should be okay. -Michael
The difference is in the details. If you "just" want to sort by aggregate name and then sort all volumes inside each aggregate by volume name, then the proposed solution is fine. If, however, you want to sort all volumes by volume name, regardless of the aggregate they are on, then you will have to create the object collection first and sort it afterwards. -Michael
nsitps1976 wrote: See the DS4243 Installation and Service Guide - It mentions software based disk ownership on page 4 (also see notes below) - Does this help answer the question????
Well, that's exactly what software-based disk ownership is for: splitting the disks in one shelf between different controllers. The Service Guide only says that the DS4243 doesn't support hardware-based disk ownership (which makes sense). Nothing in there says that you cannot assign disks in the same shelf to different controllers. A short example of such a split is below.
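A minimal sketch of splitting a shelf with software disk ownership, using placeholder disk and controller names (run "disk show -n" first to see the unowned disks; the disk IDs below are just examples):
disk show -n                                    (list disks that are not yet owned)
disk assign 0a.00.0 0a.00.1 0a.00.2 -o filer-A
disk assign 0a.00.3 0a.00.4 0a.00.5 -o filer-B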
chriszurich wrote: That is correct, each controller needs 1 disk shelf.
Do you have any sources for that claim? We have various customers that have split shelves between the two controllers. This has definitely worked with DS14 shelves before, and I have not found anything that suggests that this has changed with the new SAS-based shelves. -Michael
nigelg1965 wrote: Raid group size can't be change after the aggregate is created, so you are stuck with 16.
This is wrong. You can change the RAID group size any time you want: "aggr options <aggrname> raidsize <n>". You can even add disks to raid groups other than the last one created by adding the "-r" option to the "aggr add" command. I wouldn't recommend making the RAID group size bigger than 16 or so unless you have specific reasons to do so; otherwise the rebuild times will get longer and longer very quickly. -Michael
If you're getting coredumps, either post the full error message here (otherwise we can't help) or (recommended) open up a case with your reseller or with NetApp to have them analyze the dumps. -Michael
This requirement has been removed in DOT 8.0.1. You *can* have the root volume on 64-bit aggregates (it was only a warning before anyway, so you could simply ignore it). -Michael
What model of filer is this? Which OnTAP version? The best way for you would probably be to file a support request with your reseller. We debug performance problems like this quite often and there are so many factors that could be involved. Some examples:
* extensive CIFS logging/auditing
* volume fill rates >80-85% (check "df -h")
* volume fragmentation (check "reallocate measure /vol/<volname>")
* maybe it's simply too much I/O for your system
* more disks/shelves could also help improve I/O performance
* SMBv2 features that have vastly improved in newer versions of OnTAP
etc. etc. etc.
There's so much to consider, which makes it very hard to debug via the community forum. -Michael
Is this with ESX 4.1? I have seen similar problems there with paths being lost. I think it's a bug in ESX 4.1 that will be fixed in the next release -Michael
We always use RAID group sizes of 13 or 14 disks. So, for example, if you set the RAID size to 14 and do "aggr add <aggrname> 14", you will get one 13-disk and one 14-disk raid group, which is the best option performance-wise (raid groups should be around the same size). If you have (or want) 2 spares, you could set the RAID size to 13 and do "aggr add <aggrname> 13" to get two 13-disk raid groups. And you can use "aggr add -n" to preview the new layout without actually adding the disks; see the sketch below. Hope that helps, -Michael
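A minimal sketch of the whole sequence, using placeholder names (aggr0 and the disk count are just examples; always run the preview first):
aggr options aggr0 raidsize 14
aggr add aggr0 -n 14     (preview only: shows how the new disks would be laid out)
aggr add aggr0 14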
Depending on the type of data on the LUNs, maybe you can do a client-side online migration? For example, for Oracle systems you could give the DB server a second LUN on the NetApp and then use a third-party product (like "Libelle") to migrate the database from one LUN to the other. Or, for simple file systems, you can probably do a software mirror (Linux RAID, Windows dynamic disks) onto the new LUN and, after everything is in sync, fail the old LUN and remove the software mirror again (one variant of that is sketched below). I'm sure there are other solutions available for other applications. -Michael
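As one variant of the same idea, using LVM instead of an md mirror (this only works if the filesystem already sits on LVM; device and volume group names are placeholders), the data can be moved online with pvmove:
pvcreate /dev/sdc                (the new NetApp LUN)
vgextend vg_data /dev/sdc        (add it to the existing volume group)
pvmove /dev/sdb /dev/sdc         (migrate all extents off the old LUN, online)
vgreduce vg_data /dev/sdb        (remove the old LUN from the volume group)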
The first question you can answer yourself: if the controllers need separate aggregates, of course they also have their own root volumes. And it doesn't matter what type of disks you have; you can assign any disks to any controller. You can split the 12 internal disks into 6 for controller A and 6 for controller B (although I wouldn't recommend it). You can give each controller disks from a DS4243 shelf. Or one controller gets the internal disks and the other the external ones (that's the way it is usually done). -Michael