Microsoft Virtualization Discussions

Questions on Hyper-V and iSCSI with a FAS2020


Please bear with my lack of in-depth knowledge of SAN workings. The story is that I'm going to be migrating from a StoreVault S500 (NetApp's small-business line) to a FAS2020 in the coming weeks. On the StoreVault, I currently have a Hyper-V server running 6 VMs on a LUN. After skimming through some of these "best practices" docs from NetApp for virtualization, I'm getting overwhelmed. With the StoreVault, I didn't really have to worry about all this aggregate and volume stuff. There was 1 aggregate and 1 volume, and then however many LUNs you wanted to create. Keep in mind that I'm a pretty small shop (5 physical servers, 6 VMs, Exchange with ~50 mailboxes). Questions I'm wondering about in moving over to the FAS2020...

  1. Do I need to worry about aggregates and volumes? I don't really know much about the differences or what to do with them. I'll also be moving over my Exchange LUNs (DB LUN and log LUN) with this migration, as well as a ~350GB CIFS share. Do I create multiple aggregates or volumes for certain scenarios or something? If it's a performance thing, would it really matter for my small implementation?
  2. Do I *need* SnapDrive to do LUNs with the FAS2020? I did not purchase SnapDrive with the FAS2020. I didn't even think about it since SnapDrive came free with my StoreVault stuff, but some of the best practices started talking about installing it for LUN stuff.

Are there any other things anybody can think of that may be drastically different moving over to this new system? Thanks!





Hi Robert - Chaffie McKenna, one of the authors of the best practices you've been reviewing, plans to post a response when she has a minute. Unfortunately she's been absolutely swamped this week. Sorry for the delay!

Your questions will be perfect to kick off a live Q&A with Chaffie and other reference architects during a Hyper-V and NetApp Webcast on Thursday, April 9. NetApp and Microsoft experts will review our joint solution while Dan Morris, a systems engineer from a Canadian customs and freight broker, will discuss his upgrade to Hyper-V. Dan's team is using a FAS2050; a success story with details is available at


First of all, we cover best practices for implementing NetApp storage for Hyper-V in NetApp Technical Report TR-3702, “NetApp and Microsoft Virtualization Storage Best Practices,” which can be found in the Library.  In addition, we have also published TR-3733, “Microsoft Hyper-V on NetApp Implementation Guide,” which can also be found in the Library.

You can probably find the majority of the answers you are looking for in one or both of these documents.  The Implementation Guide will walk you through, step-by-step, everything you need to do to configure your NetApp storage system, including the Aggregates, Volumes, and LUNs.  It will then walk you through the installation of Hyper-V and System Center Virtual Machine Manager (SCVMM), including the creation of Virtual Machines (VMs) and attaching VHDs to those VMs.

Now to answer your questions:

1.  Yes, you need to worry about aggregates and volumes.  An aggregate is essentially a grouping of the actual physical disks into a logical storage pool, equal to the total capacity of the disks selected.  We suggest RAID-DP and a RAID group size of 16, which should be the default selection.  When it comes to disk selection, we want the disks to be the same type (SCSI or SATA, but not a mix) and preferably the same size.  For the volume, NetApp strongly recommends a Flexible Volume (FlexVol) with a 0% Snapshot reserve, and with the space guarantee set according to whether or not you want to thin provision your new storage.  For more information on thin provisioning with NetApp storage, see TR-3563, “NetApp Thin Provisioning: Improving Storage Utilization and Reducing TCO”.  Sounds familiar, right?  That’s because we use storage virtualization to accomplish thin provisioning, so the message for server virtualization is the same as for storage virtualization: we improve utilization through consolidation, which helps shave costs off your storage implementation.
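For a concrete picture, here is a minimal sketch of those steps at the Data ONTAP 7-mode console. The aggregate/volume names, disk count, and sizes are illustrative, not from this thread; check your ONTAP version's documentation for exact syntax:

```
# Create an aggregate from 5 disks using RAID-DP, RAID group size 16
aggr create aggr1 -t raid_dp -r 16 5

# Create a thin-provisioned FlexVol (no space guarantee) on that aggregate
vol create vm_vol -s none aggr1 300g

# Set the Snapshot reserve to 0% as recommended above
snap reserve vm_vol 0
```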

A)  I would recommend creating one aggregate for your Exchange database and one aggregate for your Exchange logs.  The Exchange workload you need to support determines how many disks to add to each Exchange aggregate.  For more information on aggregate sizing, see TR-3683, “Microsoft Exchange 2007 on VMware Infrastructure 3 and NetApp iSCSI Storage”; its storage-sizing recommendations apply to Hyper-V the same as to VMware ESX.

i.  Create a separate NetApp FlexVol within the Exchange DB aggregate for each Exchange Storage Group, dividing the total users supported on the Exchange server equally across the Storage Groups.

ii. We have a NetApp Exchange Sizer available today, which will tell you how much storage you require to support your environment.  All you need to do is contact your SE or Account Exec; they can gather the necessary information about your environment and obtain a recommendation from the NetApp Exchange Sizer.

B) With all of the disks left over after creating the two Exchange Aggregates, I would create an aggregate for all the virtualization related storage (where you will keep the VM Configuration and Virtual Disk Files).  From there create at least one FlexVol following the best practices outlined in TR-3702 (see first paragraph above).

i.  The first FlexVol should store the VHDs for all the Virtual Machines in your infrastructure.  Remember Microsoft’s best practice recommendation of “One VM per LUN” (see TR-3702).  This practice will go away once R2 is released, so it is the customer’s decision (case by case) whether to follow it.  If you plan to upgrade to R2 as soon as it is released and do not plan to rely on the current Quick Migration functionality (which requires downtime for migration), then as a customer I might choose to ignore the recommendation.  However, if you are not planning to upgrade to R2 anytime soon, or plan to leverage Quick Migration as much as you can, then I would follow Microsoft’s best practice recommendation of “One VM per LUN”.  Note that it DOES NOT say “One VHD per LUN”; you can have as many VHDs as you wish, so long as they are all associated with only one VM.

ii. A second FlexVol may be created if you want to use pass-through disks or directly attach storage to VMs using the Microsoft iSCSI Software Initiator, regardless of whether or not you intend to utilize any of the NetApp SnapManager family of products.

2. No, you don’t need NetApp SnapDrive for Windows to attach NetApp storage to your Hyper-V hosts.  There are many advantages to having SnapDrive installed, but it is only a recommendation, not a requirement, at this time.  However, if you plan to use HyperVIBE or any of the SnapManager family of products, SnapDrive must be installed on the same system as those products for them to work correctly, as they leverage SnapDrive for many functions.

A) The advantages of having SnapDrive installed include increased storage utilization, dynamic volume management, business continuance, improved reliability and availability, faster backup and restore, lower total cost of ownership, and simpler storage management.  It greatly simplifies creating and connecting new storage to the Hyper-V hosts, especially if they are clustered, and it lets you manage the shared storage as if it were directly connected to the server.

B) If you do not have SnapDrive installed, you can use the processes for provisioning shared storage to servers with Server Core installations, as found in TR-3733.  Specifically, follow the instructions in the section on page 64 of the current version of that document.

i.  For a comparison of how much easier this is with SnapDrive installed, see the corresponding instructions in the current version of TR-3733.

I don't expect anything else to be drastically different from your StoreVault system.  Be sure you have all the correct licenses installed (through FilerView) for your NetApp array, follow the best practices outlined in the NetApp Technical Reports found in the Library, and you should be good to go.  If you run into problems, don't hesitate to post here again and I'll try to get to it sooner next time around.  Sorry for the late reply!

Good Luck!

Chaffie McKenna, NetApp


Wow, that's a lot of info to process. Thanks very much for the detailed response. However, let me be a little more specific in my questions now that I've got this FAS in-house and turned on. Pardon the "easy" questions. Maybe you can create a newbie/SMB guide based on me.

  1. I only have 6 disks in my FAS2020, running 5 live with 1 spare. When I turned this guy on for the first time, an aggregate was already created using the 5 disks, set to RAID-DP, with 1 spare. There is a volume already created (vol0) for the root volume. I understand your recommendation to create separate aggregates for the Exchange DB, Exchange logs, and Hyper-V. However, with only a 5-disk/1-spare setup, that doesn't seem (in my limited knowledge) like it would be efficient. First, to create an aggregate, it says I need two disks. Therefore, I can only create a maximum of 3 aggregates (that is, if I do NOT use a spare). Second, it seems that aggregates using only two disks wouldn't provide much efficiency, as that wouldn't be much of a spread of data and IOPS, right? Maybe I'm missing something. This all leads back to my original question: since I'm such a small shop, should I just have one main aggregate and then possibly multiple volumes for different applications? Please keep in mind that I also need a CIFS volume somewhere in here in addition to these three LUNs.
  2. Should this default root volume (vol0) be reserved only for root stuff? The default size for it is 268GB, but only 193MB is used. Surely I don't need to lose this much space, right? If it should remain root-only, can I resize it down? To what size?


Hi Robert, thought I would jump in.

1.  It is perfectly valid to run your FAS2020 with a single aggregate, especially in a smaller environment like yours.  While not ideal for high-I/O environments, the NetApp best practices guide for Exchange (TR-3578) specifically mentions this as acceptable for smaller environments.

2.  You are correct that other data should not go in vol0, and 268GB is definitely larger than you need.  The Data ONTAP admin guide for 7.3.1 recommends a 10GB minimum for a 2020.
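If it helps, shrinking the root volume is a one-liner at the 7-mode console; the 20g value here is just an example comfortably above the documented minimum:

```
# Resize vol0 down (must stay at or above the platform's recommended minimum)
vol size vol0 20g
```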

Hope this helps.

Mike Slisinger



Thanks for the replies. I've finally got this setup how I want it (I think) and am about to migrate over one night this week. I've made one main aggregate and multiple volumes (Exchange DB, Exchange logs, Hyper-V, vol0 (root), and a Shares vol).

I have another quick question: can I successfully use “vol copy” to copy my shares volume from the StoreVault to the FAS2020? I’ve looked at the command documentation and didn’t see any restrictions on copying between different filers. Also, I’ve checked the available commands from my StoreVault command line, and vol copy is there. I’ve also seen on the community forums some people using “ndmpcopy” to copy volumes. Is one better than the other? Which is faster and requires less downtime? I'm not too interested in carrying snapshots over, although it'd be nice.

I’ll probably move my LUNs/LUN volumes manually so that shouldn’t be an issue.




While I've never tested it, I see no reason why vol copy wouldn't work as long as your 2020 is not on an earlier major release than your StoreVault.

I would expect it to be faster than ndmpcopy and it will bring over all of your snapshots.  If you want to minimize downtime, SnapMirror would do that, but if your volume is small enough and you can handle the downtime, vol copy should work just fine.
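For reference, a vol copy run of the kind discussed here looks roughly like this at the console; the hostname, aggregate, volume names, and size are placeholders, not values from this thread:

```
# On the FAS2020: create the destination volume and take it offline first
vol create shares aggr0 400g
vol offline shares

# Start the copy, pulling from the StoreVault (-S also carries snapshots over)
vol copy start -S storevault:shares shares
```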


I need some help getting this to work. I've tried it and get a "Permission Denied" error. I've read that this is because each of the filers has to have the other listed in its /etc/hosts.equiv file. I've tried it with each listed in the other's regular /etc/hosts file, but it still gives "Permission Denied". Since /etc is shared by default on the FAS2020, I was able to edit the /etc/hosts.equiv file to include the StoreVault. However, I cannot get to /etc/hosts.equiv on the StoreVault. Neither the root drive nor /etc is shared. I've tried creating a share for /etc, but it doesn't allow me inside. I've tried creating a share for root (/vol/vol0) and that worked, but when I tried to enter the etc directory it still would not let me, even after granting myself access. So, the question is...

1) How can I get to the /etc/hosts.equiv on the StoreVault in order to add the FAS2020? Is there any way to get access to "vi" via the SSH command line?

2) Once done, do I need to reboot the StoreVault in order for it to take effect? If so, I'll have to stay yet another time after hours to take it down.


OK, I answered #1 myself. To edit files on the StoreVault, you just have to FTP into it. I FTP'd in with FileZilla and it let me inside /etc. I was able to pull off the hosts.equiv file, add the FAS info, and put it back. I'm still getting "Permission Denied", but I'm guessing that maybe it hasn't reloaded that hosts file. I'm guessing I either have to reboot or bounce the network.


Well, I've followed all the requirements and I still get "Permission Denied" while trying to do a "vol copy" from my StoreVault S500 to my new FAS2020. I've added both filers to each other's /etc/hosts.equiv and rebooted. I've followed all the requirements such as destination vol must be same/bigger size, must be offline, must be same/bigger max files, etc. I continue to get "Permission Denied" but no other explanation. I've got my scheduled migration tonight and I'd really rather not do it the old fashioned way of just dragging the data between filers. Can anyone help? If not, I might open a case to see if it is even possible.


Well, as always, I figure it out just after posting. I've worked on this for days and finally posted and then figured it out. Oh well.

My problem, for those interested, was that I was unknowingly setting up the /etc/hosts.equiv file incorrectly. I was unaware that this could be done via FilerView under the Security->Manage Rsh Access section. I had been editing the hosts.equiv files manually, setting them up just like the main /etc/hosts files. After I found it in FilerView, I saw that the hosts.equiv file is set by specifying an IP and a connecting user name (not an IP and hostname like the regular hosts file). In addition, editing and applying these changes via FilerView does NOT require a reboot. I edited hosts.equiv on both filers via FilerView, applied the changes, and VOILA! It worked!
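For anyone hitting the same wall, the working hosts.equiv entries end up looking something like this (the IP and user below are illustrative, not from the thread):

```
# /etc/hosts.equiv on the FAS2020: the other filer's IP plus the connecting user,
# NOT "IP hostname" as in the regular /etc/hosts file
192.168.1.50  root
```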


Robert, congrats on figuring this out on your own! Is your Hyper-V environment up and running?


Well, the results are in...and they're good. The migration from my StoreVault S500 to the FAS2020 went down last night and lasted about 9.5 hours for 710GB of data including CIFS shares, Exchange DB/logs, and Hyper-V VMs and environment. For those interested in doing the same or similar, here was my process. Also, for the record, my new setup is a FAS2020 with (6) 500GB disks, one aggregate (5 disks, RAID-DP, one spare), a "shares" volume for CIFS, an Exchange DB volume, an Exchange Logs volume, and a Virtual volume for Hyper-V. All this is done without SnapDrive just using FilerView.
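As a rough sanity check on that 9.5-hour figure, here is a small (hypothetical) shell helper that converts a volume size and an observed throughput into an estimated transfer time; the 21MB/s input is simply a blend of the two vol copy rates reported below, not a measured number:

```shell
# Estimate transfer time in hours given size in GB and throughput in MB/s.
estimate_hours() {
  gb=$1; mbps=$2
  # GB -> MB, divide by MB/s to get seconds, then convert to hours
  awk -v gb="$gb" -v mbps="$mbps" 'BEGIN { printf "%.1f\n", gb * 1024 / mbps / 3600 }'
}

estimate_hours 710 21   # ~9.6 hours, in line with the observed 9.5
```

Running the same helper against the 45.8MB/s CIFS rate shows why the slower Hyper-V volume dominated the total window.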

CIFS Shares:

  1. Disable CIFS on StoreVault and FAS.
  2. Use "vol copy -S shares newfilername:shares" to migrate "shares" volume.
    • Prerequisites for "vol copy" were (a) volumes the same size (or destination bigger), (b) source vol online, destination vol offline, (c) each filer listed as a trusted host of the other in /etc/hosts.equiv (set via FilerView->Security->Manage Rsh Access->Insert; no reboot needed when done through FilerView).
    • This ran at about 45.8MB/s and ran perfectly.
    • Permissions and snapshots all transferred successfully.
  3. Setup/ensure shares on FAS for "shares" volume.
  4. Enable CIFS on FAS.
  5. Edit logon script and test mappings.


  1. Install Windows Host Utilities and reboot.
  2. Old LUNs were E:\ (DB) and L:\ (logs). Mapped new LUNs as F:\ and G:\.
  3. Take down all stores and stop all Exchange-related services.
  4. For the record, I had major trouble with using space reservations on the Exchange volumes. They kept getting full even though I was nowhere near the max capacity. After I turned off space reservations (since I don't have SnapManager for Exchange and thereby will not do snapshots), all went perfectly well.
  5. Used "xcopy E:\ F:\ /O /X /E /H /K" to copy all DB-related material on E to F.
  6. Used "xcopy L:\ G:\ /O /X /E /H /K" to copy all log-related material on L to G.
  7. Verified xcopy transferred files and permissions correctly (which it did).
  8. In iSCSI Initiator, removed StoreVault target (and persistent mappings/target) to unmap E:\ and L:\.
  9. In Disk Management, changed drives letters F:\ to E:\ and G:\ to L:\.
  10. Reboot server and ensure stores come up correctly (which they did).
  11. Test external email coming in, internal Outlook access, and external OWA access. (all fine)


  1. "Save" all VMs. You could also shutdown all VMs, but save worked just fine. Do NOT "Pause" them.
  2. Install Windows Host Utilities and reboot.
  3. "Stop" Hyper-V services.
  4. Use "vol copy virtual newfilername:virtual" to migrate "virtual" volume.
    • Prerequisites for "vol copy" were (a) volumes the same size (or destination bigger), (b) source vol online, destination vol offline, (c) each filer listed as a trusted host of the other in /etc/hosts.equiv (set via FilerView->Security->Manage Rsh Access->Insert; no reboot needed when done through FilerView).
    • This ran perfectly, but for some reason much slower than the CIFS volume. It ran at about 18.5MB/s.
    • Permissions and files all transferred successfully. I had no snapshots to transfer (hence the absence of -S flag on vol copy).
  5. Old LUN was V:\. New LUN mapped as W:\.
  6. In iSCSI Initiator, removed StoreVault target (and persistent mappings/target) to unmap V:\.
  7. In Disk Management, changed drive letter W:\ to V:\.
  8. Reboot server and ensure all Hyper-V services start correctly. (which they did)
  9. "Start" all VMs and ensure proper functionality. (which they did)

I have not yet turned on a dedupe schedule, but plan to once I take the training wheels off. Enjoy!
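When that time comes, the 7-mode deduplication (A-SIS) commands are short; the volume path and schedule here are illustrative, so check your ONTAP version's docs for exact sis syntax:

```
# Enable A-SIS deduplication on the shares volume
sis on /vol/shares

# Scan the data that already exists on the volume
sis start -s /vol/shares

# Schedule nightly runs at 11 PM, every day of the week
sis config -s sun-sat@23 /vol/shares
```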


So far how do you compare the speed and performance of the FAS2020 to the S500?  What was the configuration of the S500? Looking for S500 replacement options when my support runs out.


Hey Nicholas. Long time no talk. I'm very pleased with the FAS2020 so far. After the StoreVault incident, I swore I'd leave but they made an offer I couldn't refuse. Also, I just didn't find much else out there on this scale at the deal they were offering. So, we pulled the trigger.

I don't have any hard numbers for you, but the performance seems great so far. My only evidence is that it "feels snappier". Ha. I am pleased to now have jumbo frame capability on the NICs. I've set one of the two NICs on a different subnet and VLAN for iSCSI traffic only, using jumbo frames, for Exchange and Hyper-V. All is good. One good thing about sticking with NetApp is that migration was easy with the "vol copy" command. It copies everything as-is and even migrates the snapshots.

The configuration I had was an S500 with 12x 250GB disks running the same workload as I am now: CIFS shares, Exchange LUNs, and a Hyper-V LUN. However, I'm now on 6x 500GB with the FAS2020, so I have somewhat less space but room to grow, with 6 bays empty.


Congratulations and thank you for the detailed post.  I'm sure other members will find it very useful.  You mentioned that you used FilerView for the whole process.  While customers love FilerView, many are finding NetApp System Manager much easier to use.  My favorite feature is that it allows you to manage multiple controllers in one view.  Steve blogged about the release in April.  It is now available for download on the NetApp NOW site.


Thanks for the heads-up on System Manager. I've been aware of this product for some time now via an NDA that I'm under, but was unaware that it had launched. I'll give it a whirl.