If the NT workstation is joined to the same domain as the filer, then the system account equates to the computer account for the workstation in the domain. Suppose I have a domain, mydomain, and a workstation joined to the domain, myworkstation. Add mydomain\myworkstation$ to the permissions on the share and volume. If the filer and/or workstation are not domain joined, you're out of luck.
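To illustrate with a rough sketch (the share name "myshare" is a placeholder of mine, and this assumes the 7-Mode cifs access console command): you could grant the computer account to the share ACL from the filer console, then adjust the NTFS permissions on the files themselves from any domain-joined Windows client using the normal security dialog.

cifs access myshare mydomain\myworkstation$ Full Control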
Is your intent to use shared disk connected to your cluster as either a drive letter or mount point, or is it to use Cluster Shared Volumes for Hyper-V? John
"There's no such thing as 2TB SAS. These are SATA drives !!" Why you are, absolutely right vmsjaak13. In getting carried away in the nuances of where the space goes, I overlooked the obvious that was starting me in the face. The main difference, other than the interface and the performance in terms of IOPS for random IO, between SAS and SATA is that SAS drives are formatted at 520 bytes per sector and SATA drives are formatted at 512 bytes per sector. For SAS, that checksum information is stored in the extra 8 bytes per sector. For SATA the checksum information has to be stored somewhere else, which uses blocks. For the same size spindle, you'll have about 10% less usable space per data spindle. The max spindles per RAID DP RAID group for SATA is 16 in a 32 bit aggregate, however I believe it goes up to 20 or so in a 64 bit aggregate. This is clearly a 64 bit aggregate because it is beyound the max size for a 32 bit aggregate. JohnFul
Let's step through it. One thing not mentioned is the Data ONTAP version and whether this is a 32-bit or 64-bit aggregate.

Start with the drive. 2TB is in base 10, as used by disk drive suppliers. The first step is to convert from base 10 to base 2, and this is the same across all storage vendors: 2,000,000,000,000 bytes = 1.819 TB. We've lost nearly 10% off the top.

Next comes parity overhead. Since you chose the default RAID group size of 16, you get one RAID group of 16 drives (the other 8 do not equal a whole RAID group, although you can add drives later to the aggregate to fill a partial group). Given 24 drives, I would have personally gone with a number like 22; the max RAID group size for SAS on RAID-DP is 28. With a RAID group size of 16, you have 2 parity spindles and 14 data spindles. With 22, you have 2 parity spindles and 20 data spindles. So it's either 25.466 TB or 36.38 TB, with either eight or two spares. Since both of these are above 16TB, I'll assume large aggregates in ONTAP 8.0 or 8.0.1 7-Mode.

Since these are SAS drives, they are formatted with 520 bytes per sector; the extra 8 bytes in each sector are used to store checksum data. If these were SATA, the sector size would be 512 bytes and the checksums would take additional blocks. They're not SATA, they're SAS, so no loss here.

Another thing that happens is that drives are sourced from more than one vendor. Due to slight differences in geometry, and hence in the number of sectors, drives are typically "right sized" so that they are interchangeable across the vendors from which they are sourced. This typically consumes about 2% of the space, and that's across the storage industry. 25.466 TB becomes ~24.95 TB, and 36.38 TB becomes ~35.65 TB.

After that, we reserve 10% of the space for WAFL to do its thing. You pay 10% of the space to optimize write performance. How much does that buy you? Check out http://blogs.netapp.com/efficiency/2011/02/flash-cache-doesnt-cache-writes-why.html where I present the results of 100% random write workload tests over time. That leaves you with 22.45 TB or 32.085 TB.

Last but not least, from the usable space there is a default 5% aggregate reserve. If you are not using MetroCluster or synchronous SnapMirror, then you can remove the reserve to recoup that 5% (see the link). With the aggregate reserve, 22.45 TB becomes 21.325 TB, which is what you obtained, and 32.085 TB becomes roughly 30.5 TB.

In light of this, I'd recommend using a RAID group size of 22 and removing the aggregate reserve (unless you are using MetroCluster or synchronous SnapMirror). This would give you 32.085 TB in an aggregate consisting of 22 spindles, plus two hot spares, for a total of 24 drives.

I hope that helps explain where the space goes. JohnFul
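To put the same arithmetic in one place, here's a quick PowerShell sketch of the walk-through above (the percentages are the approximations described in the post, not exact Data ONTAP internals):

# Rough space-accounting sketch for 2TB SAS drives in a 22-spindle RAID-DP group
$rawBytes     = 2e12                          # "2TB" in marketing (base-10) bytes
$tbBase2      = $rawBytes / [math]::Pow(2,40) # ~1.819 TB after base-10 -> base-2 conversion
$dataSpindles = 20                            # RAID group of 22 = 20 data + 2 parity (RAID-DP)
$beforeRights = $tbBase2 * $dataSpindles      # ~36.38 TB
$afterRights  = $beforeRights * 0.98          # ~2% lost to right-sizing
$afterWafl    = $afterRights * 0.90           # 10% WAFL reserve
$afterAggrRes = $afterWafl * 0.95             # default 5% aggregate reserve (removable)
"{0:N2} TB with the aggregate reserve, {1:N2} TB without" -f $afterAggrRes, $afterWafl
# roughly the ~30.5 TB / 32.085 TB figures discussed above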
CIFS over WAN ... is probably going to be a bit painful unless you are using a client and an ONTAP version that supports SMB 2.0 durable handles. Off the top of my head, that's 7.3.1 / 8.0 7-Mode or later. Then you'd want to enable the durable handles et al. You might want to look into the following options that apply to SMB 2.0 (option, description, default, version introduced):

cifs.smb2.client.enable - Enables or disables the storage system's SMB 2.0 protocol client capability. (default: off; 7.3.1)
cifs.smb2.enable - Enables or disables the SMB 2.0 protocol. (default: off; 7.3.1)
cifs.smb2.durable_handle.enable - Enables or disables SMB 2.0 durable handles. (default: on; 7.3.1)
cifs.smb2.durable_handle.timeout - Specifies the SMB 2.0 durable handle timeout value. (default: 16m; 7.3.1)
cifs.smb2.signing.required - Enables or disables the requirement that clients sign SMB 2.0 messages. (default: off; 7.3.1)

John
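For example, turning the protocol and durable handles on from the console would look something like this (a sketch; verify the option names and defaults against your ONTAP release first):

options cifs.smb2.enable on
options cifs.smb2.client.enable on
options cifs.smb2.durable_handle.enable on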
You'll find the answer here http://now.netapp.com/NOW/knowledge/docs/snapdrive/relunix411/html/software/install_solaris/overview/concept/c_sd_ovw_managing-LVM-entities.html John
I used the Data ONTAP PowerShell Toolkit to make a short script and test setting the LUN comment attribute in increments of 64 characters, from 64 up to 32768 characters long. The answer is that it succeeded, so the limit is at least 32768 characters. The script I used to test was:

Import-Module DataONTAP
Connect-NaController FAS3040A
$TestPattern = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"   # 64 characters
$TestString = ""
Do {
    $TestString = $TestString + $TestPattern
    Set-NaLunComment /vol/testvol/qt_test/test.lun $TestString
    $TestResult = Get-NaLunComment /vol/testvol/qt_test/test.lun
    $TestResult.Length
} Until ($TestResult.Length -eq 32768)

You could increase the test value of 32768 in the Do-Until loop to something higher and continue testing if you're really curious...

J
When a disk is presented to multiple hosts, access to the disk by a given host is controlled through a SCSI reservation. Who owns the reservation? You may want to open a case with NGS to troubleshoot. J
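If you have the Data ONTAP PowerShell Toolkit handy, a quick way to check is something like this (a sketch; the LUN path is a placeholder for yours):

# Check for a SCSI reservation on the LUN and, if one exists, see who holds it
if (Confirm-NaLunHasScsiReservation /vol/vol1/shared.lun) {
    Get-NaLunPersistentReservation /vol/vol1/shared.lun
}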
If you initialize the disk, you are in effect partitioning it. That's a destructive process for any data that already resides on the disk. Are you sure a partition already exists? J
If a partition already exists, then you need to give it a drive letter or a mount point. If no partition exists, you will need to create one and format the disk first. From your comment, it sounds like a partition already exists. J
Diskpart is a utility in Windows 2008. Starting in Windows 2008, the default SAN policy leaves newly presented shared-bus disks offline. This is meant to protect shared disks from becoming corrupted; the downside is that SAN-presented disks are offline by default. To change the policy:
1. Open a command prompt with administrative privilege.
2. Type diskpart.
3. At the prompt, type san policy=OnlineAll and hit Enter.
4. Type exit and hit Enter.
5. Close the command prompt window.
Now go into Disk Management and try again. J
Confirm-NaLunHasScsiReservation is returning false. The writes may be small enough that you're not catching the reservation before it is released. J
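One thing you could try is polling in a tight loop so you stand a better chance of catching a short-lived reservation (a sketch; the LUN path and polling interval are placeholders):

# Poll until a reservation is seen, then report who holds it
while (-not (Confirm-NaLunHasScsiReservation /vol/vol2/lun2)) {
    Start-Sleep -Milliseconds 100   # tighten or loosen the polling interval as needed
}
Get-NaLunPersistentReservation /vol/vol2/lun2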
You will be able to do this in the Data ONTAP PowerShell Toolkit v1.3. It will include an Invoke-NaSsh cmdlet so that, from within your PowerShell scripts, you can execute a console command for which no specific cmdlet exists. John
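Once it ships, usage should look roughly like this (a sketch based on the description above; the exact parameter names may differ in the released cmdlet, and the aggregate name is a placeholder):

Connect-NaController FAS3040A
# Run an arbitrary console command over SSH and capture the output
Invoke-NaSsh "aggr show_space -h aggr1"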
Well, actually we can do a lot more than one would think. First, we need to find which specific initiators are mapped to the LUN, and then find who holds any persistent reservation. Since this is FCP we are dealing with here and not iSCSI, the reservation key is simply the WWNN of the HBA on the host holding the reservation. The ZAPI lun-initiator-list-map-info gives us information about which initiators are mapped to the LUN, and lun-get-persistent-reservation-info tells us who holds the reservation.

Using the Data ONTAP PowerShell Toolkit, we first find out which initiators are mapped to our LUN. To see all the mappings by LUN:

Get-NaLun /vol/vol2/lun2 | Get-NaLunMap | ForEach-Object {$_.Initiators} | Get-NaLunMapByInitiator

Now, to see if there is a persistent reservation on the LUN:

Confirm-NaLunHasScsiReservation /vol/vol2/lun2

where /vol/vol2/lun2 is the path to the LUN we are interested in. Assuming there is a reservation (a True result), we can then get the reservation:

Get-NaLunPersistentReservation /vol/vol2/lun2

Since this is FCP, the key is the WWNN of the HBA holding the reservation. With the knowledge of the specific HBA that has an exclusive reservation to the LUN, we can then collect IO data on the LUN with Get-NaLunStatistics. If you need to clear the counters before collecting data, use Clear-NaLunStatistics.

John
You'll need to increase the SnapMirror window size, for one; I believe the formula is in the DPG. Then it's a matter of bandwidth. In a recent engagement, with multiple SnapMirror sessions (multiple volumes in multiple Protection Manager jobs) running simultaneously across multiple 1GbE NICs, through Riverbed compression, over dual 600 Mb/s WAN connections with 60ms latency and a distance of 2500 miles, I was able to get pretty close to that range with 3170s. The limitation was bandwidth; the filers were not getting pounded. J
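The window size formula is essentially a bandwidth-delay product; here's a rough sketch of the arithmetic for a link like the one above (check the DPG for the exact formula and setting name ONTAP expects):

# Bandwidth-delay product: bytes in flight needed to keep a 600 Mb/s, 60ms-RTT link full
$linkMbps    = 600
$rttMs       = 60
$windowBytes = ($linkMbps * 1e6 / 8) * ($rttMs / 1000)
"{0:N0} bytes (~{1:N1} MB)" -f $windowBytes, ($windowBytes / 1MB)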
When you enable FlexScale for the Flash Cache, you cache metadata and normal data blocks by default. That's the mix you want in a Hyper-V environment. Caching lopri (low-priority) blocks is disabled by default; you wouldn't enable it unless you had a need to cache long read chains (sequential reads). If you're running multiple VMs, you won't see long read chains; the read mix will become more random. JohnFul
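For reference, the defaults described above correspond to console options along these lines (a sketch; confirm the option names for your Data ONTAP release):

options flexscale.enable on
options flexscale.normal_data_blocks on
options flexscale.lopri_blocks off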
SnapMirror ... is going to replicate every snapshot in the volume. SnapVault is going to replicate the contents of the qtree. You should be able to do this via PowerShell: use Start-NaSnapvaultSecTransfer with the -PrimarySnapshot parameter. You should check out the Data ONTAP PowerShell Toolkit v1.2 if you have not already done so: http://communities.netapp.com/community/interfaces_and_tools/data_ontap_powershell_toolkit JohnFul
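A minimal sketch of what that call might look like (the secondary path and snapshot name are placeholders of mine, and I'm assuming the parameter that takes the destination qtree is -SecondaryPath; check the cmdlet help for the exact parameter set):

# Pull the named primary snapshot into the SnapVault secondary qtree
Start-NaSnapvaultSecTransfer -SecondaryPath /vol/sv_dest/qt_data -PrimarySnapshot nightly.0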
Hi Baber & Eugene, "There are many strategies for implementing thin provisioning of LUNs in a SAN environment. I'm fond of turning off all space reservations - set the space guarantee of the volume containing the LUNs to 'none'. I turn on auto grow and snap auto delete with try-first set for auto grow. (I prefer not to delete snapshots - they're taken for a reason!) But there's another strategy you may wish to consider for optimum space utilization and cache (PAM) performance. Don't make 50 copies of these LUNs (VMs). Make 50 clones. The space savings for cloning virtual machines is huge. ( 50 X 20G = 1TB , 50 clones of 20G = ~40G ? ). But there's another advantage in cache performance. When you're using clones Data ONTAP only needs to cache the shared blocks once, instead of cache holding 50 copies of the same blocks. Be sure to keep your data sets seperate from your OS images to segregate your rate of change in snapshots" I like to do both of these, with a twist. First, even if I am not planning on overcommitting the storage, I like to remove the reservations at set the guarantee "none". I don't use autogrow (I rely on monitoring to decide if/when I need to grow) and I do use snapshot autodelete. I do put all my luns in qtrees, then remove the lun space reservation and set a threshold quota on the qtree. The threshold quota is a soft quota that also sends an SNMP trap to the monitoring tool of my choice. I do try to think out the layout: I put the OS on a VHD, the majority of the page file on a seperate VHD, and application data on one or more additional VHDs. I group related OS VHDs on a lun, then I can clone that LUN in the volume to create a like layout. I turn dedupe on for that "OS" volume. I put the page file VHDs grouped on LUNs that reside in a seperate volume that is not deduped or overcommitted, but still has the same qtree/quota monitoring. I find that an easy way to figure out what optimal page file sizes are. I create seperate volumes for app data with groupng and dedupe options depending on the nature of the data. Here's an example I started on the other day: The volume Hyperv1 is 500GB and has no reserve or guarantee. I have enabled dedupe on the volume. Inside I have two qtrees, each containing a LUN. I have a 100GB thin LUN that contains a single VHD with a WIndows 7 host at the moment. I also have a 200GB LUN that contains 3 VHDs; a windows 2008 R2 domain controller, a Windows 2008 R2/Exchange 2010 Hub/Cas and a Windows 2008 R2/Exchange 2010 mailbox server. For the three windows servers, I started with a sysprep image then just cloned the lun and did unattended/automated installs and then patched up all the hotfixes and service packs. I can look in with the Data ONTAP Powershell Toolkit v1.2 and have great visibility to what's actually happening to my space: Here I see my 500GB volume with 463.4GB of space available. PS C:\> get-navol | ? {$_.name -like "*hyperv1*"} Name State TotalSize Used Available Dedupe FilesUsed FilesTotal Aggregate ---- ----- --------- ---- --------- ------ --------- ---------- --------- Hyperv1 online 500.0 GB 7% 463.4 GB True 125 16M aggr1 And I see the dedupe ratio (I haven't boken out the page files or app data into seperate LUNs yet. 
When I do, the dedupe ratio on the base OS will go up substantially):

PS C:\> get-navolsis Hyperv1

LastOperationBegin : Sun Nov 7 01:29:24 GMT 2010
LastOperationEnd   : Sun Nov 7 01:44:59 GMT 2010
LastOperationError :
LastOperationSize  : 53206102016
PercentageSaved    : 26
Progress           : idle for 00:02:58
Schedule           : -
SizeSaved          : 14107312128
SizeShared         : 7712165888
State              : enabled
Status             : idle
Type               : regular

Here are my LUNs:

PS C:\> get-nalun

Path                       TotalSize  Protocol  Online  Mapped  Thin  Comment
----                       ---------  --------  ------  ------  ----  -------
/vol/Hyperv1/VM1/Lun1.lun  200.0 GB   hyper_v   True    True    True
/vol/Hyperv1/VM2/Lun2.lun  100.0 GB   hyper_v   True    True    True

Here's the interesting part: because of the quotas on the qtrees, I also have visibility into the LUNs:

PS C:\> get-naquotareport

Volume   Qtree  Type  Disk Limit  Disk Used  File Limit  Files Used
------   -----  ----  ----------  ---------  ----------  ----------
Hyperv1  VM1    tree              45.5 GB                8
Hyperv1  VM2    tree              4.2 GB                 8

A volume contains 0.5GB of metadata. In addition to that, my 300GB of LUNs are only using a combined 49.7GB, for a total of 50.2GB used in the volume. Since I have dedupe enabled, and am getting 26%, from the volume I am only consuming 36.6GB.

I'm working on a function to do a daily email report in a better format, and have been experimenting with those SNMP traps. I send the traps to SCOM/Appliance Watch Pro 2.1, where I can take actions on them when my threshold is reached. I set the threshold at 75% of the declared size so that I have time to evaluate the situation and take action before the volume fills up. Actually, if I keep getting 26% dedupe, I'll probably push the quota up to around 95% (so I still get the alert before the LUN "fills" due to the LUN size). When I get the alert in SCOM/Appliance Watch, I can log it, send an email, or fire off another script (grow the LUN, run the space reclaimer, whatever) depending on the evaluation logic I write.

In a VDI situation, where you have a gold image that you volume FlexClone, you don't keep data on the image; you keep it external in CIFS/roaming profiles/etc. It's great for deploying lots of exact copies. When it's time to do patch Wednesday, you create a new gold image, FlexClone it, then rebase your VMs. Because volume FlexClones depend on that base snapshot until you do a split (which you wouldn't do if your intent is to dedupe, hence the new gold/rebase when you patch), it doesn't work so well for servers that you want to keep around but continue to patch. By using LUN clones, there is no base snapshot floating around, and I can patch away till doomsday and still run scheduled dedupe and keep a decent dedupe ratio. Many application servers aren't so hot at being cloned, and that's why I LUN clone a sysprep image and then finish up with an unattended/automated install of the app. You'll need to look closely at your situation and decide which type of clone will work best for you.

If you haven't seen the Data ONTAP PowerShell Toolkit yet, it's over here: http://communities.netapp.com/community/interfaces_and_tools/data_ontap_powershell_toolkit. You may also want to stop by blogs.netapp.com/msenviro and see some of the stuff Alex is doing with Opalis integration. Last but not least, if you went to NetApp Insight last week in Las Vegas or are going to the upcoming Insight events in Macau or Prague, check out session MS-41782 or download a copy of the slide deck.

John Fullbright (JohnFul on Twitter)
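In case it's useful, a threshold entry in /etc/quotas for the 200GB LUN's qtree would look something like this (a sketch from memory, so verify against the quotas documentation; the columns are target, type, disk limit, files limit, and threshold, and 150G is just the 75%-of-200GB example from above):

#Quota target       type   disk  files  thold
/vol/Hyperv1/VM1    tree   -     -      150G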
Who owns them now? Yes, the -Name parameter expects a string. You could create an array and pipe that to Set-NaDiskOwner. It's possible to be a bit more creative as well, depending on what you're trying to do. Get-NaDiskOwner returns objects that include a Name property, a string, which is the name of the disk, and on Set-NaDiskOwner the -Name parameter accepts pipeline input by property name...

Let's say that I add a new shelf and want to assign all the disks to a specific controller, controller1:

$controller1 = Get-NaController controller1
Get-NaDiskOwner -OwnershipType unowned | Set-NaDiskOwner -Controller $controller1

Or perhaps I have a shelf that came from another controller, have now attached it to controller1, and wish to take ownership of all the disks with unknown ownership and assign them to controller1:

$controller1 = Get-NaController controller1
Get-NaDiskOwner -OwnershipType Unknown | Set-NaDiskOwner -Controller $controller1

J