Ah... yes, I should have been more specific, sorry. This is in the /etc/log dir of the physical filer (vfiler0). The vfiler's /etc/log dir only contains the *.alf and *.evt files.
Hi Scott, Thanks for your reply. Do you mean /etc/log/auditlog? If so, yeah, I checked in there, but didn't see anything relating to the change to the CIFS share either. Craig
Hi All,
Sorry, I can't help feeling I should know this, but I just can't find what I'm looking for. We have a number of vfilers providing CIFS file sharing, and a team of 1st line support people who have rights to create, remove and modify shares via the Windows MMC. I'd like to keep a log of these changes, but I can't seem to find out how/where to do this. I've turned on CIFS audit logging, but only seem to see login/logout events. I've also turned on the option cifs.audit.account_mgmt_events.enable, but it doesn't seem to have changed what is logged in the event logs.
Anyone have any clues on this?
Thanks, Craig
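PS - for reference, the sort of options involved (as far as I can tell) are the ones below. I've enabled the audit logging and the account_mgmt one; what I can't work out is which combination, if any, captures the share changes:

options cifs.audit.enable on
options cifs.audit.logon_events.enable on
options cifs.audit.file_access_events.enable on
options cifs.audit.account_mgmt_events.enable on
cifs audit save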
Perhaps some definitions will provide some clarity:

Volume Guarantee = reserves space in the aggregate. If set to 'volume', the volume reserves its full size from the aggregate regardless of whether it has any data in it or not. If set to 'none', it is effectively thin provisioned. (vol options guarantee)

LUN Space Reservation = if set to enabled, reserves the space for the LUN in the volume whether there is any data in it or not. (lun set reservation ...)

Fractional Reserve = reserves blocks in the volume to use in the event that the volume fills up (often turned off these days). It only comes into effect when there are one or more snapshots. If enabled, you can see how much is being used with df -r.

Snapshot Reserve = an area set aside for snapshots. Snapshots do not need it to exist - if it's set to zero, snapshot data consumes space in the volume just like the rest of the data.

Here's how I normally set things up, as an example: set your vol size to the size of the LUN plus about a third, so let's say 1.5TB in your case, but set the volume guarantee to 'none'. This is more space efficient, but still allows the LUN to use its full size if it wants to. Set the snapshot reserve to zero (snap reserve doesn't have much use with LUNs). Leave fractional reserve set to zero, unless you're paranoid and have a decent budget for storage. Set LUN space reservation to disabled; combined with a volume guarantee of 'none', this will let you realise any dedupe savings later on (since you're using VMware) - see the dedupe DIG for more info. This way your LUN can grow to the full 1.1TB if it likes, and there is still ~400GB for snapshot space. Now, if your snapshots grow beyond the 400GB available, your LUN will go offline, so to prevent this set volume autosize to on, with the max at 2t and the increment at, say, 100g.

There are all sorts of different ways of doing it, and different people have different preferences, so treat this as an example rather than a set of rules. Try this and see how you get on.
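If it helps, the commands for that setup would be roughly along these lines (volume/LUN names and sizes are just placeholders, so adjust to suit):

vol options vmvol guarantee none              (thin provision the volume in the aggregate)
snap reserve vmvol 0                          (no snapshot reserve)
vol options vmvol fractional_reserve 0
lun set reservation /vol/vmvol/lun0 disable   (thin provision the LUN in the volume)
vol autosize vmvol -m 2t -i 100g on           (grow in 100g steps, up to 2t max)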
Hello everyone, We are planning to upgrade from VSC 2.1.3 to VSC 4.0 in the next few weeks. In the Interop Matrix there is a note stating "vCenter Linked Mode is NOT supported with VSC for VMware vSphere". Since we are currently using linked mode, and I can't see any mention of this limitation with 2.1.3, does anyone know whether this is a show stopper, or have any further information on this limitation? Thanks in advance, Craig
Looking at it here, it checks qtrees every 8 hours by default, as opposed to 15 mins for volumes. I suppose it depends how many qtrees you have, but you could increase the monitoring frequency under Setup -> Options -> Monitoring -> Qtree Monitoring Interval
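If you'd rather do it from the DFM CLI, I believe the same setting is exposed as a dfm option - I don't have the exact option name to hand, so check the list output first (the name below is from memory):

dfm option list                                  (find the qtree monitoring interval option and its current value/format)
dfm option set qtreeMonInterval=<new interval>   (option name from memory - match the value format shown by the list)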
Check if DFM knows about your vfiler (Home -> Member Details -> Virtual Systems). I had a similar issue - DFM knew about the volume and qtree, but for some reason hadn't discovered the owning vfiler. Not sure if that's it, but worth a shot!
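You can also check/nudge this from the DFM CLI - the hostname below is a placeholder:

dfm host list              (the vfiler should show up alongside its hosting filer)
dfm host discover filer01  (force a rediscovery of the hosting filer if the vfiler is missing)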
Not really, as NFS (v3) is a stateless protocol. The closest you can get, at least that I'm aware of, is the NFS per-client stats. This will show you the active NFS clients using your vfiler, but not which exports they are mounting or which users are accessing them, I'm afraid:

vfiler context <vfilername>
options nfs.per_client_stats.enable on
nfsstat -l
options nfs.per_client_stats.enable off
vfiler context vfiler0

WARNING: make sure you turn client stats off afterwards, and don't leave it running for more than a minute or so as it has a performance cost.
We've been moving a number of VSM destinations to a different filer. What I do is remove the source and dest volumes from the dataset (so the mirror now appears in 'External Relationships'), then do the destination volume move. Once you've resynced the mirror to the new destination and DFM has discovered the new relationship, you can import it via 'External Relationships'. I use the GUI to do this, but I'm sure there are CLI equivalents. Doesn't help you now, but for future reference...
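For reference, the filer-side piece of that is just standard SnapMirror, something like this on the new destination (names are placeholders; if you've physically moved or copied the old destination volume you'd use snapmirror resync instead of re-baselining):

vol create new_dst aggr1 500g                     (same size or larger than the source volume)
vol restrict new_dst
snapmirror initialize -S srcfiler:srcvol new_dst  (run on the new destination filer)
snapmirror status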
From http://support.netapp.com/eservice/ems?emsAction=details&eventId=253662&software=ontap&emsId=cf.disk.inventory.mismatch&emsversion=0

"This message occurs when one of the nodes in a high-availability (HA) pair has reported this disk in its disk inventory, but the HA partner node has not. This might be due to one of following reasons: (1) One node can see the disk, but the other node cannot. (2) Ownership of the disk has changed. (3) The disk has either been failed or unfailed. (4) The disk has been inserted or removed."

What does storage show disk -p show? Any missing paths? How about disk show / disk show -n? Are all disks showing correct ownership?
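In other words, compare the output of these on both heads (nothing exotic here):

storage show disk -p     (both paths to every disk should be visible from both nodes)
disk show -v             (check every disk shows the expected owner)
disk show -n             (lists any unowned/unassigned disks)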
I'm sure there are KB articles about this, but I don't have time to go hunting right now, sorry. As a quick example: if you delete a load of data from the client side (e.g. NTFS), the client marks the blocks as free, as opposed to physically zeroing out the data, right? Down at the storage level, WAFL has no way to know these blocks have been deleted, so when you write more data to the LUN it will consume new blocks in the volume. Typically, as a LUN ages, you will find the NetApp side shows the LUN at, or close to, 100% full, but the client's filesystem may still have plenty of space. This is by design, and often not a problem - although it looks a bit odd at first. Check out SnapDrive's Space Reclaimer feature if you're using Windows - this will reclaim those free blocks at the WAFL end if required. The docs around this will also explain why you see the difference in more detail than I have here. Regards, Craig
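PS - the Space Reclaimer side can also be driven from sdcli if you prefer; the switches below are from memory, so treat them as an assumption and check the sdcli help first:

sdcli spacereclaimer analyze -d E    (estimate how much can be reclaimed on the E: drive)
sdcli spacereclaimer start -d E      (run the actual reclaim)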
I've seen this after moving VSM destination volumes to a different filer/aggr. What I did was go into the PM GUI, edit the dataset, then remove the offending physical resources from the Primary and Mirrors. Anything taken out in error can easily be re-added using the Import in 'External Relationships'.
Hope that helps, Craig
Hi, What are your export options? How are you trying to mount it in OSX (e.g. via Finder or the command line)? Does the permissions error occur when trying to mount, or does it mount OK but you can't access the mounted filesystem? Thanks, Craig
Hi,
Are there any relevant messages in the vCenter logs, or in /etc/messages on the filer?
We have a very similar environment, except we use NFS for our datastores, with RDMs via FC as well. I've not seen this here, but what I can add is that we use SnapDrive in the guest to create and map the RDMs, which works well. Your underlying problem may be something else, in which case SnapDrive may run into similar issues, but in case it helps, the process we use is:
1. Create the volumes/qtrees, but leave them empty.
2. Install SnapDrive on the guest, making sure you add the vCenter server and SMVI hostname. Use a service account which has access to the filer.
3. Start up the SnapDrive GUI. You should see the VMDKs you already have for C:\, etc. under the disks list (or sdcli disk list).
4. Use the 'Create Disk' wizard to create the LUN(s). When you come to select the igroup, use 'manual' and select your existing igroup for the ESX HBAs.
All being well, SnapDrive will create the LUN, make sure the volume options are set correctly, present it to the guest, format it, etc.
Hope this helps!
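Before running the wizard, a quick sanity check with the two commands already mentioned above doesn't hurt (run from the guest):

sdcli smvi_config list   (confirms the SMVI/vCenter details SnapDrive will use)
sdcli disk list          (should already show the existing VMDKs)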
That is a possibility, yes. But... you could say the same about running high-I/O dev/test on the same filers as production, whether they are clones or not. Some things to consider:
1. If you are running dev/test on the same aggregate as production, the disk load is shared whether you have FlexClones or not.
2. All data on the controller uses the same NVRAM/CPU/etc., so FlexClone or not, you are sharing resources anyway.
3. If this is a concern, consider cloning from a SnapMirror destination or a Data Guard copy (if using Oracle with DG) on a different filer (a rough example is in the PS below).
4. Use Flash Cache - the shared blocks used in the clone are more likely to be in the cache (as you have multiple clients accessing the same blocks), which will reduce disk load.
Ultimately, it can be a trade-off between the benefits/savings from FlexClone and the cost of dedicated disks/filers/etc. We are using clones of Oracle DBs on some prod filers, and it was also a concern at the design stage. We took a shot at it and it's working OK one year on. Any critical DBs with high I/O (such as data warehouse applications) are cloned at DR from the mirrors.
Regards, Craig
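PS - to illustrate point 3, a clone taken at the DR end is just a normal FlexClone off the mirror destination. Volume and snapshot names here are made up:

vol clone create dev_clone -s none -b dst_oravol nightly_snap

('-s none' keeps the clone thin provisioned; the snapshot must already exist in the destination volume.)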
Hi Raj, Not a lot of info here but we have a number of HPUX boxes with Oracle (10 & 11), SMO, SDU over 10GbE and (d)NFS. Works very well!! We're not using clustering, RAC or block based storage though, so can't comment on that... For specific compatibility, you should check the Interop Matrix here: http://support.netapp.com/matrix/ Regards, Craig
Hi Casey,
Here's some info from memory... Firstly, the cache hit rate shown in the sysstat output isn't showing the same thing as the flexscale stats, so ignore the sysstat one for now.
With a low hit rate you should look into what type of data you are missing, and what you are caching. Using stats show ext_cache_obj will give you more info. Better still, run it over a few minutes using stats start -I cache ext_cache_obj, wait for a minute or two during a busy period, then stats stop -I cache. This will give an output like this (notes added for some counters to check):

StatisticsID: cache
ext_cache_obj:ec0:type:IOMEM-FLASH
ext_cache_obj:ec0:blocks:67108864
ext_cache_obj:ec0:size:256 <------- size of cache in GB
ext_cache_obj:ec0:usage:92% <------- how full the cache is (should be >90% unless you've just booted)
ext_cache_obj:ec0:accesses:32047/s <----- number of times the cache was accessed per sec
ext_cache_obj:ec0:disk_reads_replaced:3768/s <---- how many disk reads were replaced by the cache
ext_cache_obj:ec0:hit:17898/s <---- average number of hits per sec (add to the number of misses below to get the total)
ext_cache_obj:ec0:hit_normal_lev0:16517/s <---- 'normal' data hits. Here you can see most of the hits are 'normal' data
ext_cache_obj:ec0:hit_metadata_file:907/s <--- this and the next few are the types of hit
ext_cache_obj:ec0:hit_directory:42/s
ext_cache_obj:ec0:hit_indirect:435/s
ext_cache_obj:ec0:total_metadata_hits:1385/s
ext_cache_obj:ec0:total_metadata_misses:434/s <--- if this is high relative to the total operations, you should consider metadata cache mode
ext_cache_obj:ec0:miss:14149/s <---- number of misses per sec
ext_cache_obj:ec0:miss_metadata_file:85/s <---- this and the next few are the types of miss
ext_cache_obj:ec0:miss_directory:1/s
ext_cache_obj:ec0:miss_indirect:347/s
ext_cache_obj:ec0:hit_percent:55% <--- hits as a percentage of total accesses
ext_cache_obj:ec0:inserts:426/s
ext_cache_obj:ec0:inserts_normal_lev0:337/s
ext_cache_obj:ec0:inserts_metadata_file:49/s
ext_cache_obj:ec0:inserts_directory:6/s
ext_cache_obj:ec0:inserts_indirect:39/s
ext_cache_obj:ec0:evicts:11/s <---- the number of blocks evicted (i.e. not accessed frequently). If this is very high you *may* need more Flash Cache
ext_cache_obj:ec0:evicts_ref:6/s
ext_cache_obj:ec0:readio_solitary:1524/s
ext_cache_obj:ec0:readio_chains:3768/s
ext_cache_obj:ec0:readio_blocks:15788/s
ext_cache_obj:ec0:readio_max_in_flight:511
ext_cache_obj:ec0:readio_avg_chainlength:4.19
ext_cache_obj:ec0:readio_avg_latency:0.56ms
ext_cache_obj:ec0:writeio_solitary:0/s
ext_cache_obj:ec0:writeio_chains:6/s
ext_cache_obj:ec0:writeio_blocks:426/s
ext_cache_obj:ec0:writeio_max_in_flight:182
ext_cache_obj:ec0:writeio_avg_chainlength:64.00
ext_cache_obj:ec0:writeio_avg_latency:2.14ms
ext_cache_obj:ec0:invalidates:514/s <--- data in the cache that's been overwritten on disk (and thus 'invalidated')

Based on this you may need to adjust your cache settings (options flexscale) to cache different data. You should also check your VM alignment - misaligned blocks may affect caching efficiency. If that looks OK, dedupe may also help. If you have Performance Advisor you can also collect these stats and review them regularly if required.
Hope that helps, Craig
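PS - to be explicit about the flexscale knobs I mean, it's roughly this (defaults vary by ONTAP version, so check what your system shows first):

options flexscale                            (shows the current flexscale settings)
options flexscale.normal_data_blocks off     (metadata-only caching - consider if metadata misses dominate)
options flexscale.lopri_blocks on            (also cache low-priority blocks, e.g. sequential reads)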
Just to be clear: VSC will not snapshot the data held in the RDMs, only the datastores containing the VMDKs. Snapshots of the volumes containing the RDM LUNs will need to be done separately via SnapManager/SnapDrive or similar. Correct?
Hi Thomas,
Thanks. We did try that, but the combination of adding the new SCSI controllers and changing the guest config seemed to break SnapDrive after bringing the VM back up. The SDW GUI just hung and a service restart didn't help. I didn't troubleshoot it at the time, but maybe I'll try it again when I get time. We suspected the problem was the addition of the SCSI controllers after SnapDrive was installed. We couldn't add the SCSI controllers first, as you need to attach a disk to the VM to create them (which takes us back to creating everything manually). That said, we're going to see if we can add the SCSI controllers to the template's vmx file before we deploy it. If that works, we'll try creating the RDMs in SnapDrive, then reconfiguring them onto the correct SCSI controllers as you describe. I'll report back...
Out of interest, does anyone know if this capability is pencilled in for any future release?
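For anyone following along, the vmx edit we have in mind is simply pre-declaring the extra controllers in the template, something like the lines below. This is untested as I said, and the controller numbers and virtualDev type are just our working assumption:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "none"
scsi2.present = "TRUE"
scsi2.virtualDev = "lsilogic"
scsi2.sharedBus = "none"

(We'd swap "lsilogic" for "pvscsi" if we standardise on the paravirtual controller.)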
Hi All,
We're in the process of planning an upcoming Windows SAP environment hosted on VMware ESXi 5. We have a particular disk layout in mind, using a combination of VMDKs (over NFS datastores) and FC RDMs. As per the requirements / best practice for SAP, we have around 7 FC RDMs plus 4 VMDKs (plus OS/pagefile/etc.) per guest. We plan to use 3 or 4 separate VM SCSI controllers to separate the datafile, logfile, binaries and OS disks. This is for performance reasons and because we're closing in on the 15-disk limit per SCSI controller. So, an example of the config we're looking for is this:
We produced this by manually creating/mapping the LUNs on the filers and mapping them through to the guest in vCenter. However, we would prefer to use SnapDrive (we're using version 6.3.1 on Windows 2008) to do all of this for us, but there doesn't seem to be a way to tell SnapDrive which guest SCSI controller to use when creating/connecting the RDMs. Does anyone know if this is possible with SnapDrive (apologies if I've missed something), or if not, how SnapDrive decides which controller to use when creating new LUNs?
Thanks in advance, Craig
Hmmm, strange... How about VSC -> Backup and Recovery -> Setup? Check your vCenter server, port and user, and check that the user has the correct rights in the vCenter roles. It's also worth checking in vCenter for roles applied at different levels within the hierarchy (e.g. at DC level, cluster, folder, etc.) as these can override inherited rights. Have you checked the VSC plugin is correctly registered with vCenter? E.g. http://<vcenter_server>:8143/Register.html. Sorry if you've checked all this already, or if I'm teaching you to suck eggs.
I'm using SDW 6.3.1, but I'm assuming things are more or less the same with 6.4. Things to check for starters:
1. Check you have VMware Tools installed in the VM, then check on the SDW GUI overview screen that the 'System Type' is showing as a virtual machine, not a physical machine.
2. Check you have configured the SMVI server details (typically the vCenter server) using sdcli smvi_config list. If not, or if it's incorrect, correct this.
3. Right-click the SnapDrive host in the GUI and check your vCenter settings.
4. Restart the SnapDrive service and check again.
The issue sounds like SnapDrive is not talking to vCenter. I've seen a strange issue to do with VMware Tools' registry entries, but I'd need to dig the details out - if the machine shows as a virtual machine, then that is not the problem.
Hope that helps,