Radek, thanks for the responses. I am still a bit unconvinced about b), since I'll be moving SnapMirror volumes between aggregates of different types and sizes. I've always been a fan of VSM, but I want to ensure that the data will be spread equally among all disks; if I need to run reallocate, I can. Will read up more and update this post.
Proceed with caution on DS4243 and ACP: http://communities.netapp.com/message/24987#24987

"A new issue as of 7.3.2P3 (burt 401176)... By default the ACP communicates via SSL. If you have secureadmin (SSL) configured, and/or OpsMgr is set to communicate over SSL, there could be a race condition between ACP and HTTPS which will cause a panic (what happened in my case). There are a couple of workarounds to this burt: disable HTTPS, or disable ACP, or both. Given that DS4243 engineering recommends the use of ACP, they suggest disabling HTTPS (options httpd.admin.ssl.enable off) first to prevent the panic. In the event of a recurrence, they are okay with disabling ACP (options acp.enabled off) too."

Related burts:
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=393357
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=289315
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=403446
Environment
Source: DS14, 144GB drives
Destination: DS4243, 450GB SAS
Data: VMware data with deduplication on, with SMVI snapshots

When considering SnapMirror for migrating data from the DS14 to the DS4243 (for that matter, between aggregates of disparate disk sizes), I have a couple of questions around best practices:

a) What are the dedupe recommendations/best practices - turn it on on the destination while SM is running and set it to auto, or perform dedupe after the SM baseline is done?

b) Since the number of drives and the size of the drives are going to be different on source and destination, is it a best practice/recommendation to run wafl reallocate on the destination? If so, I am guessing that dedupe should be off?

c) Given the block-transfer nature of VSM and b), should we use QSM instead? Any preference?

Looking for an optimal order of things. Thanks for your help in advance!
*shameless plug* - but in the hope that this will help someone. The perl script I talked about earlier uses the Manage ONTAP SDK, SD CLI and SQL commands to perform a refresh of the secondary SQL server from a SnapMirror destination. Of course, SnapDrive makes it all possible. If you need to schedule this set of tasks, this script might be of use. http://rajeev.name/2010/03/04/refreshing-test-dev-sql-environments-using-netapp-technologies/
I don't think there's a CLI, after a bit of research. Also, this might be of use: I ended up writing a perl script that uses the SDK, SD CLI and SQL commands to get as close to the goal line as possible. http://rajeev.name/2010/03/04/refreshing-test-dev-sql-environments-using-netapp-technologies/
The environment in question is very close to what is described in this thread: http://communities.netapp.com/message/7713#7713. Great information, BTW - anyone looking for SQL, SnapDrive and SnapMirror integration should start there. Does the SnapDrive CLI provide any functionality to create FlexClones from SnapMirror destination volumes? We can do this using the GUI; however, I can't seem to find a way to do it over the CLI. The key here is to get SnapDrive to do it so that it is host-consistent. Cheers
"Currently, I am using SnapManager for SQL Server to replicate (snapmirror) hourly full database backups plus transaction logs every 10 minutes from California to New York. How do I test-mount my databases in New York? Do I need SnapManager for SQL Server in New York? I am not too clear on how to bring up my databases in New York. I would like to understand and document the procedure for DR purposes."

If I can re-state it: you are using SMSQL to take SQL-consistent backups and then using SnapMirror to replicate those volumes over to NY - right? On the NY storage system, you'll have those volumes in a snapmirrored state. I believe you can use SnapDrive to mount the LUNs in those volumes on a test server. If you have a FlexClone license, you may also think about taking a clone and working off of that clone volume. The SQL command line can be used to disconnect and connect the databases, if you are looking to script it. I have some scripts that I've put together that can most likely be modified to suit your purpose. You can PM me.
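To sketch the "script it via the SQL command line" idea: a minimal example of building the sqlcmd invocations that take a database offline before the LUN swap and bring it back online afterwards. The server and database names are hypothetical placeholders, and this only builds the commands - running them, and the surrounding SnapDrive steps, is left to your script.

```python
# Sketch of scripting the database disconnect/connect step with sqlcmd.
# Server ("NYSQL01") and database ("SalesDB") names are hypothetical.
import subprocess  # noqa: F401  (used if you actually run the commands)

def build_sqlcmd(server: str, tsql: str) -> list:
    """Build a sqlcmd invocation for a single T-SQL statement."""
    # -S: target server, -b: exit with error code on failure, -Q: run and exit
    return ["sqlcmd", "-S", server, "-b", "-Q", tsql]

def detach_cmd(server: str, db: str) -> list:
    # Force the database offline so its LUN can be disconnected/swapped.
    return build_sqlcmd(
        server, "ALTER DATABASE [%s] SET OFFLINE WITH ROLLBACK IMMEDIATE" % db)

def attach_cmd(server: str, db: str) -> list:
    # Bring the database back online after the LUN is reconnected.
    return build_sqlcmd(server, "ALTER DATABASE [%s] SET ONLINE" % db)

if __name__ == "__main__":
    print(" ".join(detach_cmd("NYSQL01", "SalesDB")))
    print(" ".join(attach_cmd("NYSQL01", "SalesDB")))
    # To actually run one: subprocess.check_call(attach_cmd("NYSQL01", "SalesDB"))
```

In a real refresh script you would call SnapDrive between the two commands to disconnect and reconnect the LUN.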
Wei, you mentioned: "They have to share the loop's bandwidth, 400MB/s". Isn't an FC loop at 4Gbps ~500MB/s, or is my understanding incorrect? Thx
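A quick check of where the two figures may come from, assuming the usual explanation that 4Gb FC uses 8b/10b encoding (10 wire bits per data byte) and ignoring framing overhead:

```python
# Rough arithmetic behind the 400 vs 500 MB/s figures for a "4Gb" FC loop.
nominal_mbit_s = 4000

# Naive conversion: divide by 8 bits per byte -> 500 MB/s
naive_mb_s = nominal_mbit_s / 8

# With 8b/10b encoding, each data byte costs 10 bits on the wire -> 400 MB/s
encoded_mb_s = nominal_mbit_s / 10

print(naive_mb_s, encoded_mb_s)  # 500.0 400.0
```

So the ~400MB/s number likely already accounts for the encoding overhead.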
"If I use a single spare, I have 23 drives for my aggregate. Should I create one large raid group or two smaller raid groups such as a 10+2 and a 9+2 as suggested in an earlier post?"

You had mentioned that you have a clustered system (2 controllers). Would you like to serve data using both of those controllers? In that case, you'll need to split the drives among the two controllers and create individual aggregates from those drives. Based on the info so far, I can take a gander at *a* simplistic config:

24 drives - 2 hot spares (1 per controller) = 22 drives. Assign 11 drives to each controller and create an aggregate out of those. With the default RG size of 16, you'll get 9 data drives and 2 parity drives per controller. Depending on your future plans, this RG size may need tweaking, but we don't have that info.

If you want all the data to be served by only one controller, then you can assign all the drives to that controller and keep 1 (or 2, to enable Disk Maintenance Center) as hot spares. That'll leave you with 22 drives. If there are no future plans to grow this aggregate, then an RG size of 11 will give you 2 raid groups (9D+2P each). The default RG size of 16 will net you 2 raid groups (14D+2P, 4D+2P), which is probably not the best configuration. There are a variety of options in between - so we need more info. (Larger raid groups result in longer rebuild times but give you better capacity.)

There's another live thread here on larger RG rebuild times: http://communities.netapp.com/message/23886

HTH
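The layouts above can be sketched as a small calculator. This assumes RAID-DP (2 parity drives per raid group) and simply fills raid groups up to the configured size, which matches how the drive counts in the post were derived:

```python
# Sketch of the raid-group arithmetic above, assuming RAID-DP
# (2 parity drives per raid group).

def layout(total_drives, spares, rg_size):
    """Split drives into raid groups; return (data, parity) per group."""
    usable = total_drives - spares
    groups = []
    while usable > 0:
        rg = min(rg_size, usable)       # fill each raid group up to rg_size
        groups.append((rg - 2, 2))      # RAID-DP: 2 parity drives per group
        usable -= rg
    return groups

# 11 drives per controller, default RG size 16 -> one 9D+2P group
print(layout(11, 0, 16))    # [(9, 2)]
# All 24 drives on one controller, 2 spares, RG size 11 -> two 9D+2P groups
print(layout(24, 2, 11))    # [(9, 2), (9, 2)]
# Same 22 usable drives at the default RG size of 16 -> 14D+2P and 4D+2P
print(layout(24, 2, 16))    # [(14, 2), (4, 2)]
```

The last case shows why the default RG size gives a lopsided second group on 22 drives.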
[It's been a while since I played with LDAP configuration.] I *think* an individual object can override the server-wide setting by specifying the hash method in the password attribute, depending on the LDAP policy. There's a server-wide setting that dictates how all the password encryptions are done, which is probably where the MD5 hashing you're seeing comes from. You may want to work with the LDAP admin to set the encryption of one test account to other hashing methods ({crypt}, {clear}, {3des}, {ssha}, etc.) and see if that works. I know this does not answer your question specifically (nor mine, for that matter), but HTH.
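For illustration, here is how one of those per-attribute schemes, {SSHA}, is typically formed: base64(sha1(password + salt) + salt) with the scheme tag prefixed, so the hash method travels with the value itself. The password used is a made-up test value; this is a sketch of the encoding, not of any particular LDAP server's policy.

```python
# Sketch: forming and verifying an LDAP {SSHA} userPassword value.
import base64
import hashlib
import os

def make_ssha(password, salt=None):
    """Return a {SSHA}-tagged salted SHA-1 hash of the password."""
    salt = salt or os.urandom(4)
    digest = hashlib.sha1(password.encode() + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode()

def check_ssha(password, stored):
    """Re-hash the candidate password with the stored salt and compare."""
    raw = base64.b64decode(stored[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]   # SHA-1 digests are 20 bytes
    return hashlib.sha1(password.encode() + salt).digest() == digest

hashed = make_ssha("test-password")
print(check_ssha("test-password", hashed))   # True
print(check_ssha("wrong-password", hashed))  # False
```

Schemes like {crypt} or {clear} encode the value differently, but carry the same kind of per-value scheme tag.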
IHAC who is seeing this behavior, and I've verified it in the 7.3.2 simulator:

1. Create a snapshot
2. Rename the snapshot
3. Perform a snap restore of the volume using the snapshot
4. The volume is restored - however, the containing snapshot has the original name created in step 1, not the new name from step 2

Is this intentional/expected?

simtap> snap create sqlproddata snapshot1
simtap> snap list sqlproddata
Volume sqlproddata
working...
%/used    %/total    date          name
---------- ---------- ------------ --------
0% ( 0%)  0% ( 0%)   Feb 09 07:39  snapshot1
simtap> snap rename sqlproddata snapshot1 snapshotnew
simtap> snap list sqlproddata
Volume sqlproddata
working...
%/used    %/total    date          name
---------- ---------- ------------ --------
0% ( 0%)  0% ( 0%)   Feb 09 07:39  snapshotnew
simtap> snap restore -s snapshotnew sqlproddata
WARNING! This will revert the volume to a previous snapshot.
All modifications to the volume after the snapshot will be irrevocably lost.
Volume sqlproddata will be made restricted briefly before coming back online.
Are you sure you want to do this? y
You have selected volume sqlproddata, snapshot snapshotnew
Proceed with revert? y
Tue Feb 9 07:39:58 EST [wafl.snaprestore.revert:notice]: Reverting volume sqlproddata to a previous snapshot.
Volume sqlproddata: revert successful.
simtap> snap list sqlproddata
Volume sqlproddata
working...
%/used    %/total    date          name
---------- ---------- ------------ --------
0% ( 0%)  0% ( 0%)   Feb 09 07:39  snapshot1
simtap>
I would have thought so myself... However, the bug report threw me off. According to the bug report and the workaround explanation, a single "?" matches one AND more characters. I do not have a vscan server to test.
I have a question about vscan and wildcards. I looked through some of the KB articles, bug reports and best practices (links below):
http://communities.netapp.com/docs/DOC-3312
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=81989
https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb32628

The bug report indicates that using a single "?" as a wildcard matches zero or any "remaining" characters; e.g., HT? will match HT, HTM, HTML, HTX, etc. The KB articles indicate that "???" has special meaning and seem to suggest that it works just like "*", so "???" should match any extension (any number of characters, and any characters).

So what's the difference between a single "?" and "???"? Both should match any number of characters and any characters. Am I overthinking this? I am looking for a wildcard pattern that is all-inclusive, so that I can specify specific exclusions in the exclude list. thx
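To make the two readings concrete, here they are expressed as regexes over an extension. The "standard glob" interpretation treats "?" as exactly one character; the bug-report interpretation treats a "?" as matching zero or more remaining characters. Which one vscan actually applies should be verified against a filer - this is only a sketch of the two semantics being discussed:

```python
# Contrast of the two "?" interpretations discussed above for pattern HT?.
import re

def glob_q(ext):
    # Standard glob semantics: "?" matches exactly one character.
    return re.fullmatch(r"HT.", ext) is not None

def bugreport_q(ext):
    # Per the bug report: a trailing "?" matches zero or more characters.
    return re.fullmatch(r"HT.*", ext) is not None

for ext in ["HT", "HTM", "HTML", "HTX"]:
    print(ext, glob_q(ext), bugreport_q(ext))
# Under glob semantics only HTM/HTX match; under the bug-report
# semantics all four match, i.e. "?" behaves like "*".
```

If the bug-report semantics hold, "?" and "???" would indeed both end up behaving like "*", which is what the question is getting at.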
I would lean towards the 3100 series as well. I'd really like to see a 24 port NX style switch for disk/fabric/network connectivity and leave the PCI slots for PAM stuff.
LUN 0 is used by different array vendors for different things. EMC uses LUN 0 as a standard gatekeeper LUN; EMC CLARiiONs use LUN 0 as a virtual LUN to check host connectivity, zoning, etc., and this LUN may disappear after the checks are finished. If I remember right, Hitachi uses LUN 0 for special boot access. When a V-Series expects a LUN, it expects a LUN it can overwrite and use for WAFL operations, so this may be the reason why they say not to use LUN 0... my $0.02. cheers
"The source files were from a volume with security style of UNIX." Should I read that these files are coming from an NFS/Unix system? You might want to ensure that your new volume has the create_ucode and convert_ucode options set to on. To troubleshoot access, you might also want to turn the cifs.trace_login option on at the filer console. This will print debug messages to the filer console upon CIFS access and generally provides good pointers to the problem.

>options cifs.trace_login on

HTH
setfacl is a very Solaris-filesystem-specific call (along with its cousin, getfacl). As such, both the server AND the client had to be Solaris. True, Sun did have support for POSIX ACLs on UFS since the 2.5.1 days; the sideband protocol added POSIX ACL support to Solaris 2.5.1, IIRC. Having said that, please note that those ACLs are now pretty much dead. Solaris ZFS does NOT support them, because it uses the new NFSv4 ACLs, which are now a standard. NetApp supports NFSv4 ACLs. If you need ACLs, go NFSv4-style.
Recently, I had a customer who received a FAS3140 (no disks and no PCI cards in the order) with all the onboard ports set as targets (doh!)

- Went into maintenance mode and attempted to offline the adapter (and change it to initiator mode) - errored out saying it can't offline while the FCP service is stopped
- Couldn't stick a disk shelf on and boot it up fully, because all the adapters were target ports
- Ended up issuing a "set-defaults", according to the customer (anyone know exactly what the default settings are?)

Also, I often see 0a and 0c set to initiators and 0b and 0d set to targets...
Radek, thanks for the post. I also pinged a few folks, searched the 'Net, and overall I am getting an "I am not sure" impression. The closest we come to a solution, without it being a complete one, is CommVault. Even that, however, has a limited support matrix and does not restore Windows ACLs. Apart from the "ease of use" of the NetApp-specific solution and the speed of an NDMP solution, I would be interested in what other people think of this lack of flexibility and what it means to you. Do you see this lack of portability as a disadvantage of NDMP?
NFSv4 uses nfsmap_id domains to identify user and group mappings. On your AIX system, check the NFS settings and ensure that the NFS domain matches the one on the filer. HTH