Does the storage system retrieve the AD security groups at all? Try running commands like "cifs lookup DOMAIN\username" and "wcc -s DOMAIN\username". If the user belongs to a security group and CIFS communication is working properly, you should see the groups listed.
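For example, a hypothetical session (DOMAIN\jdoe is a placeholder account, and the output below is abridged and approximate; the exact format varies by Data ONTAP release, so treat it as a sketch rather than verbatim output):

filer> cifs lookup DOMAIN\jdoe
SID = S-1-5-21-...

filer> wcc -s DOMAIN\jdoe
(NT - UNIX) account name(s):  (DOMAIN\jdoe - jdoe)
        NT membership
                DOMAIN\Domain Users
                DOMAIN\Storage-Admins
        ...

If wcc -s shows no NT membership at all, that points to the group lookup (or domain connectivity) rather than the share or file permissions.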
Also, ONTAP 8 7-Mode may behave differently than ONTAP 7, since the underlying layer is different.
Whoa - If it does behave differently, it's imperative that this stuff be *well* documented. Because many folks are upgrading to ONTAP 8 in predominantly NFS environments and if there are NFS implementation level changes that are not fully backward compatible and documented, then this could mean bad juju. *real* bad juju.
hmm... I thought it would be the other way around. Can you elaborate more? Is this a security group you created? What's the error message? Do any messages show up on the console of the NetApp or in the logs?
Make sure that the volume is exported with root squashing disabled, i.e. make sure you have the anon=0 option set. Here's an example:

$ ssh -l root 172.16.56.141 "version; exportfs"
root@172.16.56.141's password:
NetApp Release 8.0RC3X8 7-Mode
/vol/vfiler_80sql_vol1  -sec=sys,rw,nosuid
/vol/vol0/home          -sec=sys,rw,nosuid
/vol/volroot64          -sec=sys,rw,anon=0
/vol/vol0               -sec=sys,rw,anon=0,nosuid
/vol/src732_mir         -sec=sys,rw,anon=0,nosuid

Now, as root:

sh-3.2# mount 172.16.56.141:/vol/volroot64 /mnt
sh-3.2# cd /mnt
sh-3.2# mknod blk1 b 0 6
sh-3.2# ls -l blk1
brw-r--r--  1 root  wheel    0,   6 May 15 21:24 blk1
sh-3.2#

HTH
I'm in the process of decommissioning a file server but would like to configure the host name of the file server as a CNAME (alias) of the NetApp. I have done this, but I get an error that the share is not accessible. I can ping the CNAME and I get a reply from the NetApp's IP address. Do I need to do anything on the NetApp to allow the additional hostname?

You should add the file server name as a NetBIOS alias on the storage controller. Add the hostname of the file server to the /etc/cifs_nbalias.cfg file and load those entries using the cifs nbalias load command. Additional reading: https://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs1857
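For example, a quick sketch ("oldserver" is a placeholder for the decommissioned file server's name, and wrfile -a is just one way to append the entry to the config file):

filer> wrfile -a /etc/cifs_nbalias.cfg oldserver
filer> cifs nbalias load

Then test from a client against the alias (e.g. \\oldserver\share) to confirm the share is reachable under the old name.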
Those interfaces need to be configured with NFO (negotiated failover):
Enable NFO on the interface (ifconfig e0a nfo)
Set/check the CF policy for triggering a takeover in case of NIC failure (options cf.takeover.on_network_interface_failure.policy)
Enable takeover on network interface failure (options cf.takeover.on_network_interface_failure)
Additional reading: http://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs2310
HTH
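A minimal sketch of that sequence on one node (e0a is a placeholder interface; repeat for every NFO interface and on the partner node):

filer> ifconfig e0a nfo
filer> options cf.takeover.on_network_interface_failure on
filer> options cf.takeover.on_network_interface_failure.policy

Running the options command with no value just prints the current policy setting. Remember to add the ifconfig line to /etc/rc as well so the nfo flag persists across reboots.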
vif create lacp vif0a -b ip e0a e0b
vlan create vif0a 90 101
ifconfig vif0a-90 10.x.x.x netmask 255.255.255.0 (fill in actual IP address and correct netmask)
ifconfig vif0a-101 10.x.x.x netmask 255.255.255.0 (fill in actual IP address and correct netmask)

Looks good overall. I'll let others chime in and give their feedback. I would add a couple of notes, however. If you want the partner filer to take over these interfaces in the event of a CF takeover, you might want to stick the "partner <name>" option at the end of the ifconfig statement, such as:

ifconfig vif0a-90 10.x.x.x netmask 255.255.255.0 mediatype auto partner vif0b-90

Also, for the VLAN interface that is designated for NFS, you might want to turn off WINS, since that interface will only carry NFS traffic, such as:

ifconfig vif0a-101 10.x.x.x netmask 255.255.255.0 mediatype auto partner vif0b-101 -wins

HTH
Yes, for the most part. You can create a virtual interface (vif) using e0a and e0b. Depending on the switch capabilities, it can be an LACP vif or a multi-mode vif. Both types aggregate the links so you have two links to use (2 x 1Gbps links in the vif). On that vif, you can create CIFS and NFS VLANs (if they are indeed in different VLANs). Doing so will result in two VLAN-specific interfaces that you can then configure with IP addresses etc.
Hmm...
http://now.netapp.com/NOW/knowledge/docs/ontap/rel732_vs/html/ontap/smg/GUID-ED095B97-E10C-4C7F-91C9-AC756D3C9C64.html

If you are configuring ACP for disk shelves attached to an active/active configuration, you must supply the same ACP domain name and network mask for both systems.
Attention: Do not connect the ACP port to a routed network, and do not configure switches or hubs between the ACP port and the designated Ethernet port. Doing so is not supported and will cause interruptions in service.
Once you select a domain name and network mask for the interface, the ACP processor automatically assigns IP addresses for traffic in the ACP subnet. This assignment also includes the storage controller network interface that you selected for ACP traffic.

Do we have conflicting docs here? 'Cause you mentioned that somewhere it says a switch can be used. (BTW - what's the ETA on the bug resolution, anyone know?)
"Configuring a system with a 64-bit root aggregate in 8.0 is not recommended by NetApp"

Why is this the case? Is there a technical and/or support reason?

"However, as an additional piece of information, autoroot is allowed in a 64-bit aggregate to bring up the system quickly when the existing root can't be brought online. After the system is brought up, it is recommended that the root volume be switched to a 32-bit volume as soon as possible."

Will autoroot be created in a 32-bit aggregate if one is present? Also, why is the 32-bit aggregate the default, if 64-bit is the recommended aggregate type for 8.0 and above?
Environment: 8.0, 7-Mode

It turns out that a root volume in 8.0 cannot be migrated to a 64-bit aggregate. I would like to find out why this is the case. Also, is it a best practice to create 64-bit aggregates on 8.0 systems? If it is, why is it not the default option? Thanks
I have another question in mind since we are on the topic of aggregate design.
As I understand it, the max possible RAID group size is 16, so it is possible for us to create an aggregate with a single RAID group of 14 data + 2 parity drives, am I right?
Will it in turn give better performance, since it has more spindles?
What advantages do I have in maximizing an aggregate to 23 disks, other than having a single larger aggregate to manage?

Yes, you can create an aggregate with 16 drives with the RG size set to 16. Remember that an aggregate will have multiple RAID groups underneath it as it is created or as it grows bigger. As I said before, if you want to maximize aggregate capacity and get the aggregate size as close to 16TB as possible, then the optimal configuration of that aggregate would be not the default RAID group size of 14 but a RAID group size of 12. Just doing some quick math here with the RAID group sizes for 23 drives:

RG size = 10: 3 RAID groups, with 3 drives in the last RAID group
RG size = 11: 2 RAID groups, with 1 drive left over
RG size = 12: 2 RAID groups, with 11 drives in the last RAID group
RG size = 13: 2 RAID groups, with 10 drives in the last RAID group
RG size = 14: 2 RAID groups, with 9 drives in the last RAID group
RG size = 15: 2 RAID groups, with 8 drives in the last RAID group
RG size = 16: 2 RAID groups, with 7 drives in the last RAID group

Given all of these combinations, I'd take an RG size of 12, which has an even balance of disks across my RAID groups. The aggregate will perform better because it has more spindles and a more balanced and uniform RAID layout.
I don't quite get it: are you proposing 2 different ways of carving the RAID groups depending on the RAID group size?
Not really. If you specify a RAID group size of 12 when creating the aggregate and select 23 drives to be added to it, it will automatically create the two RAID groups. IIRC, there's also a "dry run" option with "-n" that will show the layout of what it would do, without actually doing it. In order to get as close to the 16TB limit as possible, the configuration is to include 23 drives (and also limit parity drives to 4) as indicated above.
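For example, a rough sketch of the dry run (aggr1 and the plain disk count are placeholders; the output format varies by release, and the real create is the same command without -n):

filer> aggr create aggr1 -n -t raid_dp -r 12 23

This prints the disk layout it would use (two raid_dp groups for the 23 disks) without actually creating anything.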
Consider this (I tend to do math in a roundabout way, so please bear with me...):

Total # of drives = 3 x 24 = 72
# of hot spares = 4 (to enable the disk maintenance center, 2 hot spares are required per controller)
# of active drives = 68
# of active drives per controller = 34

Using an RG size of 12, you get 2 aggregates per controller:
Aggregate 1 (RG0 = 10D+2P, RG1 = 9D+2P). Total # of drives in aggregate 1 = 23
Aggregate 2 (RG0 = 9D+2P). Later, you can add another 12 drives to this aggregate (10D+2P) and bring it to maximum efficiency as well.

With the default RG size of 14, in order to maximize spindles in a 16TB aggregate, I'd have to use these RG layouts: 12D+2P and 7D+2P. Not sure if I like that.

Does this make sense?
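A rough sketch of what that layout might look like as commands on one controller (aggregate names are placeholders; add -n first to preview the layout, as in the dry-run example earlier):

filer1> aggr create aggr1 -t raid_dp -r 12 23
filer1> aggr create aggr2 -t raid_dp -r 12 11

Later, growing aggr2 by another 12 drives would be along the lines of: aggr add aggr2 12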
You can prevent root squashing from occurring for this mount point and then, as root, make the changes. Something like below.

On the NetApp:

732sql@simtap1> exportfs -p rw,anon=0 /vol/RHEV
732sql@simtap1> exportfs
/vol/RHEV       -sec=sys,rw,anon=0

On the client:

sh-3.2# mount 172.16.56.153:/vol/RHEV /RHEV/
sh-3.2# df -h
Filesystem               Size   Used   Avail  Capacity  Mounted on
172.16.56.153:/vol/RHEV  2.0Gi  113Mi  1.9Gi      6%    /RHEV
sh-3.2# cd /RHEV/
sh-3.2# mkdir Images
sh-3.2# chown -R 36:36 Images
sh-3.2# chmod u+s Images
sh-3.2# chmod g+s Images
sh-3.2# ls -ltr
total 208835
drwxrwxrwx  5 root  wheel  4096 Mar 13 22:51 .snapshot
drwsr-sr-x  2 36    36     4096 Apr 25 08:38 Images
Great post - added a lot of clarity to my response. Thank you!

"You can't mix SAS & SATA drives inside the same shelf but you CAN mix them inside the same stack -- this is huge and much more flexible than before (should only do one transition between SAS and SATA within a stack)."

I think this point itself might warrant a separate thread. This ability to mix-n-match SAS/SATA shelves within a stack might well bring forth its own set of interesting scenarios. While I do agree that they'll support mixing and matching shelves in a stack, I am sure there are some best practices, and more will come out (sort of like: they gave us the rope, we can choose to use it or hang ourselves with it). For example: 1 filer, 3 DS4243 SATA and 3 DS4243 SAS shelves. Which is better in the following scenarios?

1) Create two stacks and put the 3 SATA shelves in one and the 3 SAS shelves in the other
2) Create 1 stack and group the 3 SAS shelves together and the 3 SATA shelves together (one crossover point between SAS and SATA)
3) Create 1 stack and randomly order the disk shelves (SAS+SATA+SAS+SATA+SAS...)

If ports are not an issue (yeah - right!), then the order might be 1), 2). But if they are, then I would reckon that 2) would be preferable to 3), and I also recall reading somewhere about this.
For SAS (DS4243), NetApp uses the term stack, as opposed to loop (loop is the term used for DS14 disk shelves). With the exception of FAS2XXX systems, each SAS stack supports a maximum of 10 disk shelves, which brings the # of disk drives to 240. I believe for FAS2XXX, the limit is 4 DS4243 shelves per stack, which brings the # of disk drives to 96. For a FAS3170A with 840 disk drives, you'll need 3.5 stacks (i.e. 4 stacks). So per controller, you'll need 8 ports (4 x 2), which is 2 quad-port SAS HBAs per controller.
@scottgelb thanks for the helpful answer. I read the same notes prior to posting as well. I guess I read the notes as a caution to the administrator and not as "SnapDrive will prevent any creation of LUNs or any operations on the root volume", which is what I saw in testing - the root volume was not even listed in SnapDrive. In the end, I would second that behavior - and would have liked to see this listed in the docs... thx
"I need to share the same volume between multiple Active Directory domains (no trust between the domains)."

Given this clarification from the author about the need, I am actually NOT sure MultiStore is going to help all the way. Agreed that MultiStore will create virtual partitions, but sharing the same volume between two domains is not something that can be accomplished by MultiStore, IMHO. If you have a FlexVol as part of a vFiler unit, you can create a CIFS share for it in that vFiler unit, but you cannot create a share for that same volume in the parent vfiler (vfiler0), as far as my testing goes.
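A rough sketch of the test I'm describing (vf1, volA and the share names are placeholders):

simtap> vfiler run vf1 cifs shares -add share_vf1 /vol/volA
simtap> cifs shares -add share_vf0 /vol/volA

In my testing, the first command works because volA belongs to vf1, while the second, run in the parent vfiler0 context, does not let you share the same volume.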
ONTAP: 7.3.2 and 8.0
Platform: Simulator
Feature: MultiStore
Protocol: iSCSI
Host OS: W2K8 R2
SnapDrive: 6.2

I am finding that SDW 6.2 does not detect LUNs that are presented in vFiler units when the LUN is in the vFiler "root" volume (where the /etc directory resides). Is this expected behavior?
And for the docs...

"You can store up to 255 Snapshot copies at one time on each volume."

http://now.netapp.com/NOW/knowledge/docs/ontap/rel733/html/ontap/onlinebk/GUID-FB79BB68-B88D-4212-A401-9694296BECCA.html
Yes - this card, X2054B-R6, is an initiator-only card. You can use this card for disk shelves and for tape, but you cannot use this card to present LUNs to your fabric. As was previously mentioned, the onboard ports can be changed to target mode and used to present LUNs. HTH
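For reference, a rough sketch of flipping an onboard port to target mode (0c is a placeholder adapter; the adapter has to be offlined first and a reboot is required for the change to take effect):

filer> fcadmin config
filer> fcadmin config -d 0c
filer> fcadmin config -t target 0c

fcadmin config with no arguments lists the onboard FC adapters and their current initiator/target state.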
Mar 18 16:41:00 [server] slapd[74]: SASL [conn=1204363] Failure: Couldn't find mech DIGEST-MD5
Mar 18 16:41:00 [server] slapd[74]: bind: invalid dn ([ldap_admin])

Looks like your LDAP server (slapd) is receiving a SASL bind call (either the server is set to accept only SASL connections, or it's the client requesting it). Note that this has nothing to do with user authentication and its encryption (think of it as a handshake that happens well before that). It seems like the client is requesting a SASL bind with the DIGEST-MD5 mechanism, which the server is not configured to support; with that being the case, the subsequent LDAP bind DN is failing. If this handshake is not successful, then no subsequent LDAP queries are allowed. HTH
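If the client here is a 7-Mode filer and the goal is a plain simple bind rather than SASL, the bind credentials are set with the ldap options; a rough sketch (the server name, base and DN below are placeholders, and the available options vary by release, so check "options ldap" on your system):

filer> options ldap.servers ldapserver.example.com
filer> options ldap.base "dc=example,dc=com"
filer> options ldap.name "cn=ldap_admin,dc=example,dc=com"
filer> options ldap.passwd ********
filer> options ldap.enable on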