Happy to help. I agree with Bino: in general you'd split the disks across the two heads evenly, especially with "only" 24 disks. We have several 2240-4s (same brains, different disks) and they barely touch 5% CPU serving CIFS to sites of around 80 users from one head and NFS for VMware from the other.
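If you want a feel for your own CPU load before deciding on a split, sysstat on each head will show it (7-mode; the 1-second interval is just an example):

filer> sysstat -x 1

Watch the CPU column during a busy spell and Ctrl-C when you've seen enough.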
As previously said, the network is a possible cause. Other things to check: the time on the filer is too far off the time on the DC, or the AD object for the filer has been deleted or changed by a Windows admin. If all users are experiencing the problem, you may need to rebind it to AD - run cifs setup at the command prompt.
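A rough sequence from the filer console to check and rejoin (a 7-mode sketch):

filer> date              (compare with the time on the DC - Kerberos only tolerates about 5 minutes of skew)
filer> cifs testdc       (checks the filer can locate and talk to a domain controller)
filer> cifs terminate    (stops CIFS - this disconnects users, so pick your moment)
filer> cifs setup        (walks you through rejoining the domain)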
Hi

By default what happens is the disks are split across the heads, with each head owning half. To change the owner of a disk you're going to need to get down and dirty at the command line. Connect via SSH/Telnet/serial/SP and type:

disk show

You should get something that looks a bit like this:

DISK       OWNER                       POOL   SERIAL NUMBER   HOME
0a.00.14   myfiler01-c1(1234567414)    Pool0  Z1N2L1JM        myfiler01-c1(1234567414)
0b.00.15   myfiler02-c1(1234567663)    Pool0  Z1N2L1ZF        myfiler02-c1(1234567663)
0b.00.3    myfiler02-c1(1234567663)    Pool0  Z1N2LA4X        myfiler02-c1(1234567663)
0b.00.23   myfiler02-c1(1234567663)    Pool0  Z1N2L23X        myfiler02-c1(1234567663)
0a.00.4    myfiler01-c1(1234567414)    Pool0  Z1N2LGYN        myfiler01-c1(1234567414)
0b.00.19   myfiler02-c1(1234567663)    Pool0  Z1N2L2FE        myfiler02-c1(1234567663)
0b.00.13   myfiler02-c1(1234567663)    Pool0  Z1N2L1XJ        myfiler02-c1(1234567663)
0b.00.1    myfiler02-c1(1234567663)    Pool0  Z1N2LH25        myfiler02-c1(1234567663)
0b.00.17   myfiler02-c1(1234567663)    Pool0  Z1N2LBG8        myfiler02-c1(1234567663)
0b.00.21   myfiler02-c1(1234567663)    Pool0  Z1N2L9DJ        myfiler02-c1(1234567663)
0a.00.22   myfiler01-c1(1234567414)    Pool0  Z1N2L3W2        myfiler01-c1(1234567414)
0a.00.0    myfiler01-c1(1234567414)    Pool0  Z1N2L1Y9        myfiler01-c1(1234567414)
0a.00.12   myfiler01-c1(1234567414)    Pool0  Z1N2LGRM        myfiler01-c1(1234567414)
0a.00.10   myfiler01-c1(1234567414)    Pool0  Z1N2L2TV        myfiler01-c1(1234567414)
0a.00.18   myfiler01-c1(1234567414)    Pool0  Z1N2LBJS        myfiler01-c1(1234567414)
0a.00.20   myfiler01-c1(1234567414)    Pool0  Z1N2L9H9        myfiler01-c1(1234567414)
0a.00.16   myfiler01-c1(1234567414)    Pool0  Z1N2LHKW        myfiler01-c1(1234567414)
0b.00.9    myfiler02-c1(1234567663)    Pool0  Z1N2LGQM        myfiler02-c1(1234567663)
0a.00.6    myfiler01-c1(1234567414)    Pool0  Z1N2L91R        myfiler01-c1(1234567414)
0a.00.2    myfiler01-c1(1234567414)    Pool0  Z1N2L1DV        myfiler01-c1(1234567414)
0b.00.5    myfiler02-c1(1234567663)    Pool0  Z1N2LGE3        myfiler02-c1(1234567663)
0b.00.11   myfiler02-c1(1234567663)    Pool0  Z1N2L8VH        myfiler02-c1(1234567663)
0b.00.7    myfiler02-c1(1234567663)    Pool0  Z1N2L92W        myfiler02-c1(1234567663)
0a.00.8    myfiler01-c1(1234567414)    Pool0  Z1N2L94C        myfiler01-c1(1234567414)

You then need to identify the spare disks on the head you want to move, release each one, and assign it to the other head:

disk assign diskid -s unowned -f
disk assign 0c.00.16 -o newfiler

Remember, once a disk is allocated to an aggregate you won't be able to move it to another filer. Shame your reseller / NetApp weren't more help at purchase.
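It's worth sanity-checking before and after the reassignment (again 7-mode, run on each head):

filer> disk show -n      (lists any unowned disks)
filer> aggr status -s    (lists the spare disks this head owns)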
1. Because it wants to keep one spare in case of a drive failure. 2. Unlike some other companies' solutions, the second controller isn't just a hot spare sat there doing sweet FA until things go wrong. With a typical NetApp HA pair you have two independent storage devices (heads), each capable of seeing the other's disks, and in a failover situation one can take over all the functions of the other, including its IP / DNS name / LUNs etc. You could allocate more disks to one head than the other, but you need a minimum of 4 disks per head (1 data, 2 parity for the dual-parity RAID-DP, and a spare). It's not really bad practice as such; it depends on your needs. If the desire is to have as large a single volume as possible then a 20 / 4 split could be done - see the rough numbers below.
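To put numbers on that, here's how 24 disks fall out under RAID-DP with one spare per head:

Even split: each head gets 12 disks = 9 data + 2 parity + 1 spare
20/4 split: head A gets 20 disks = 17 data + 2 parity + 1 spare; head B gets 4 disks = 1 data + 2 parity + 1 spare

So the 20/4 split gives you 17 data disks behind one head instead of 9, at the price of a second head that can hold little more than its root volume.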
Put simply: 1. No, a spare is assigned to a controller. 2. No, you need at least one spare per controller. 3. No. In reality you could probably get away with one spare per cluster, if you're in a position to quickly change the ownership of the spare to the other controller when that's where it's needed. If you had no spares you'd be getting error messages quite often. In theory you'd need to lose two disks in an aggregate before you'd lose data, but without a spare ready to go, the rebuild can't even start and the exposure time could be quite long. It's a risk - do you want to take it?
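If you do run with a single spare per cluster, moving it to where it's needed looks something like this (7-mode sketch; the disk ID and hostnames are just examples):

filer2> disk assign 0b.00.21 -s unowned -f
filer1> disk assign 0b.00.21 -o filer1

The nagging about low spares is governed by the raid.min_spare_count option, if memory serves.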
Hi Richard

There doesn't seem to be a specific document for that combination, but http://support.netapp.com/documentation/docweb/index.html?productID=30391 and similar explains what's required. I think you'll want to use the root vol on the new heads, for simplicity. Basically:

1. rename the root volume on the old controller
2. plug the old shelves in to the new controller
3. switch on
4. re-assign ownership of the disks
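Steps 1 and 4 look roughly like this (a sketch assuming 7-mode; disk reassign is run from maintenance mode on the new controller, and old_sysid is a placeholder):

oldfiler> vol rename vol0 vol0old

*> disk reassign -s old_sysid

Record the old controller's system ID (sysconfig -a) before you power it off.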
I'd say the PowerShell command is returning the raw date from the API in UTC - the filer's ONTAP date command returns the date in its configured timezone, which I'd guess is CEST.
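You can check what the filer believes from the console (7-mode):

filer> timezone    (prints the configured timezone)
filer> date        (prints the local time in that timezone)

If that differs from the PowerShell output by exactly your UTC offset, that confirms it.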
Generally on the back of a recent filer you'll have:

The IOIOI port - an RJ45 connection for the serial console (RS232); you should have a short cable to convert it to 9-pin D female
e0M - an RJ45 connection for management
e0a, e0b etc - normal RJ45 Ethernet ports (1Gb or less)
The wrench/spanner port - an RJ45 for ACP (management of newer shelves)
SAS ports - rectangular ports for newer shelves
Fibre Channel ports - for either older shelves or FC hosts
10Gb ports - normally rectangular ports for special short-run cables or fibre network connections

http://support.netapp.com/portal?_nfpb=true&_pageLabel=documentation should have all the info you need.
Hi

The command ifstat interface should help; interface can be a physical port (e0a, e1b etc) or the VIF name. Check whether you're using LACP as the trunk method on the VIF, and of course check the switch config.
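For example (7-mode; vif0 and e0a are placeholders for your names):

filer> vif status vif0    (trunk mode and the state of each member link)
filer> ifstat vif0        (traffic counters for the whole vif)
filer> ifstat e0a         (counters for one physical member)
filer> ifstat -z e0a      (zeroes the counters so you can watch fresh numbers)

If all links are up but traffic only ever leaves one port, look at the vif's load-balancing method and the port-channel config on the switch.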
Assuming there are no files open on it, you should be able to remove it without causing any problems. I'd do it in SnapDrive if I were you; you may need to use force disconnect. I've done this many times before without any issues.
Is the volume really full? Have you tried adjusting the fractional reserve (typically 100% for a LUN, i.e. a 100GB LUN requires a 200GB volume)? Setting it to thin provisioned? Enabling dedup (it does require some snap space), even if only temporarily? Deleting old snapshots, if any? Starting a snapmirror should have no impact on source LUNs: they remain online and the SM process is transparent to the LUN's client system. It should be possible to SM a vol with very little free space; it's just slower if there's a lot of change going on.
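Those options translate to console commands roughly like this (7-mode sketch; the volume, LUN and snapshot names are placeholders - and understand the implications of removing reservations before doing it to anything important):

filer> df -h /vol/yourvol                                    (see where the space has gone)
filer> vol options yourvol fractional_reserve 0              (shrink the overwrite reserve)
filer> lun set reservation /vol/yourvol/yourlun.lun disable  (thin provision the lun)
filer> snap list yourvol
filer> snap delete yourvol nightly.7
filer> sis on /vol/yourvol                                   (enable dedup)
filer> sis start -s /vol/yourvol                             (dedup the existing data)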
Hi

You may need to tell us a little bit more about your configuration, especially your fabric/fibre infrastructure and host configuration. Ideally with a NetApp cluster and FCP hosts you'd have two HBAs in the host, each connected to a separate fabric switch; each switch would then have a connection to each NetApp head. This provides redundancy in case of the failure of an HBA, a fabric switch or a filer. Assuming the LUN is on Filer1, in normal working the host will see 4 paths to it:

HBA1 to SW1 to Filer1 directly
HBA2 to SW2 to Filer1 directly
HBA1 to SW1 to Filer1 through the interconnect with Filer2
HBA2 to SW2 to Filer1 through the interconnect with Filer2

There are a few things that could be happening in your case:

1. Things are connected wrongly, or the fabric switches are not configured properly, and the only path the host is seeing to its LUN is via the filer interconnect.
2. Some traffic is going through the interconnect when the preferred path is busy, and you've nothing much to worry about.
3. The host system is using all the FC paths with equal priority; you may be able to change this behaviour, depending on the O/S and HBA driver.

A few commands to run on the filer to determine the situation. lun stats -o will give an output a bit like this:

/vol/yourvol/yourlun.lun (13 days, 23 hours, 46 minutes, 46 seconds)
Read (kbytes)   Write (kbytes)   Read Ops   Write Ops   Other Ops   QFulls   Partner Ops   Partner KBytes
2574820375      5211323117       9785526    208670967   112320      0        30619         16

Do a bit of maths: compare column 8 (Partner KBytes) to the sum of column 1 and column 2. If it's more than 25% I'd say you've a serious problem. lun stats -z resets the counters to zero.

Good luck
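Working through the sample output above: Partner KBytes is 16 against 2574820375 + 5211323117 = 7786143492 KB of total traffic - effectively 0%, so that LUN is using its direct paths. If Partner KBytes were instead, say, 2000000000, that would be about 26% and well worth investigating.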
Like c_morrall, I think DFS is about the only way you're going to be able to do this. It works quite well and makes things a lot more manageable. We've implemented something similar, although not at that sort of size: we got sick of a small minority of users filling up a shared area with junk (ripped music / personal photos) and stopping other people saving genuine work-related files. We map users a specific drive letter to what is the root of a DFS namespace; under there are DFS folders which point to separate volumes. I suspect fulfilling this requirement simply is going to be difficult with most systems; maybe the customer has written the requirement to specifically exclude most storage providers and they have a specific solution in mind (who?). Also I'd question how some of those OSes will react to seeing a 400TB volume. An out-there, crazy way of doing it could be to front-end it with a Windows server and have some large LUNs bonded together via software RAID.
If you're really not using the internal disks on the 2050, and I assume you've disks on the 3140 already holding its root volume, you would simply need to connect the disks to the 3140 and then change the ownership of the disks. Some reconfig on the Oracle server is likely, if only to tell it the new WWPN or FQDN of the filer holding its LUNs. In conclusion, yes it's possible, and based on what you've said it should be fairly straightforward. But I wouldn't want to be doing it without a NetApp or qualified 3rd-party engineer present. Good luck
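Once the shelves are across, a quick sanity check from the 3140 console (7-mode sketch):

filer> disk show -v         (every disk should now show the 3140 as owner)
filer> lun show -m          (the luns and the igroups they're mapped to)
filer> igroup show          (check the Oracle host's initiator WWPNs are still listed)
filer> fcp show adapter     (the new target WWPNs for re-zoning / repointing the host)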
What I think aborzenov is asking is: are your root volumes (vol0) on the disks in the internal slots of the 2050 itself, on the disks of the extra shelves, or spread across the two? If the internal disks contain any volume you want to keep, things are going to be complex, especially if any volume is spread across internal and external disks. You don't really need to keep the root vol, as it will be completely different for a 3140 versus a 2050. If every volume you want to keep is wholly on the external disks then you should be OK to just move the shelves across, assuming the 3140 already has some disks of its own which contain its own root vols. Any other scenario will involve a lot of work. I assume you've got the 3140 going spare rather than buying one new (it's an old model).
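The console will tell you exactly which disks each volume sits on, which answers the internal-versus-external question (7-mode):

filer> vol status -r vol0    (RAID layout of the root vol, disk by disk)
filer> aggr status -r        (the same for every aggregate)

The disk IDs in the output show which adapter/shelf/bay each disk lives in.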
Hi

Yes, that's pretty much it - we've done this numerous times. There's a couple of extra bits just to clarify:

3.1 Quiesce the snapmirror
3.2 Break the snapmirror
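For reference, those two steps run on the destination filer (7-mode; the volume name is a placeholder):

dstfiler> snapmirror quiesce dstvol
dstfiler> snapmirror break dstvol
dstfiler> snapmirror status     (the relationship should now show as broken-off)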
To unlock the files / determine who has them open, the simplest solution may be to point MS Computer Management at the filer - assuming the customer has its own Windows admin people who are not NetApp-skilled.
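If they'd rather do it from the filer, there are lock commands tucked away in advanced privilege (7-mode, from memory - be careful in advanced mode):

filer> priv set advanced
filer*> lock status -h     (lists CIFS locks by host)
filer*> priv set           (back to normal privilege)

In Computer Management it's under System Tools > Shared Folders > Open Files once you've connected to the filer by name.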
The MS Office applications get the details for "the file is locked by ..." from what has been set in the Options / username field, rather than from an O/S call to get the file-system lock owner. I expect your customer has done a PC image or application update.