Hi, just had a read through this thread. Are you running 7-mode or c-mode? The 2040 supports both.
Up to 8.1.4 c-mode and 8.1.4 7-mode.
Hi everyone,
Just interested to see what others are doing regarding QoS limits within NetApp and a virtualization stack such as VMware.
For example:
Implementing a QoS IOPS limit on a NetApp volume that happens to be a VMware datastore creates an IOPS ceiling; however, if that datastore holds 10 VMs, it only takes 1 VM consuming all the IOPS for the other 9 VMs to suffer. This option also allows for adaptive QoS, so as you expand the volume the QoS limit scales with it.
Implementing an IOPS limit within vSphere on the individual virtual machine disk, on the other hand, is far more granular and prevents a single VM from blowing out the volume QoS cap, although you can easily overprovision IOPS versus the maximum IOPS available in the NetApp system. This method would also need some sort of automation to go through each individual disk and check that the correct IOPS limit has been applied based on the storage tier the VM disk is sitting on; if the VM disk moves to a different storage tier, the automation script or tool would then adjust the disk's IOPS limit.
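For anyone wanting the volume-side piece, here's roughly what it looks like in the ONTAP CLI (the policy group, SVM and volume names below are just placeholders, and the adaptive version needs a release that supports adaptive QoS, 9.3 onwards from memory):
::> qos policy-group create -policy-group pg_datastore01 -vserver svm1 -max-throughput 5000iops
::> volume modify -vserver svm1 -volume datastore01 -qos-policy-group pg_datastore01
Or the adaptive version, which scales the cap as the volume grows:
::> qos adaptive-policy-group create -policy-group apg_datastore01 -vserver svm1 -expected-iops 1000IOPS/TB -peak-iops 5000IOPS/TB
::> volume modify -vserver svm1 -volume datastore01 -qos-adaptive-policy-group apg_datastore01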
Is this something that a lot of others are using, or do you do it differently?
Hi Kailas, you can do this in the CLI pretty easily, just make sure you do it on a vserver that you no longer need, as all settings will be destroyed and deleted!!
For example vserver name = myvserver
::> volume offline -vserver myvserver -volume myvserver_rootvol
::> volume delete -vserver myvserver -volume myvserver_rootvol
::> vserver delete -vserver myvserver
You can then go back into the GUI, refresh your SVM list, and it should be gone.
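If you want to double-check from the CLI (using the example name above), this should return no entries once the SVM has been deleted:
::> vserver show -vserver myvserver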
Hi there, are you able to confirm that the disks have been partitioned? i.e. with ::> disk show and ::> node run <node name> disk show
Do you have any data on this system currently? If not, you can blow it away and unown all the disks within the partition menu. The system will then own the disks (odd-numbered disks go to one node and even-numbered disks go to the other node), partition the disks, zero them and install the OS. This gives you root and data partitions on each disk, making the root aggregate quite small as opposed to using the full disk size.
I've documented the full procedure here: https://www.sysadmintutorials.com/netapp-ontap-9-configure-advanced-drive-partitioning/
WARNING: Make sure the system is completely empty, i.e. there is no data on it, or any data currently on it is no longer needed, as everything will be destroyed!!
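If you just want to confirm the current layout before doing anything destructive, a couple of quick checks (partitioned disks show up with a container type of "shared", and the spare report lists the root and data partition sizes):
::> storage disk show -fields owner,container-type
::> storage aggregate show-spare-disks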
FYI we are on 9.1P13 and we still see these alerts come through, even though lif connections are successful and we are not experiencing any problems with cifs.
Yes, that's correct, it will partition the new shelf but keep the root partition size the same, which in your case is 55GB. If you want to get the partition size down, you'll need to remove everything - volumes, aggregates, partitions and disk ownership. With the 2 shelves plugged in you can then re-initialize node 1 (keep node 2 at the boot loader until node 1 is finished); it will take half the disks, own them and partition them with the smaller root size. Then you can do the same with node 2. Are you familiar with the process to remove everything and re-partition?
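Roughly, the clean-up looks like this (the volume, aggregate and disk names are placeholders, and the exact boot menu option numbers can vary between releases, so treat it as a sketch rather than a runbook):
::> volume offline -vserver <svm> -volume <vol>      (then volume delete, for every data volume)
::> storage aggregate delete -aggregate <data_aggr>
::> set -privilege advanced
::*> storage disk removeowner -disk <disk>           (repeat for each disk)
Then halt both nodes, boot node 1 to the boot menu and use the ADP/initialize options there (the option 9 submenu, or option 4 to clean and initialize), keeping node 2 at the LOADER prompt until node 1 has finished.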
Depends 🙂 Is this an existing system with ADP already? If it has ADP and you wish to re-configure the disk layout, then everything will be removed: volumes, aggregates, partitions, etc. If it has ADP and you just wish to re-initialize, meaning remove all volumes, aggregates and data, then you can simply perform a re-initialize from the boot menu. If you are not using ADP and you wish to do so, then you need to remove all volumes, aggregates and data, and remove ownership from all the drives.
@robinpeter we actually had a customer that had a chassis failure in their 2000 series last week. It took the whole storage system down, affecting 500+ staff, with a 5-6 hour turnaround for parts and an engineer to replace the chassis. I had never heard of this before, but unfortunately these really bad situations do happen 😞
@xiawiz I doubt you will fit 20 loops in any system. Also, you wouldn't run RAID-TEC with SSDs; it's more for the larger SATA drives, in which case you would be looking at the 8200 series.
I also had the same issue, with tickets open with NetApp and VMware. Basically it came down to the failover time between controllers. Sometimes the failover was quick (in which case we didn't experience any APD); other times it was slightly slower (depending on how busy the controller was at the time), which led to an APD. Basically there was no guarantee of how fast the failover would occur; it really comes down to how busy the controller is. You can see the failover times for individual protocols with ::> event log show -event *nfs* (after a failover). Out of interest, what controllers are you running, and what's their utilization like?
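If you get a maintenance window, a rough way to compare timings (the node name is a placeholder) is a planned takeover/giveback while watching the event log:
::> storage failover takeover -ofnode node2
::> storage failover show
::> event log show -event *nfs*
::> storage failover giveback -ofnode node2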
Hi, sorry, I think I wrote that reply late at night 🙂 If you use the command ::> statistics top file show -sort-key total_ops -max 100 you can then copy that table to Excel and use the sort feature in there to sort by volume.
Hi Achou, is there just one type of server OS mounting the NFS export? i.e. if it's VMware, I would use PowerCLI to search through my hosts and see which ESXi hosts have that namespace/volume mounted. I think using statistics top file show will be your best bet, however you need to specify -max 10 (change 10 to the number of hosts you have in your environment; I think the maximum for this value is 100). The -sort-key switch will sort on volume: ::> statistics top file show -sort-key volume -max 10
Hi Midi, first, when the disk is inserted into the shelf it is unassigned. You then need to assign the disk to the node that you wish to take ownership; it is then a spare disk. When you add a disk to the root aggregate of the node that owns it, that disk will be partitioned into P1, P2 and P3. However, before you add the disk, please make sure you understand how your disks are laid out amongst the nodes, especially taking into consideration my earlier point about the 48-disk partitioning limit, 24 per node. Once the disk is added to the aggregate and partitioned, you cannot simply remove it if you make a mistake. If you are unsure, it's best to log a support case and let them guide you through it.
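For reference, the assign-and-add sequence looks roughly like this (the disk and aggregate names are just placeholders for your environment):
::> storage disk assign -disk 1.0.23 -owner node1
::> storage aggregate show-spare-disks -owner-name node1
::> storage aggregate add-disks -aggregate aggr0_node1 -disklist 1.0.23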
Hi Midi, you can definitely add disks, however you have to be aware of some maximums: a max of 48 drives can be partitioned, 24 per node. How are your drives assigned to your nodes?
Hi, I've been testing some root-data-data partitioning setups with ONTAP 9.1 and wanted to share my results with a few comments, and also to get some feedback from the community.

First test scenario - Full system initialization with 1 disk shelf (ID: 0)
The system splits the ownership of the drives and partitions evenly amongst the 2 nodes in the HA pair:
Disks 0 - 11: partitions 1 and 3 are assigned to node 1 and partition 2 is assigned to node 2
Disks 12 - 23: partitions 2 and 3 are assigned to node 2 and partition 1 is assigned to node 1
Creating a new aggregate on node 1 with a RAID group size of 23 gives me:
RG 0 - 21 x data and 2 x parity
1 x data spare
One root partition is roughly 55GB on a 3.8TB SSD.
Benefits of this setup: root and data aggregates spread their load amongst both nodes.
Cons: a single disk failure affects both nodes' data aggregates. It could possibly be better to re-assign partition ownership so that disks 0 - 11 are owned by node 1 and disks 12 - 23 are owned by node 2?

Second test scenario - Full system initialization with 2 disk shelves (ID: 0 and ID: 10) - Example 1
The system splits the ownership of the drives between both shelves with the following assignments:
Node 1 owns all disks and partitions (0 - 23) in shelf 1
Node 2 owns all disks and partitions (0 - 23) in shelf 2
Creating a new aggregate on node 1 with a RAID group size of 23 gives me:
RG 0 - 21 x data and 2 x parity
RG 1 - 21 x data and 2 x parity
2 x data spares
One root partition is roughly 22GB on a 3.8TB SSD.
The maximum number of partitioned disks you can have in a system is 48, so with the 2 shelves we are at maximum capacity for partitioned disks. For the next shelf, we will need to utilize the full disk size in new aggregates.
Benefits of this setup: in the case of a single disk failure or a shelf failure, only 1 node/aggregate would be affected.
Cons: a single node's root and data aggregate workload is pinned to 1 shelf. It's possible to reassign disks so that 1 partition is owned by the partner node, which allows you to split the aggregate workload between shelves, however in the case of a disk or shelf failure both aggregates would then be affected.

Third test scenario - Full system initialization with 2 disk shelves (ID: 0 and ID: 10) - Example 2
In this example, I re-initialized the system with only 1 disk shelf connected. Disk auto-assignment was as follows:
Shelf 1, disks 0 - 11: partitions 1 and 3 are assigned to node 1 and partition 2 is assigned to node 2
Shelf 1, disks 12 - 23: partitions 2 and 3 are assigned to node 2 and partition 1 is assigned to node 1
I then completed the cluster setup wizard and connected the 2nd disk shelf. The system split the disk ownership for shelf 2 in the following way:
Disks 0 - 11 owned by node 1
Disks 12 - 23 owned by node 2
Next, I added disks 0 - 11 to the node 1 root aggregate and disks 12 - 23 to the node 2 root aggregate. This partitioned the disks and assigned ownership of the partitions the same as shelf 1. Because the system was initialized with only 1 shelf connected, it created the root partitions at 55GB as opposed to 22GB in my second test scenario above. What this means is that a 55GB root partition is used across both shelves as opposed to 22GB.
How much space do you actually save when using 3.8TB SSDs?
55GB x 42 (data disks) = 2,310GB
22GB x 42 (data disks) = 924GB
Difference = 1,386GB or 40%
Benefits of this setup: load distribution amongst shelves 1 and 2.
Cons: larger root partition; a single disk or shelf failure affects both aggregates.

Fourth test scenario - Full system initialization with 2 disk shelves (ID: 0 and ID: 10) - Example 3
Following on from my third test scenario, I re-assigned the partitions so that partitions on disks 0 - 11 are owned by node 1 and partitions on disks 12 - 23 are owned by node 2.
Benefits of this setup: a single disk failure only affects 1 node's root and data aggregate; equal load distribution amongst the shelves.
Cons: larger root partition; a shelf failure will affect both nodes.

Interested to hear feedback on the above setups, which ones do you prefer and why? Also feel free to add additional comments or setups that are not listed above.
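If anyone wants to check what root partition size their system ended up with, the spare report shows the root and data partition sizes per disk (from memory the columns are "Local Data Usable" and "Local Root Usable"):
::> storage aggregate show-spare-disks -owner-name <node>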
I had a support ticket open alongside these posts which pointed me in the right direction. I finally got to the bottom of it, and hopefully this helps out some people:
From the factory, or after a complete system re-initialization, a node will own all partitions of a disk: P1, P2 and P3. If you want to set it up like in the documentation, where you assign P2 to the opposite node, you have to do the following:
1. Make sure the disk is listed as a spare: ::> storage aggregate show-spare-disks
2. Enter advanced mode: ::> priv set adv
3. Assign the P2 data partition to the opposite node, in this case Node2: ::> storage disk assign -disk 2.0.0 -owner Node2 -data2 true -force true (use -data1 if you wish to reassign the data1 partition)
FYI - On a 3.8TB SSD you will see around 1.74TB per data partition.
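To verify the reassignment took effect, there's a partition ownership view that shows the root/data1/data2 owners per disk (parameter name from memory, so double-check on your release):
::> storage disk show -partition-ownership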
Hi Dirk,
After a re-initialization of the system with 1 shelf, the disk assignment is as follows:
Disk slots 0 - 11 assigned to node 2
Disk slots 12 - 23 assigned to node 1
Disks 0 - 11: partitions 1, 2, 3 are assigned to node 2 (partitions 1 and 2 being data and partition 3 being root)
Disks 12 - 23: partitions 1, 2, 3 are assigned to node 1 (partitions 1 and 2 being data and partition 3 being root)
According to the "Understanding root-data partitioning" documentation: "Root-data-data partitioning creates one small partition as the root partition and two larger, equally sized partitions for data as shown in the following illustration. Creating two data partitions enables the same solid-state drive (SSD) to be shared between two nodes and two aggregates."
After initialization of the system, the 2 data partitions of a disk are not shared amongst the nodes. If I remove ownership of 1 partition and try to assign it to the opposite node, the system does not allow me:
Node1> disk show -n
DISK         OWNER
------------ -------------
0b.00.23P2   Not Owned
Node1> disk assign 0b.00.23P2
disk assign: Cannot assign "0b.00.23P2" from this node. Try the command again on node Node2
Is it best practice for all 3 partitions (root+data+data) to be assigned to the same node, or would you assign 1 data partition to the HA partner, or doesn't it matter? FYI - I'm enquiring about A300s running 9.1.
Hi Andris, the KB article seems to talk about advanced disk partitioning with 2 partitions, not the newer enhanced ADP 3-partition (root-data-data) layout. I've been looking for more documentation on the 3-partition layout but can't find any links or sections in the 9.1 documentation. Would you have a KB or documentation link for it?