Got an answer from support. There is a BURT, and as you said, it is fixed in Ontap 8.1.1: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=512686
It should be like that; unfortunately the documentation talks about the "aggr option root" step. I have contacted my support (IBM) and am awaiting their answer. I will follow their directions and thus avoid problems with support should anything go wrong and I need help. Also, I want a running system and want to avoid any unnecessary disruption. IBM has never shown any flexibility in its support, so I have to be extra careful about following the documented instructions.
I do "love" these documentation omissions I have found just the last two days alone.. I suppose that I have to do without this feature then.
Any clue why I would get this with Ontap 8.1p1? My system is a MetroCluster with mirrored aggregates, and the volume is deduplicated (not compressed yet). Why isn't this supported, and why an UNKNOWN error?

netapp> vol options /vol/software read_realloc space_optimized
vol options: UNKNOWN error reason: 263 (CR_VOLUME_IS_MIRRORED)
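For anyone else hitting this: the reason string suggests the space_optimized flag is rejected because the volume sits in a SyncMirror-mirrored aggregate. That is my reading, not an official statement. You can confirm the mirror state with:

    netapp> aggr status -v

and I would assume that plain read reallocation, without the space-optimized variant, is still accepted:

    netapp> vol options /vol/software read_realloc on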
I'm going to contact support about this. I'm fairly sure that I moved the root volume some 3 years ago (with the vol root option only), but I had to follow the instructions since this is a new version of Ontap. As the docs seem to be incorrect (for normal usage), they should be adjusted so that others don't trip over the same problem as me, especially as a lot of systems are going 64-bit-only from here on and many of them will have to move the root volume.
The old one was still marked as root, and the new one as diskroot. I didn't dare to try and see what would happen. This is how it looked after the new volume on the 64-bit aggregate was marked as root:

netapp-i> aggr status
           Aggr State           Status                Options
             VM online          raid_dp, aggr         root, raidsize=14
                                mirrored
                                32-bit
          VM_64 online          raid_dp, aggr         diskroot, raidsize=14
                                mirrored
                                64-bit

netapp-i> aggr options VM_64 root
aggr options: option 'root' can be modified only in maintenance mode
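If anyone else lands here before support answers: my assumption, based on the error message, is that the option has to be set from maintenance mode, roughly like this (untested on my side, so verify with support first):

    netapp-i> halt
    (interrupt the boot and choose option 5, Maintenance mode boot, from the boot menu)
    *> aggr options VM_64 root
    *> halt
    (boot normally)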
Sure, if the documentation claimed that. The problem is that it clearly states in step 4 (before rebooting in step 5) that I have to set the aggregate as root when moving the root volume to another aggregate:

"If you moved the root volume outside the current root aggregate, enter the following command to change the value of the aggregate root option so that the aggregate containing the root volume becomes the root aggregate:

    aggr options aggr_name root

aggr_name is the name of the new root aggregate. For more information about the aggregate root option, see the na_aggr(1) man page."
I tried moving volumes from a 32-bit to a 64-bit aggregate with "vol move", and the system complained that the aggregates aren't homogeneous. I know that both TRs on DataMotion for Volumes mention that, but as this constraint is NOT listed as a restriction in the Ontap 8.1 "Block Access Management Guide", I assumed it would work with the newest release. To my surprise it didn't. Is it a documentation omission, or did I misunderstand something?
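For reference, what I ran was essentially this (the volume name is an example):

    netapp> vol move start vol_data VM_64

and it was rejected with the complaint that the source and destination aggregates are not homogeneous.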
I just upgraded our controllers to Ontap 8.1p1. As we have one 64-bit aggregate, I thought it would be an excellent moment to move the root vol over to 64-bit and continue converting all aggregates to 64-bit. The problem is that when I followed the "Changing the root volume" instructions in the Ontap 8.1 System Administration Guide, everything was OK until I was about to change which aggregate is the root. When I tried to assign the 64-bit aggregate as root, the system complained that I had to do that in maintenance mode. This is definitely NOT documented or mentioned, so I stopped there; I had neither service time left nor the nerve to experiment. Can anyone tell me what I missed here?
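For clarity, the sequence I followed boils down to something like this (names and size are examples):

    netapp> vol create newroot VM_64 250g
    netapp> ndmpcopy /vol/vol0 /vol/newroot
    netapp> vol options newroot root
    netapp> aggr options VM_64 root
    aggr options: option 'root' can be modified only in maintenance mode

Everything up to the last command worked as documented.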
I just created my first 64-bit aggregate with System Manager 2.0R1 on our Ontap 8.0.1 system. It partially failed; among other things, it said it could not set the snap schedule. Unfortunately, the aggregate creation itself did succeed, and the Ontap CLI immediately complained that the result is not recommended. I ended up with a 64-bit aggregate with a default aggregate snap schedule set and the aggregate option nosnap=off.

I researched some, found the following Netapp KB, and followed it: https://kb.netapp.com/support/index?page=content&id=2011977 Unfortunately I didn't set the aggregate nosnap option, so Ontap continued complaining despite the "off schedule" being set.

One more thing I noticed is that System Manager didn't seem to care how many spares would be left after creating the aggregate. I got to the last "continue" wizard page and stopped there, as I didn't have time to test whether the last click would bring up any kind of warning/error about not having any spares left after creating the aggregate. Then again, it might have given an error at that last step, when actually trying to create the aggregate. Since Ontap still lacks the (long-missed) ability to remove disks from an aggregate, I view the spares question as very serious: the problem might not be noticed until it is too late and the aggregate is already in production. And there seems to be a serious bug related to aggregate snapshots on syncmirrored aggregates: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=162634
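For anyone following along, the full combination from the KB, including the nosnap option I initially missed, is, as far as I can tell (the aggregate name is an example):

    netapp> snap sched -A aggr64 0 0 0
    netapp> snap delete -A -a aggr64
    netapp> aggr options aggr64 nosnap on

The first command removes the schedule, the second deletes any aggregate snapshots already taken, and the third is the option I had missed.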
Yes, I was creating a ”SAN” volume and got the no_i2p option set. I tested with a “NAS” volume, and then no_i2p is not set, just as you developers said. I would argue that I don’t care how System Manager 1.0/1.1 behaved; whether 1.0/1.1 did the right thing would be the question. I’m only interested in the current software setting the correct values. Also, I noticed that “NAS” volumes get “guarantee=none” when thin provisioning is selected. I was told (a couple of years ago by a Netapp consultant) that “guarantee=file” is preferred in a CIFS/NFS environment. From your in-the-field techies: is “none” or “file” the preferable setting when a NAS volume is created? Thanks Dejan
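For reference, the two settings in question can be set per volume (the volume name is an example):

    netapp> vol options nas_vol guarantee none
    netapp> vol options nas_vol guarantee file

My question is simply which of the two your field people would set on a freshly created NAS volume.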
Well, neither "vol create" nor Filerview creates volumes with no_i2p on by default. That is why I assumed there is something with System Manager 2.0 default options. But I can move the discussion if you still think it has to do with some Ontap defaults.
Is there any particular reason why newly created volumes on an Ontap 8.0.1 system get no_i2p set to on by default? Supposedly i2p helps optimize some operations, so I see no reason to disable it.
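For now I'm clearing the option manually on the affected volumes, which as far as I know is safe to toggle (the volume name is an example):

    netapp> vol options software no_i2p off

but I'd rather understand why it gets set in the first place.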
Hi. You have layers of dependency here:
a) The VM with its operating system
b) Vmware vSphere
c) Netapp

When you added the data in the VM and deduped, you had your "starting position". Deleting data in the VM didn't actually remove the data; it most probably just changed a number of pointers in the filesystem structure within the VM. So if you dedupe again, you will end up with about the same amount of space taken on the storage, because the data within the VM is essentially the same as before the delete.

If you want to test it, wipe the unused space within the VM with any program that writes zeroes to the unused blocks. Rerun dedupe, and chances are that your VM's occupied space will be even smaller than before you started the tests. And Netapp will tell you that the volume is suddenly less full, all thanks to dedupe. Note that writing to all the unused blocks defeats Vmware's thin provisioning, and the disk will expand to full size, but dedupe will then remove the wiped blocks. As a hint, Vmware Tools has an option called "prepare to shrink", but it is unfortunately only enabled if your virtual machine is thick provisioned to start with. You could Storage vMotion your VM to let it expand, run the Vmware Tools "prepare to shrink", and then Storage vMotion it back while enabling thin provisioning.

The second layer is Vmware. So far it doesn't pass through information about deleted files or blocks from the guest OS, so it is only Vmware thin provisioning that helps. I'm not sure exactly about Vmware's criteria for which blocks are unused, but so far it has never failed me by removing blocks with valid data. If it is a simple "all zeroes" algorithm, it could use NFS "sparse file" capabilities or something similar in VMFS nowadays. It would be nice if Vmware soon enabled the "unmap" capabilities of modern OSes (think TRIM with SSDs), where deleting files would unmap the previously occupied blocks within the VM and hint to vSphere that those blocks are available for other use. No need to zero out anything (until the next time the blocks are used, but thin provisioning is forced to do that anyway); it would just work, and the storage would have less to dedupe on the next run. vSphere 5 has the UNMAP capability with VMFS on supported systems (Ontap 8.0.1+, I think), but it is applied only when deleting whole files from the storage, not parts or a few blocks at a time. That lets your thin provisioned LUNs expand as data is added and shrink when files are deleted from the VMFS LUNs(!), in a way mimicking NFS's dynamic provisioning capability.

The third layer is the storage and its thin provisioning capability with dedupe and possibly compression. But I suppose you are familiar enough with that part.
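If you want a concrete starting point for the wiping step, a minimal sketch for a Linux guest would be (the file name is just an example; Windows guests have tools like sdelete for the same purpose):

    # fill all free space with zeroes (dd stops when the filesystem is full)
    dd if=/dev/zero of=/zerofill bs=1M
    # remove the fill file again and flush to disk
    rm /zerofill
    sync

Remember that this temporarily fills the guest filesystem to 100%, so don't do it while applications are writing.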
TR-3749 is already out of date, despite the fact that it was updated as late as September 2011. That is one month after the public release of vSphere 5, and months after the paper launch of the new Vmware. One would expect Netapp to have access to the new vSphere 5 ahead of release and have the paper ready as soon as the product is released. Instead the paper still refers to the previous release's (vSphere 4.1) features and limitations. What am I missing in the paper, you may ask:
*) Updated VMFS5 limitations and possibilities (like the VAAI UNMAP function and large LUN support)
*) Updated VAAI functions, requirements (on Netapp/Ontap systems) and recommendations
*) Best practices for the number of LUNs, given the above improvements
*) Datastore clusters, best practices and recommendations (LUN layouts vs. aggregates etc.)
*) SIOC best practices vs. datastore clusters, the number of LUNs and how they are distributed across aggregates
*) How to use Vmware VASA with Netapp systems
*) Whether Vmware VASA will give Vmware hints like "these LUNs/NFS volumes are on the same disk/aggregate" and adjust SIOC accordingly
*) Netapp SSD vs. vSphere SSD support, best practices
Just my thoughts about how to improve the document.
I think I have been lucky, as I haven't seen any of these timeouts yet. Probably a matter of time. I would have liked to get Netapp's (or in my case IBM's) official stand on this. It seems I will have to open a case.
Hi. I'm testing the new features in vSphere 5 / VMFS 5. I created a 400 GB VM on a newly created thin provisioned LUN and then deleted the VM. It worked as expected, and the LUN went back to occupying almost nothing. This behavior is one of the big reasons I have been sticking to NFS so far. For reasons I won't go into, we will probably switch to FC/VMFS for a while, possibly FCoE later on. Now I've noticed that especially the EMC camp has been shooting flares about timeout problems with the UNMAP feature, and Vmware has a KB describing how to disable it. So my question is: is the VAAI-accelerated delete (UNMAP) safe on Ontap 8.0.1, or do we have to go down the same route as the others and disable the feature until further notice?
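In case anyone needs it, the knob Vmware documents for turning automatic UNMAP off on an ESXi 5.0 host is, if I read their KB correctly:

    esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete

I would rather keep it enabled on Ontap, hence the question.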
Well, we did have the problem. More people commented that it should work, but unfortunately the setup was initially done by a Netapp consultant (not a partner consultant, but a Netapp techie), and this is our production environment, so I can't touch it very much to resolve the problem. Anyway, we went for neither the upgrade nor the VNX. Instead we are looking at cloud solutions for file storage, and at DAS for our Exchange 2010. Only virtualization will be left once MSSQL 2012, with DAS support for availability clusters, is released. Then we will have another look at whether the N-series is worth the maintenance and expansion cost.
Well, we are running Ontap 8.0.1 on our MetroCluster. I guess this would be another reason to upgrade to Ontap 8.1 once it is out as GA for our N-series. You didn't say whether the plugin is part of VSC, the Host Package, or something new.
Hi. I can't figure out where to find the NFS plugin needed to get NFS acceleration in vSphere 5. It doesn't seem to be part of the Virtual Storage Console 2.1.1. I see OnCommand™ Host Package 1.1 with Vmware support, but it seems to overlap VSC, and I can't tell whether this is what I need. Is it part of a product I need? Is it free? As we are running Vmware with NFS in our production environment, I would like to enable the plugin on one ESXi host in the environment (pre-production) but outside the production cluster, or one ESXi host at a time. Is it possible to phase in the plugin, or is it system-wide within the same vCenter when I install it?
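From what I can tell, a VAAI plugin like this ships as a host-level VIB, which would make a per-host phase-in possible; the install would then look something like this on each ESXi 5 host (the file name and path are my assumptions):

    esxcli software vib install -d file:///tmp/NetAppNasPlugin.zip

followed by a host reboot. But I'd still like confirmation of which product actually contains it.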
While I do understand the fact that compressed data never touches PAM, I would like to know why this kind of design decision was made as a general policy for every volume with compressed data.
Is there any particular reason for this behavior? Compressed data might be frequently accessed, and keeping compressed, or even better, rehydrated (uncompressed), data in the Flash Cache could prove to give a significant performance boost with the Flash Cache card. I feel that the PAM behavior should be user-controlled, as you already let users do today on a per-volume basis (right?). Anyway, I just noticed that the Ontap 8.1 RC documentation claims that compression is compatible with, among other things (Storage Management Guide, pages 246-247): Performance Acceleration Module or Flash Cache cards.
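For context, the per-volume control I'm referring to is, as I understand it, the FlexShare cache policy (the volume name is an example):

    netapp> priority on
    netapp> priority set volume software cache=keep

Something equivalent for compressed blocks is what I'm asking for.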
OK. Now I did find some news I hadn't heard before: Ontap 8.1 removes the dedupe volume size limit, and compression can be done as a post-process (as dedupe is today), suddenly making the features even more useful. From tr-3958.pdf:

"Maximum volume size: For Data ONTAP 8.1, compression and deduplication do not impose a limit on the maximum volume size supported; therefore, the maximum volume limit is determined by the type of storage system regardless of whether deduplication or compression is enabled."

Still, it's not clear whether dedupe searches for duplicates within the volume only or across the whole aggregate, given that the working set for dedupe has now moved to the aggregate.
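To make the post-process part concrete, my reading of the TR is that enabling compression without the inline penalty would look something like this on 8.1 (the volume name is an example; check the TR for the exact flags for compressing pre-existing data):

    netapp> sis on /vol/software
    netapp> sis config -C true -I false /vol/software
    netapp> sis start -s /vol/software

The open question about volume vs. aggregate scope remains.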