And then, after everything was cabled, I opened the RCU plugin, added filer1 (IP 192.168.x.1) using the rcu user's credentials, but I was only able to use interfaces from vfiler0, i.e. vlan20 and e0, as on this screenshot: "netapp_rcu_config.png".
But later, when I try to deploy clones, I don't see any datastores ("netapp RCU.png").
I guess this is because this server is only in vlan20, but how do I make all the datastores visible to that tool?
Has anyone already used it together with vfilers?
The required ONTAP version is indeed 7.3.3, but that won't resolve the issue you're seeing. When you add vfiler0, the Provisioning and Cloning (P&C) capability will NOT present the interfaces that belong to vfilers, only those that exist on the physical filer. You cannot add new volumes to vfilers with this version, only to physical filers. We are adding support for provisioning through vfiler0 to the vfilers in the follow-on release slated for next month.
Thanks for the info, but I don't understand it. I don't want to create volumes or do anything with volumes; I simply want to deploy new servers with RCU. That's it. So you're saying I won't be able to use RCU to deploy VMs when I use vfilers?
Sorry for the confusion. Yes, you can deploy new servers with RCU, but you will only see interfaces that belong to the filer/vfiler you added. If the interfaces belong to vfilers, they will not appear in the list of available interfaces, and hence you will only see datastores mapped to the interfaces that do appear when selecting which ones to deploy to. If you want to see the datastores that are mapped to vfilers, you must add the vfilers themselves to RCU.
Thanks, but if I want to add vfilers to RCU, what should I do in my case? Should I then enable RSH access on the vfiler? I've included my "vfiler status -a" output, so I would be very happy if you could point me in the right direction.
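For context, this is how I've been checking things on the filer console so far. These are 7-Mode CLI commands; "vfiler100" is just an example name from my setup, so treat this only as a sketch:

```shell
# Run on the physical filer (vfiler0) console, 7-Mode CLI.
# Lists each vfiler, its IP addresses, and which protocols are allowed/disallowed:
vfiler status -a

# Print the current value of the rsh option inside one vfiler
# (vfiler100 is an example name):
vfiler run vfiler100 options rsh.enable
```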
OK, let me present a simple drawing of what I have now (an example).
So looking at this picture, I have, let's say, 3 racks. One rack contains ESX hosts from one customer; its storage traffic goes via vlan100 to the NetApp, and vlan100 is assigned to vfiler100. The second rack goes via vlan200 to the NetApp and is assigned to vfiler200. The STORAGE SWITCH in the picture serves a trunk port to a single VIF, so other racks connect to it as well. Before (when I didn't use RCU), vCenter was NOT connected to the NetApp at all, as there was no such need. But now I wanted to use it, so I physically connected it to the mgmt switch that interconnects these 2 filers (via vfiler0), and I thought that would be enough (but apparently it is not).
So before, I could, let's say, connect 1 NIC to the stor1 switch, set vlan100 on that NIC, assign an IP from the 172.x.x.x range, and connect to vfiler100's volumes, BUT NOT TO THE OTHERS (as each vfiler was assigned only one vlan, and the rsh protocol was disabled for non-mgmt vfilers). Of course that is to be expected, as you want to separate the vfilers, their traffic, and everything else.
But now I have no clue how to make RCU work with my setup. Should I enable RSH on each vfiler and put a vCenter NIC in vlan100, and then I will be able to add vfiler100 to RCU? But then, if I also wanted to add vfiler200 to RCU, I would need a second physical NIC connected to the stor2 switch with vlan200 set.
Is this correct? Does my explanation make more sense now?
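In other words, if I understand it right, I would end up repeating something like this for every vfiler/vlan pair. All names and addresses below are made-up examples matching my drawing, not tested, and I'm not sure this is exactly what RCU needs:

```shell
# Hypothetical per-rack pattern, 7-Mode CLI, run on the hosting filer's console.
# vfiler100 / vlan100, reached through a vCenter NIC placed in vlan100:
vfiler run vfiler100 options rsh.enable on
vfiler run vfiler100 options rsh.access host=172.16.100.50   # example vCenter IP in vlan100

# vfiler200 / vlan200, reached through a SECOND vCenter NIC placed in vlan200:
vfiler run vfiler200 options rsh.enable on
vfiler run vfiler200 options rsh.access host=172.16.200.50   # example vCenter IP in vlan200
```

And then I would add vfiler100 and vfiler200 as separate storage controllers in the RCU plugin.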
This setup is NOT supported in the current version of RCU. It will be supported in the follow-on release due out in August. In the current release, the only option you have is what you described in your questions. The VSC service must be able to access the vfilers via the mgmt switch.