I'm pretty sure in all cases each controller needs its own root volume.
On your second controller, you have to assign one or more disks using disk assign and create a root volume for it. Did you assign all of your disks to controller A? Run disk show -n to see whether you have any unowned disks, then use disk assign to claim them if any are available.
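If my memory of the 7-mode commands is right (check the man pages for your version), the sequence looks something like this; the disk name is just an example:

```
ControllerB> disk show -n        # list disks not owned by either controller
ControllerB> disk assign 0b.16   # take ownership of one unowned disk
ControllerB> disk assign all     # or claim every unowned disk at once
```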
The only way I know of to install a fresh ONTAP OS on a new set of disks is to boot into the special boot menu and choose option 4 (formerly 4a, I think, in ONTAP 7). This option initializes the disks attached to the system, creates a new root aggregate and volume, and starts the ONTAP setup. Just be very careful that there is no data you need to keep on any disk attached to that controller. It literally zeroes the disks and starts over. The disks attached to the partner controller should be untouched. I recently had to do this for a pair of 6280 controllers that shipped with no root volumes.
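From memory (the exact menu wording varies by release, so treat this as a rough sketch), getting there looks roughly like this:

```
Press Ctrl-C for Boot Menu
...
(4) Clean configuration and initialize all disks.
Selection? 4
Zero disks, reset config and install a new file system?: yes
This will erase all the data on the disks, are you sure?: yes
```

After the disks are zeroed, the system drops you into the normal ONTAP setup dialog.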
If you assigned all of your disks to one controller, you have to unassign them from that controller and assign them to the other.
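If I recall correctly, unassigning a disk takes advanced privilege in 7-mode; something like the following (the disk name is an example):

```
ControllerA> priv set advanced
ControllerA*> disk assign 0a.17 -s unowned -f   # release ownership of the disk
ControllerA*> priv set admin
ControllerB> disk assign 0a.17                  # claim it on the other head
```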
I'm not very familiar with the 270c controller, but after a little research it looks like you only have one port you can actually plug into a SAN fabric (port 0c). The other port (0b) is an initiator only. If what I'm reading is correct, you only have two FC ports total, which means you can only wire 0c to your Brocade switches.
My guess is that if the controller with the 0c FC adapter fails outright, you will still get the normal failover. But if it's a hardware (port) or other physical failure on that path, you'd be down. Personally, I don't see how it makes sense to have only one onboard port for fabric connections, so you may want to double-check your documentation just to be sure. My source is here:
You'd also have to make sure the port is configured as a target (it can be set as either a target or an initiator).
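If memory serves, in 7-mode you check and change the port personality with fcadmin (I believe the adapter has to be offline first, and a reboot is needed for the change to take effect):

```
Controller> fcadmin config               # show current target/initiator state
Controller> fcadmin config -d 0c         # take the adapter offline
Controller> fcadmin config -t target 0c  # set port 0c to target mode
```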
In any case, if you can only connect one port, then you can either plug it into switch A or B, but not both. The only way to get full redundancy on the client end is to put an ISL between your two switches.
If I am incorrect and you can connect it twice, then what you have below looks correct (assuming your ESX hosts have two HBAs, one connection to each fabric switch, and are properly set up for multipathing).
FC LUNs are always visible on BOTH cluster nodes at the same time. So even though your LUN physically resides on controller A, you can access it through controller B, which "forwards" the requests to controller A. If Controller A dies, controller B takes over and will become responsible for the LUNs.
Whether your fabric still works when one of the Brocades dies depends on the zoning. Since LUNs are always visible on both controllers at the same time, it will definitely work if the zoning is correct. This is called "single image" mode (which is now the only supported FC mode).
Check whether your ESX hosts see two paths to your LUNs. If so, you're safe. But make sure that you install the ESX Host Utilities or enable ALUA on your igroup.
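If I remember the 7-mode commands right, checking and enabling ALUA on an igroup looks something like this (the igroup name is an example):

```
Controller> igroup show -v esx_igroup       # look for the ALUA line in the output
Controller> igroup set esx_igroup alua yes  # turn on ALUA for this igroup
```

The ESX side should then show one set of paths as active/optimized (direct to the owning controller) and the rest as non-optimized.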