Network and Storage Protocols

Igroup settings

deygaurab

Friends,

When I run igroup show -v on my storage controller (my environment is entirely FCP), all of my igroups show as logged in across both fabrics (I have 2 fabrics) with the vtic option enabled. One particular igroup, however, shows as logged in to both fabrics but does not show the vtic option. I am trying to understand the reason behind it, and whether it points to a host multipathing issue. For example:

IG_GLOQ11WP02 (FCP):

        OS Type: windows

        Member: 20:00:00:25:b5:20:00:2d (logged in on: vtic, 0b, 1b)  ------------------------vtic enabled

        Member: 20:00:00:25:b5:20:00:4d (logged in on: vtic, 1d, 0d)

        UUID: 12099b3c-b7a9-11e1-ad0c-00a098201667

IG_GLOQ11WP01 (FCP):

        OS Type: windows

        Member: 20:00:00:25:b5:20:00:1e (logged in on: 0b, 1b)----------------- NO vtic enabled.

        Member: 20:00:00:25:b5:20:00:0e (logged in on: 0d, 1d)

        UUID: 08d188dc-b63b-11e1-ad0c-00a098201667

Neither igroup has ALUA enabled.
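
For reference, these are the 7-Mode commands I am using to pull the login information (the fcp syntax is as I understand it, so correct me if it differs on your release):

    igroup show -v   ------------------ the per-igroup output above

    fcp show initiator -v   ------------------ which initiators are logged in on which target adapter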

My second query: I see a couple of Windows igroups whose names start with viaRPC (they look like duplicates). The igroup for the same host was created separately and shows up correctly, so my concern is why a separate viaRPC igroup appears at all. I would have suspected SnapDrive if the LUN allocations had been done by SnapDrive, but here the LUNs were allocated manually from the storage side.

Example:

    viaRPC.20:00:00:25:b5:20:00:3c.GLOQ11WP03 (FCP):   ------------------ shows logged in to one fabric only

        OS Type: windows

        Member: 20:00:00:25:b5:20:00:3c (logged in on: vtic, 0b, 1b)

        UUID: 45c84910-058c-11e2-b067-00a098201667

        ALUA: Yes

Whereas I have a good igroup for the same host, which I created myself. I have no idea where the viaRPC igroup above came from.

    IG_GLOQ11WP03 (FCP):

        OS Type: windows

        Member: 20:00:00:25:b5:20:00:3c (logged in on: vtic, 0b, 1b)

        Member: 20:00:00:25:b5:20:00:2c (logged in on: vtic, 0d, 1d)

        UUID: d8a32b26-b85a-11e1-ad0d-00a098201667

Would appreciate any suggestions here.

Cheers,

Rahul


cguldelmco

For the vtic issue, I would check the SAN zoning on the IG_GLOQ11WP01 client.  The client might not be zoned to the cluster partner.

For the viaRPC issue, the client might have logged into the filer before the igroup was created. Shut the client down, verify that the viaRPC igroup shows as not logged in, delete the viaRPC igroup, and boot the client.
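
A minimal sketch of that cleanup, assuming 7-Mode syntax and reusing the igroup name from your example (igroup destroy will refuse if LUNs are still mapped to it, so unmap those first):

    igroup show -v viaRPC.20:00:00:25:b5:20:00:3c.GLOQ11WP03   ------------------ confirm no member shows as logged in

    igroup destroy viaRPC.20:00:00:25:b5:20:00:3c.GLOQ11WP03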

MILESNABO

It's a SnapDrive-style igroup name, so I think you're right about that. SnapDrive would still create an igroup, if you selected automatic igroup management when connecting to the LUN, even if the LUN was already mapped at the storage. Although, if that process had completed, there would be another, separate igroup for the other fabric/HBA port.

Incidentally, since SnapDrive thinks you should have ALUA enabled, you might consider turning it on for your manually created igroup, depending on your setup.
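
If you do go that route, the 7-Mode command should be along these lines; whether the change is disruptive depends on the host MPIO stack, so verify for your setup first:

    igroup set IG_GLOQ11WP03 alua yes

    igroup show -v IG_GLOQ11WP03   ------------------ should now report "ALUA: Yes"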

deygaurab

Thanks a lot, guys. Regarding the zoning for IG_GLOQ11WP01: I verified it and it seems correct; the host is zoned to its cluster partner. On the other query, can I enable ALUA non-disruptively?

Cheers

Rahul

deygaurab

Though I enabled ALUA on the igroup today, prior to enabling it I saw that vtic was back in the igroup. I wonder if that is being controlled by host multipathing?

Where can I find the "automatic igroup management" option in SnapDrive? Can I uncheck the option, if I find it configured, and get rid of the duplicate igroups? Interestingly, I could see the igroup logged in from only one fabric.

For example,

viaRPC.20:00:00:25:b5:20:00:3c.GLOQ11WP03 (FCP):   ------------------ shows logged in to one fabric only

        OS Type: windows

        Member: 20:00:00:25:b5:20:00:3c (logged in on: vtic, 0b, 1b)

        UUID: 45c84910-058c-11e2-b067-00a098201667

        ALUA: Yes

Cheers,

deygaurab

In addition to this, I see the messages below appearing on the console. It looks like SnapDrive is managing the igroups, yet I do not have any scheduled snapshot creation by SnapDrive. Any idea why I could be seeing these messages on the console?

Thu Oct  4 13:05:37 CEST [spd_fas6210_pdt: lun.newLocation.offline:warning]: LUN /vol/sdw_cl_vol_X86_OS_PDT_0/Qisedei001_OS/isedei001_OS has been taken offline to prevent map conflicts after a copy or move operation.

Thu Oct  4 13:05:38 CEST [spd_fas6210_pdt: lun.map:info]: LUN /vol/sdw_cl_vol_X86_OS_PDT_0/Qisedei001_OS/isedei001_OS was mapped to initiator group viaRPC.20:00:00:25:b5:10:00:24.ISEDEI001=2

Thu Oct  4 13:05:38 CEST [spd_fas6210_pdt: lun.map:info]: LUN /vol/sdw_cl_vol_X86_OS_PDT_0/Qisedei001_OS/isedei001_OS was mapped to initiator group viaRPC.20:00:00:25:b5:10:00:34.ISEDEI001=2
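
For what it's worth, I can cross-check what those messages mapped where with (7-Mode):

    lun show -m   ------------------ lists every LUN with the igroup it is mapped to and the LUN ID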

AGUMADAVALLI

Run the command "lun config_check"; this looks like an issue with LUN pathing.

Download the NetApp Host Utilities kit to correct it, and enable ALUA too.
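
A minimal run, assuming 7-Mode syntax (-v gives the detailed per-igroup report):

    lun config_check -v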

thank you,

AK G

deygaurab

Hi AK,

Thanks for your response. The NetApp Host Utilities kit is installed on the host, yet the host is seeing only a single path to the storage, so maybe the configuration is not correct. ALUA is also enabled on the igroup now. I will check the LUN config and let you know.

Cheers

Rahul

deygaurab

I checked the LUN config and it didn't report anything; it just reported that ALUA is not enabled on the automatically created igroup "viaRPC.20:00:00:25:b5:20:00:1e.GLOQ11WP01=6":

wafl.volume.clone.created:info]: Volume clone sdw_cl_vol_AQU_DAT_0 of volume vol_AQU_DAT was created successfully.

lun.newLocation.offline:warning]: LUN /vol/sdw_cl_vol_AQU_DAT_0/Qgloq11wp01_Edrive/gloq11wp01_Edrive has been taken offline to prevent map conflicts after a copy or move operation.

lun.map:info]: LUN /vol/sdw_cl_vol_AQU_DAT_0/Qgloq11wp01_Edrive/gloq11wp01_Edrive was mapped to initiator group viaRPC.20:00:00:25:b5:20:00:0e.GLOQ11WP01=6

lun.map:info]: LUN /vol/sdw_cl_vol_AQU_DAT_0/Qgloq11wp01_Edrive/gloq11wp01_Edrive was mapped to initiator group viaRPC.20:00:00:25:b5:20:00:1e.GLOQ11WP01=6
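
Assuming the "=6" in those messages is the LUN ID rather than part of the igroup name, I take it that clearing the warning would just be (7-Mode syntax):

    igroup set viaRPC.20:00:00:25:b5:20:00:1e.GLOQ11WP01 alua yes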

Any idea what the problem with the host multipathing could be?
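
On the host side I will also dump the MPIO view; on Windows Server 2008 R2 that should be something like the following (the disk number is just an example):

    mpclaim -s -d   ------------------ list the MPIO-managed disks

    mpclaim -s -d 0   ------------------ show the paths for disk 0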

Cheers

Rahul
