
FCP Partner Path Misconfigured DOT 7.2.7

VGENTNER01

According to the information below (from NOW support), my understanding is that too much data is passing through the partner path over a given time interval, and this is what raises the "FCP Partner Path Misconfigured" error.

I found this example on the NOW support site:

In the example above, both Partner Ops and Partner Kbytes have exceeded the threshold in the given time interval. The hosts accessing the LUN in this way should be identified and the reasoning for the access evaluated. Possible solutions are to restrict access, or tune the host MPIO software so that it will not attempt access through the partner path.
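From what I understand, the way to identify those hosts is to trace each busy LUN back to its igroup and initiators on the filer. Something along these lines should work (I have not double-checked every option on 7.2.7, and the path and igroup name below are only placeholders):

    ntap1b> lun show -m /vol/<volume>/<lun>    (shows which igroup the LUN is mapped to)
    ntap1b> igroup show <igroup_name>          (lists the WWPNs of the initiators in that igroup)
    ntap1b> fcp show initiators                (shows which target ports those initiators are logged in on)

From the WWPNs you can then see which host is coming in over the partner's ports.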

This is my case:

ntap1b> lun stats -o

    /vol/v_cantv_locus01/locus01_lun  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        293587          714209          73409      12212      400        0       0            0

    /vol/v_cantv_cal2/cal2  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        319763533       1397782900      4010772    83631338   69515      0       43676937     857471434

    /vol/v_cantv_im2/im2  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        1892544         45049           139600     38108948   69584      0       19140109     968135

    /vol/v_cantv_sql2/qSQL2/datossql2.lun  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        1265039         13783330        59684      13482134   13268506   0       30           0

    /vol/v_cantv_store2/SUNWmsgsr  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        402207582       716717199       9405134    112489571  71174      0       61141001     561499542

    /vol/v_cantv_store2/p1  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        18091577812     128281206       194716995  43919849   69944      0       119099983    9111285619

    /vol/v_cantv_store2/p2  (460 days, 10 hours, 8 minutes, 53 seconds)
        Read (kbytes)   Write (kbytes)  Read Ops   Write Ops  Other Ops  QFulls  Partner Ops  Partner KBytes
        14845457565     201608392       214250578  58707694   69809      0
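If I read these counters right, on /vol/v_cantv_store2/p1 for example the partner path carries roughly half of the traffic: 119,099,983 Partner Ops against 194,716,995 + 43,919,849 = 238,636,844 combined read/write ops, and 9,111,285,619 Partner KBytes out of about 18.2 billion total KBytes. Several of the other LUNs show a similar ratio.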

I tried to correct it with the following options:

options lun.use_partner.cc.warn_limit 300
options lun.use_partner.cc.bytes 2457600

but these options do not exist in DOT 7.2.7.
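As far as I know, you can list the lun.* options that do exist on a release with:

    ntap1b> options lun

and on 7.2.7 these two are not in the list.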

Is there any way to solve this problem without having to upgrade DOT to 7.3?


2 REPLIES

aborzenkov

Setting these options would not fix anything, it would just hide the problem. You have to find out why data is accessed via non-preferred paths and fix it. It could be host misconfiguration or real problems with connectivity (bad cable, port, switch, adapter, ...).
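On the 7-mode console a first quick check could be something like this (only a sketch, adjust to your setup):

    ntap1b> fcp show adapters          (state and link status of the filer target adapters)

and then look at the error counters on the corresponding FC switch ports and on the host HBAs.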

ismopuuronen

Hello,

Running "lun stats -z" will zero those LUN stats; wait a couple of days and then check the output of "lun stats" again.
Then you get a better picture of which LUNs are causing those errors. I can see the information in your example is from the last 460 days, so it doesn't tell too much; a lot of changes may have been made in that time.
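For example (I think you need the -o flag again to get the Partner Ops / Partner KBytes columns):

    ntap1b> lun stats -z               (zero the per-LUN counters)
    ... wait a couple of days of normal production load ...
    ntap1b> lun stats -o               (check the partner counters again)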

When you find your LUNs, you need to fix the path settings on the client side.
Of course, if all the LUNs are going through the partner node, then there might be something else wrong as well.


As for why this happens, takeover/giveback can be one reason: during takeover all the paths go through one node, and when giveback is done, the paths may remain on the partner.

The client doesn't care about the "optimal" path; it's OK as long as it sees the LUN, but from NetApp's point of view the client is using the wrong path.
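A couple of things worth checking (exact syntax depends on the host OS and Host Utilities version, so treat this only as a sketch):

    ntap1b> cf status                  (make sure the pair is not currently in takeover)
    host#   sanlun lun show -p         (NetApp Host Utilities: shows primary vs. secondary paths per LUN)

If sanlun shows LUNs being accessed over the secondary (partner) paths, correct the preferred path in the host MPIO configuration.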

Br.
Ismo.
