ONTAP Discussions

Solaris EFI LUN misalignment

brunopad87

Hi there,

I'm having some trouble with misalignment on Solaris EFI LUN types with ZFS.

 

I'm following this paper and it states:

 

Using Solaris EFI labels on standard Solaris LUNs results in LUN misalignment. Misalignment happens when the starting sector of the LUN does not begin on a 4K boundary. Aligning LUNs on 4K boundaries provides for optimum performance by preventing partial writes and reads, which cause the storage system to work harder.

EFI labels occupy the first 34 sectors of a LUN. As a result, the first partition does not begin on a 4K boundary.

To avoid this problem, make sure that partition 0 is sized so that it ends at sector 39. This causes the remaining slices to start on 4K boundaries.

If you are creating a LUN on the storage system, you can specify the LUN type "solaris_efi" when you create the LUN.

Note: If LUN type "solaris_efi" is not available with your version of Data ONTAP, use LUN type "windows_gpt".

You can also manually format partition 0 so that it ends at sector 39. For more information, see the section "Solaris 10 UFS LUN Misalignment Problem" in Technical Report 3497, "ORACLE 10g PERFORMANCE – PROTOCOL COMPARISON ON SUN SOLARIS 10."

 

In format, when I label the disk as EFI, the label really does occupy the first 34 sectors. But how can I size partition 0 so that it ends at sector 39?

8 REPLIES

ekashpureff

Use the Solaris 'format' command's partition> modify sub-command, as described in the appendix (p. 15) of the Oracle performance paper referenced:

 

format -e
Specify disk (enter its number): enter the number corresponding to the disk you wish to partition
[disk formatted]
Disk not labeled. Label it now? no
format> partition
partition> modify
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1
Do you wish to continue creating a new partition
table based on above table[yes]? yes
Free Hog partition[6]? 6
Enter size of partition 0 [0b, 33e, 0mb, 0gb, 0tb]: 39e
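
The "39e" answer sizes partition 0 to end at sector 39, so the free-hog slice 6 begins at sector 40, and 40 x 512 B = 20,480 B = 5 x 4,096 B, a 4K boundary. As a quick sanity check, you can confirm that every slice starts on a multiple of 8 sectors (8 x 512 B = 4 KB); the device name here is a placeholder, substitute your own LUN:

prtvtoc /dev/rdsk/c0t0d0s0 | awk '!/^\*/ && NF { printf("slice %s starts at sector %s (%s)\n", $1, $4, ($4 % 8 == 0) ? "4K-aligned" : "MISALIGNED") }'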

 

I hope this response has been helpful to you.

 

At your service,

 

Eugene Kashpureff
NetAppU Instructor and Independent Consultant
(P.S. I appreciate points for helpful or correct answers.)

hdiseguros

Which LUN type should I use then?

We already tried that, and it didn't work.

ekashpureff

With Data ONTAP 7.3.4 Solaris EFI is a supported LUN type:

Multiprotocol type of LUN
    (solaris/windows/hpux/aix/linux/netware/vmware/windows_gpt/windows_2008/xen/hyper_v/solaris_efi)
    [linux]:

With earlier versions, the documentation says you can use windows_gpt and get correct alignment.
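
If your version offers it, a minimal creation command looks like this (size and path are placeholders):

lun create -s 100g -t solaris_efi /vol/solvol/lun0

On earlier releases, substitute -t windows_gpt.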

I hope this response has been helpful to you.

At your service,

Eugene Kashpureff
NetAppU Instructor and Independent Consultant
(P.S. I appreciate points for helpful or correct answers.)

hdiseguros

We are using ONTAP 7.3.1 and the LUN type solaris_efi is already available, but with both LUN types, solaris and solaris_efi, the access pattern after creating the zpool is still misaligned.

Even using the manual alignment methods shown in TR-6303 and other Oracle/Solaris documents, it stays misaligned.

We are losing hope on this problem; the only way out we can see is to go back to UFS.

hdiseguros

Here's how I configured it.

Current partition table (original):
Total disk sectors available: 12566494 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34       0.00MB         39
  1 unassigned    wm                 0          0              0
  2 unassigned    wm                 0          0              0
  3 unassigned    wm                 0          0              0
  4 unassigned    wm                 0          0              0
  5 unassigned    wm                 0          0              0
  6        usr    wm                40       5.99GB         12566493
  7 unassigned    wm                 0          0              0
  8   reserved    wm          12566495       8.00MB         12582878

Here's the test:

root@xxxx # pwd
/testezfs/teste
root@xxxx # ls -l
total 4096768
-rw------T   1 root     root     524288000 Nov  8 08:56 1
-rw------T   1 root     root     524288000 Nov  8 08:56 2
-rw------T   1 root     root     524288000 Nov  8 08:56 3
-rw------T   1 root     root     524288000 Nov  8 08:57 4
root@xxxx # rm 1 2 3 4
root@xxxx # mkfile 500m 1 2 3 4 5 6

And the output from the 'stats show lun' command:

lun:/vol/progressrepl/testzfs-W-Daho/3/lot:display_name:/vol/progressrepl/testzfs
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_ops:0/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_ops:875/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:other_ops:0/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_data:0b/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_data:113012224b/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:queue_full:0/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:avg_latency:37.27ms
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:total_ops:909/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:scsi_partner_ops:0/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:scsi_partner_data:0b/s
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.0:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.1:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.2:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.3:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.4:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.5:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.6:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_align_histo.7:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.0:26%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.1:8%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.2:4%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.3:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.4:19%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.5:8%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.6:5%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_align_histo.7:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:read_partial_blocks:0%
lun:/vol/progressrepl/testzfs-W-Daho/3/lot:write_partial_blocks:26%
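
Reading the histogram: bucket N counts operations whose starting offset sits N x 512 bytes past a 4K boundary, so a fully aligned workload puts close to 100% in bucket 0. Here the writes are spread across all the buckets and 26% are partial writes, even though slice 6 starts at the correctly aligned sector 40. That points at ZFS issuing writes at variable offsets rather than at the label or the partitioning.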

aborzenkov

It is probably not related to EFI. There was a recent thread about ZFS alignment issues (I don't have the reference, sorry, but a search should bring it up). It seems that ZFS uses a variable block size that is not controllable (or not quite controllable) by the user, so misalignment is inevitable.

To verify, try UFS on the same LUN and see whether it exhibits the same alignment pattern.

Of course, the real question is how much performance impact it has.

john_giordano

Hi,

Where to begin?  We are also experiencing this issue/bug/pain/whatever between our Oracle T3-1 server and our NetApp FAS2020, connected via Fibre Channel.

We have engaged both Oracle and NetApp support, but the question is still outstanding: the two companies can't agree on how to create the zpool.

Right now we have an Oracle 10 DB on the zpool, which we have to destroy and re-create in order to get proper I/O alignment.  As this system runs a critical NMS for our company, we have to be sure about what we are doing here.

The Oracle T3-1 server has been patched per Oracle so that it can communicate with the NetApp using a 4k block size.  Apparently, we have to edit our ssd.conf file, add the line:

ssd-config-list="NETAPP LUN","physical-block-size:4096";

and then reboot the T3-1.
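
As I understand it, that ssd.conf entry makes the driver report a 4K physical block size for NetApp LUNs, so a pool created afterwards should pick ashift=12 (4K) instead of ashift=9 (512 B). One way to check after the fact (zdb is a diagnostic tool, so its options vary by release):

zdb -C ntapcontroller1lun0 | grep ashift

An output of "ashift: 12" would mean ZFS is sizing and aligning its I/O in 4K multiples.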

What we are not sure about is the re-creation of the zpool after we copy the data off and destroy it.  Does anyone know for sure how to do this?  I have spent hours Googling, reading whitepapers, and talking to support people, and it still seems to be a grey area.

Oracle says we should do it like this:

zpool create ntapcontroller1lun0 c6t60A9800050336873575A587739787A76d0

That means it uses the whole disk and no slice. This is fine, as we don't need any other slices on here, and it also enables ZFS write caching, which sounds good to me. Done this way, slice 0 starts at sector 256 (256 x 512 B = 131,072 B, a clean 4K multiple).

His exact quote (and he seemed pretty savvy) was:

"Further ZFS will not use the write cache unless you pass a "whole disk" name

like c6t60A9800050336873575A587739787A76d0 to zpool create (note the

s6 has been removed). When passed a whole disk ZFS will create a slice 0

starting at sector 256 (a multiple of 8 sectors) that will preserve the

4KB zfs io alignment when it reaches the logical unit."
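
I suppose once the pool is rebuilt we can verify with prtvtoc that the slice 0 ZFS creates really does start at sector 256 (device name from the quote above):

prtvtoc /dev/rdsk/c6t60A9800050336873575A587739787A76d0s0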

-------------

NOW, NetApp seems to say that we need to manually create a slice, manually align it so it starts at sector 40, and then do some sort of dance.

I should note the LUN type is Solaris_EFI.

What do you guys think?

Thanks,

jg

S920A2042

I just went through this myself.  Apparently the answer is to install Host Utilities 6.1, run host_config again, reboot, and create your LUNs with multiprotocol type solaris (*not* solaris_efi).  All the zpools I've created with whole-LUN vdevs since doing those things have been perfectly aligned.  Note that this is for Data ONTAP 8.x; I'm still experimenting to see whether it works on 7.x as well.
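
For anyone retracing this, a minimal sketch of the sequence (LUN path, size, and disk name are placeholders):

# On the controller (7-Mode syntax):
lun create -s 100g -t solaris /vol/solvol/lun0

# On the Solaris host, after host_config and the reboot (whole disk, no slice suffix):
zpool create tank c0t0d0

Then check the ashift as described above to confirm the 4K alignment took.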
