Oracle redo log writes are not guaranteed to be aligned; this is discussed in this Oracle whitepaper:
Since all database writes in Oracle are aligned on 4K boundaries (as long as the default block size is at
least 4K), using flash for database tablespaces should never result in slower performance due to
misaligned writes. Redo writes, however, are only guaranteed to fall on 512-byte boundaries [5]. Redo
writes also have a quasi-random access pattern when async I/O is employed. These two properties
contribute to performance degradations for some workloads. Data illustrating this is shown in Table
6.6.
[5] This has been changed in Oracle 11gR2, with the addition of a 'BLOCKSIZE' option to the
'ALTER DATABASE ADD LOGFILE' command. This option, which is (as of October 2010) not
available for Oracle on any version of Solaris, guarantees that redo writes will be a multiple of
BLOCKSIZE, and thus aligned on BLOCKSIZE boundaries. The availability of this option will likely
change the stated conclusions about flash-based redo logs.
Perhaps on your Oracle version (or after updating to one that has this option) you can make your redo writes aligned.
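If the option is available to you, a minimal sketch looks like this (the group number, file path, and size here are hypothetical placeholders; check the exact syntax against your Oracle version's SQL reference):

  -- Hypothetical group number, path, and size; valid BLOCKSIZE values are 512, 1024, and 4096.
  ALTER DATABASE ADD LOGFILE GROUP 5
    ('/oradata/ooperfbau/redo_g5_m1.log') SIZE 512M BLOCKSIZE 4096;

  -- Confirm the redo block size per group (the BLOCKSIZE column exists from 11gR2 on):
  SELECT GROUP#, BLOCKSIZE FROM V$LOG;

One caveat: on storage that reports 512-byte sectors, I believe Oracle will reject BLOCKSIZE 4096 unless the sector-size check is overridden, so try this on a scratch database first.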
Also of interest: starting with Data ONTAP 7.3.5.1 (but not yet in the 8.x family), the NFS stats (use command 'nfsstat -d') have been extended to categorize I/O by offset and to show the NFS files with the most misaligned I/Os:
Misaligned Read request stats
BIN-0       BIN-1  BIN-2  BIN-3  BIN-4  BIN-5  BIN-6  BIN-7
1474405719  47648  5472   5331   3192   3192   2843   2080
Misaligned Write request stats
BIN-0      BIN-1    BIN-2    BIN-3    BIN-4    BIN-5    BIN-6    BIN-7
302208520  6965622  6541184  6586093  6532810  6558522  6563999  6570036
...
Files Causing Misaligned IO's
[Counter=285899], Filename=DKOOP_E02/DKOOP_E02_NAS1/ooperfbau/ooperfbau_g1_m1.redo
[Counter=323208], Filename=DKOOP_E02/DKOOP_E02_NAS1/oradata2/ooperfbau/ooperfbau_g3_m2.redo
[Counter=257224], Filename=DKOOP_E02/DKOOP_E02_NAS1/oradata2/ooperfbau/ooperfbau_g4_m2.redo
[Counter=319141], Filename=DKOOP_E02/DKOOP_E02_NAS1/ooperfbau/ooperfbau_g3_m1.redo
[Counter=283732], Filename=DKOOP_E02/DKOOP_E02_NAS1/ooperfbau/ooperfbau_g2_m1.redo
[Counter=259950], Filename=DKOOP_E02/DKOOP_E02_NAS1/ooperfbau/ooperfbau_g4_m1.redo
[Counter=280414], Filename=DKOOP_E02/DKOOP_E02_NAS1/oradata2/ooperfbau/ooperfbau_g2_m2.redo
[Counter=288506], Filename=DKOOP_E02/DKOOP_E02_NAS1/oradata2/ooperfbau/ooperfbau_g1_m2.redo
[Counter=605], Filename=DKOOP_E02/DKOOP_E02_NAS1/oraarch/ooperfbau/ooperfbau_1_742556409_0000001756.arch
[Counter=20], Filename=DKOOP_E02/DKOOP_E02_NAS1/oraarch/ooperfbau/ooperfbau_1_742556409_0000001842.arch
[Counter=315170], Filename=DKOOP_E02/DKOOP_E02_NAS1/oradata2/ooperfbau/ooperfbau_g1_m2.redo
[Counter=173], Filename=DKOOP_E02/DKOOP_E02_NAS1/oraarch/ooperfbau/ooperfbau_1_742556409_0000001817.arch
[Counter=304773], Filename=DKOOP_E02/DKOOP_E02_NAS1/ooperfbau/ooperfbau_g2_m1.redo
Info on how to interpret the stats is in the man page; it's quite helpful to use this technique to understand (a) how much misaligned I/O is occurring and (b) which NFS files receive the most misaligned I/Os.
In the output above we can conclude that 87% of our NFS 4K-increment writes are aligned and 13% are misaligned, and checking the file list we see that by far the biggest culprits are the redo logs. Without looking at pw.over_limit (I don't have wafl_susp output for the snippet above) I can't say whether reducing these misaligned writes would have much positive effect, but in any case you can use the technique above to better understand the workload arriving at the system and where to focus if needed.
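For reference, here is how that 87% falls out of the write bins above, assuming (per my reading of the man page) that BIN-n counts requests whose offset lands n x 512 bytes into a 4K block, so BIN-0 is aligned and BIN-1 through BIN-7 are misaligned:

  aligned    = BIN-0               = 302,208,520
  misaligned = BIN-1 + ... + BIN-7 =  46,318,266
  total                            = 348,526,786
  aligned %  = 302,208,520 / 348,526,786 ≈ 87%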
Cheers,
Chris