You can look at sysconfig -a output and see if any disks are listed as missing, bypassed, or failed. If you have a rough idea of when they failed, and it was within the last month or so, you can probably pull the oldest weekly_log ASUP and compare its sysconfig -r output against the current state. I'm not sure you can still get those disks replaced, though. If your root vol snapshots go back far enough, you can also check the rotated messages.0/.1/.2/.3/.4/.5 files in those snapshots to see exactly when the disks failed.
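If you want to script the messages scan, here's a minimal sketch. The snapshot path and the exact log wording are assumptions on my part (the text varies by ONTAP version), so treat the grep pattern as a starting point, not gospel:

```shell
#!/bin/sh
# Hypothetical helper: grep rotated messages files for disk failure events.
# 'disk.*fail' is a guess at a useful pattern; adjust to what your ONTAP
# version actually logs. The example path below is illustrative only.
scan_for_failed_disks() {
    logdir="$1"   # e.g. an /etc/log dir inside a mounted root-vol snapshot
    grep -ih 'disk.*fail' "$logdir"/messages* 2>/dev/null
}

# Example usage (path is an assumption, adjust to where you mount vol0):
# scan_for_failed_disks /mnt/root_vol/.snapshot/weekly.0/etc/log
```

Running it against each weekly snapshot in turn narrows down the week the failures happened.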
I don't know which version of ONTAP it's running, but if you look in /etc/log/autosupport you will probably find a directory for each ASUP that's been generated. I get that it's not sending ASUPs to NetApp, but if they're being generated they'll still be there, and some of them include sysconfig -r output, so digging through the old ones may turn up what you need. Failing that, you can go through sysconfig -a and check the disk IDs for gaps. For DS14-type shelves the IDs run in 14 consecutive numbers with 2 unused numbers between shelves:
ID 16-29 for shelf 1
ID 32-45 for shelf 2, and so on.
So you know which numbers you're supposed to be missing (30, 31, 46, 47, etc.). If you're missing an ID that should NOT be missing (43, for example), it's not a bad guess that it belongs to the missing/bypassed/failed disk. You may also need to do this from both nodes if this is an HA pair; sometimes one node reports a healthy status while the other doesn't.
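To make the ID-gap check less error-prone, the arithmetic above can be sketched like this. The 16-per-shelf numbering (14 disks plus a 2-ID gap) is taken straight from the pattern described, so verify it matches your own loop before trusting the output:

```shell
#!/bin/sh
# Sketch: print the expected DS14 disk IDs for a given shelf number,
# assuming the ID 16-29 / 32-45 / ... pattern described above
# (14 IDs per shelf, then a 2-ID gap). Shelf 1 starts at 16.
expected_ids() {
    shelf="$1"
    start=$((16 * shelf))     # shelf 1 -> 16, shelf 2 -> 32, ...
    end=$((start + 13))       # 14 consecutive IDs per shelf
    seq "$start" "$end"
}

# Compare this list against the disk IDs you actually see in sysconfig -a;
# any expected ID that isn't present is a candidate missing/failed disk.
# Example: expected_ids 1   -> prints 16 through 29
```

Run it once per shelf on both nodes of the HA pair, since their views can differ.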