
SM-SQL Configuration Sanity Check

dsimeone

Greetings All

 

I am a NetApp resident engineer currently in a test/POC phase with SM-SQL in a straight VMDK environment (no RDMs) backed by NetApp FC LUN datastores, on cDOT 8.3.1. SM-SQL v7.2.1 installed fine, along with SnapDrive (v7.1.3P1) underneath. My customer, although enthusiastic about SM-SQL, is adamant that it be configured in the following two ways, both in apparent direct conflict with the install guide (let alone with how SnapManager has worked for roughly 20 years):

 

1) He rails at the idea that the subject DB of a given server must be migrated to "other" NetApp disks when, arguably, the DB is already ON NetApp disk via the VMDK (in a datastore served by a NetApp LUN), as is the logs disk. He says he's certain there was some way to do this "migration in place," even though I see no such alternative documented (and BTW, this guy is a former NetApp SE!). His position is that management would kill him if he had to migrate every SQL DB in their environment to new disk. For example: source disks are D: for the DB and G: for logs; destination disks are K: for user DBs, L: for logs, and S: for SnapInfo. I think custy expects to somehow (in the Config Wizard) map source D: to destination D: and source G: to destination G:. Trying this just for grins, the Wizard immediately errors out on the D: drive, complaining that user DBs cannot be on the same disk as the system/master DBs and that it would have to fall back to stream-based restores (duh), so I'm stuck. However, I then hit upon a goofy idea: "Fine, then migrate the system/master DBs to the K: drive instead!" Then G: to G:, and finally SnapInfo to S:. So my resulting question is: will this even work, with SM-SQL functioning properly? Or is there some other similar trick? Or am I on crack?
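For what it's worth, before I argue the mapping either way, I want to see exactly which databases (system vs. user) currently share a drive, since that's what the Config Wizard is complaining about. Below is a minimal sketch of how I plan to check it, assuming Windows authentication against the local default instance (the server name, ODBC driver, and output formatting are my assumptions, not anything from SM-SQL):

import pyodbc

# Connect to the local SQL Server instance; adjust SERVER/driver for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"
    "Trusted_Connection=yes;"
)

# List every database file grouped by drive letter, so it's obvious which
# databases (master/system vs. user) would end up sharing a disk.
query = """
SELECT UPPER(LEFT(mf.physical_name, 2)) AS drive,
       DB_NAME(mf.database_id)          AS database_name,
       mf.type_desc                     AS file_type,   -- ROWS (data) or LOG
       mf.physical_name
FROM sys.master_files AS mf
ORDER BY drive, database_name, file_type;
"""

for drive, db, ftype, path in conn.execute(query):
    print(f"{drive}  {db:<30} {ftype:<6} {path}")

If that listing shows user DBs and system DBs landing on the same drive letter after the proposed shuffle, I expect the Wizard to throw the same error it did for D:.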

 

2) Related to (1) but somewhat separate: regardless of whether "migration in place" is possible, custy also rails at the idea of having to create separate VMDKs on separate datastores (on separate LUNs in separate volumes) for the destination disks, and instead thinks they can all go on the same existing datastore hosting the VM. This is in 100% direct contradiction with the doc (reference capture attached for the VMDK config from the SM-SQL 7.2 install guide), which clearly illustrates separate VMDKs in separate datastores within (as stated elsewhere in the doc) separate volumes. Yes, it's great that multiple DBs can fan into these three disks, but you still need to start with three new VMDKs on separate volumes. I *assume* this is because SM-SQL, with independent snapshots on the K:, L:, and S: volumes, expects to be able to SnapRestore each with abandon during a restore. But the reality in this scenario is that they'd all be on the SAME volume, because all the VMDKs are created in the same datastore, and as a result those snapshots would trip each other up and wreck the restore job (and maybe even the DB itself). However, this in turn assumes SM-SQL does volume-level SnapRestores rather than a more selective single-file SnapRestore that wouldn't affect the other contents of the volume. If a single-file SnapRestore is in fact done across all three VMDKs during the SM-SQL restore process, then in theory the three VMDKs (K:, L:, and S:) just maybe *could* peacefully coexist within the same volume. Or am I on crack again?
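If it helps to settle the single-datastore debate empirically, here is a minimal sketch (using pyVmomi; the vCenter address, credentials, and the VM name "SQLVM01" are placeholders I made up) that lists which datastore backs each VMDK on the SQL VM, so everyone can see whether the destination disks would really land on independent volumes or all share one:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab/POC only; use proper certs in prod
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")   # hypothetical VM name

    # Print each virtual disk and its backing file; the datastore name appears
    # in square brackets at the start of backing.fileName.
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            print(f"{dev.deviceInfo.label:<15} {dev.backing.fileName}")
finally:
    Disconnect(si)

If the K:, L:, and S: disks all report the same bracketed datastore name, they are on the same volume, and the install guide's argument about restore isolation would seem to apply.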

 

 

So that's it. The last thing is that my custy is an extremely sharp storage architect in his own right (NetApp and other vendors), so whatever answers I come back with, either way, must be rock-solid and defensible, with no uncertainty, or he will find holes and continue to pick the issue apart.

 
