We were able to successfully get our asset management team access to the OCI MySQL database. They have a copy of the schema so they can look things up. However, they are having difficulty finding a 1:1 match between the column headers shown in the Java client and the tables/columns in the schema.
Here's an example: in Java, they were trying to look at capacity data (Storage Arrays / Storage Pools).
Change localhost in your connection string to the IP/hostname of your DWH server.
The Java client is not necessarily showing information that will match the OCI DWH.
#1 - The Java client is near real time; the DWH shows you how the world looked at the time of the last ETL.
#2 - The Java client may be doing some math on top of the raw values.
That isn't to say you can't get to where you want to be.
Because I am simple, I am going to try to answer your questions for the dwh_inventory mart. The inventory mart gets wiped out and recreated every night, so you can do no historical trending on it. To do trending, you will want to use the storage and storage pool marts in dwh_capacity.
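As a rough sketch only: the exact fact/dimension table names in dwh_capacity vary by OCI version, so treat the names below (storage_pool_capacity_fact, date_dimension, dateId) as hypothetical placeholders to verify against your own schema before using.

```sql
-- Hypothetical trending query: daily storage pool capacity over the last 30 days.
-- Table and column names are placeholders; check them against your dwh_capacity schema.
SELECT d.fullDate,
       f.storagePoolId,
       f.totalAllocatedCapacityMB / 1024 AS capacityGB,
       f.totalUsedCapacityMB / 1024      AS usedGB
FROM dwh_capacity.storage_pool_capacity_fact AS f
JOIN dwh_capacity.date_dimension AS d ON f.dateId = d.id
WHERE d.fullDate >= CURDATE() - INTERVAL 30 DAY
ORDER BY d.fullDate, f.storagePoolId;
```

The point is simply that dwh_capacity keeps one row per pool per ETL run, keyed to a date dimension, which is what makes trending possible there and not in dwh_inventory.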
You are on the right path for the first 2.
Over committed capacity is a tricky topic:
In OCI, you are generally going to have a storage pool - a volume or internal volume *has* to sit on a storage pool.
You can perform thin provisioning at the volume level - Pure storage is such a platform - thin provisioned block volumes living on a storage pool.
You can perform thin provisioning at the internal volume level - Ontap, Celerra / VNX for File
You can even do both with Ontap - thin LUNs (OCI volumes) sitting on thin FlexVols (OCI internal volumes) sitting on a storage pool.
So, you may want to determine internally how your organization wants to look at over commitment risk.
100% block-only environments with OCI likely have no internal volumes whatsoever - so the natural behavior is to sum volume sizes by the storage pool they sit on, and calculate overcommitment risk from that.
Ontap environments tend to sum the capacity of internal volumes and compare that to the storage pool numbers.
It is also possible in SQL to do some very complex things - for example, determining, storage pool by storage pool, which platform you are on, and using either the sum of the volumes or the sum of the internal volumes depending on the vendor/manufacturer of the array that owns the storage pool.
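A sketch of that per-vendor approach, with loud caveats: I am assuming here that dwh_inventory has an internal_volume table that can be related back to storage_pool, and the manufacturer strings in the CASE are placeholders - check both against your actual schema and data.

```sql
-- Sketch: choose the "committed" number per pool based on the array vendor.
-- ASSUMPTIONS: dwh_inventory.internal_volume exists and carries a storagePoolId;
-- the manufacturer values below are illustrative only. Verify before use.
SELECT sp.id,
       sp.name,
       sto.manufacturer,
       CASE
         WHEN sto.manufacturer IN ('NetApp', 'EMC')      -- internal-volume platforms (adjust this list)
           THEN (SELECT SUM(iv.capacityMB) / 1024
                 FROM dwh_inventory.internal_volume iv
                 WHERE iv.storagePoolId = sp.id)
         ELSE (SELECT SUM(vol.capacityMB) / 1024
               FROM dwh_inventory.volume_in_storage_pool vnsp
               JOIN dwh_inventory.volume vol ON vnsp.volumeId = vol.id
               WHERE vnsp.storagePoolId = sp.id)
       END AS committedGB
FROM dwh_inventory.storage_pool sp
JOIN dwh_inventory.storage sto ON sp.storageId = sto.id;
```

The CASE is doing exactly what the text describes: per pool, pick the internal-volume sum for platforms that overcommit at that layer, and the volume sum for everything else.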
For storage pool free capacity, I think totalAllocatedCapacityMB - totalUsedCapacityMB is what you are looking for, but right now I don't have the cleanest environment to check against.
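If that formula is right, it would look something like this (dividing by 1024 for GB, same column naming as the bigger query below):

```sql
-- Free capacity per storage pool, in GB, assuming allocated minus used is correct.
SELECT name,
       (totalAllocatedCapacityMB - totalUsedCapacityMB) / 1024 AS freeCapacityGB
FROM dwh_inventory.storage_pool;
```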
Yeah, we had already looked at the documentation. I had pulled a copy of the docs into a PDF so it was something that could be shared and searched. We couldn't find the headers we were looking for in that documentation. If I'm reading it right, the Java headers are not in the database, and that's what we were really wondering about. The issue is that the group gathering the data is not familiar with storage; they are report writers. So they were looking for some commonality between what we can see in Java and what they need to look for in the DB schema to pull the same data for their reports.
It sounds like they were on the right track with the dwh_capacity DB and storage_pool tables. It also sounds like they won't be able to collect over-committed capacity like the Java GUI can.
This is a pretty quick take at trying to model via SQL what some of the OCI UI does. I have been spot checking against the Web UI's "Over Committed Capacity %".
Again, what I am doing here is only looking at volumes, which is the WRONG approach for an environment where I am worried about Ontap, Celerra, etc. over-commitment by *internal volume* rather than by volume. To do this in a more sophisticated fashion, I would look at the types of arrays in the "dwh_inventory.storage" table and filter out the ones that use internal volumes.
If you take Committed GB from this output and divide by Storage Pool Capacity GB, you should see the Web UI's "Over Committed Capacity %".
SELECT sp.name AS 'Storage Pool Name',
       sto.name AS 'Storage Name',
       sp.totalUsedCapacityMB / 1024 AS 'Total Used GB',
       sp.totalAllocatedCapacityMB / 1024 AS 'Storage Pool Capacity GB',
       SUM(vol.capacityMB) / 1024 AS 'Committed GB'
FROM dwh_inventory.storage_pool AS sp
LEFT JOIN dwh_inventory.volume_in_storage_pool volNsp ON sp.id = volNsp.storagePoolId
LEFT JOIN dwh_inventory.volume vol ON volNsp.volumeId = vol.id
LEFT JOIN dwh_inventory.storage sto ON sp.storageId = sto.id
GROUP BY sp.id, sto.id
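If you would rather have the percentage computed in SQL instead of dividing by hand, you could fold the division into the same query - same tables and joins as above, and the same volume-only caveat applies:

```sql
-- Over Committed Capacity % per pool: Committed GB / Storage Pool Capacity GB * 100.
-- Volume-based only, so the internal-volume caveat above still applies.
SELECT sp.name AS 'Storage Pool Name',
       sto.name AS 'Storage Name',
       ROUND(SUM(vol.capacityMB) / sp.totalAllocatedCapacityMB * 100, 1)
           AS 'Over Committed Capacity %'
FROM dwh_inventory.storage_pool sp
LEFT JOIN dwh_inventory.volume_in_storage_pool volNsp ON sp.id = volNsp.storagePoolId
LEFT JOIN dwh_inventory.volume vol ON volNsp.volumeId = vol.id
LEFT JOIN dwh_inventory.storage sto ON sp.storageId = sto.id
GROUP BY sp.id, sto.id;
```

Note the MB units cancel in the ratio, so there is no /1024 needed here.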