
FabricPool functionality on OpenStack Swift

josephillips
4,441 Views

Hi

 

I'm working with FabricPool and OpenStack Swift with the S3 middleware. It is working great, but I am seeing a behavior that I would like to understand.

 

I have a volume configured with the snapshot-only tiering policy. At random times during the day I see the NetApp performing a lot of GET requests against the object storage.

 

My questions are:

1. Why is it performing so many GETs if the policy is snapshot-only and nobody is restoring snapshots?

2. If the object storage goes down and I have the snapshot-only policy, can that affect access to the data on the performance tier?

 

thanks

5 REPLIES

manistorage
4,386 Views

Can you share the output of the command below?

volume show-footprint -vserver vserver-name -volume vol-name

 

1. Why is it performing so many GETs if the policy is snapshot-only and nobody is restoring snapshots?

My guess: if the local tier is at more than 90% capacity, cold data is read directly from the cloud tier without being written back to the local tier. By preventing cold-data write-backs on heavily used local tiers, FabricPool preserves the local tier for active data.
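If you want to confirm that, it should be possible to check how full the aggregate is and how the volume is configured with something like the following (placeholder names, and double-check the field names on your ONTAP version):

storage aggregate show -aggregate aggr-name -fields percent-used
volume show -vserver vserver-name -volume vol-name -fields tiering-policy,tiering-minimum-cooling-days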

2. If the object storage goes down and I have the snapshot-only policy, can that affect access to the data on the performance tier?

-- If for any reason connectivity to the cloud is lost, the FabricPool local tier remains online, but applications receive an error message when attempting to read data from the cloud tier.

-- Cold blocks that exist exclusively on the cloud tier remain unavailable until connectivity is reestablished.
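If you need to check whether the cloud tier is currently reachable, the attachment state can be listed with something like this (output columns vary a bit by ONTAP release):

storage aggregate object-store show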

josephillips
4,379 Views


Vserver: svmTC-Store
Volume Name: vol_data
Volume MSID: 2163438862
Volume DSID: 1033
Vserver UUID: 3328e1b7-59f2-11e8-9d50-00a098d5bef2
Aggregate Name: aggr_Data00_TC_NetApp1210_01
Aggregate UUID: cc5224b9-ca9c-41f1-a62d-428b7188cdf6
Hostname: TC-NETAPP1210-01
Tape Backup Metadata Footprint: -
Tape Backup Metadata Footprint Percent: -
Deduplication Footprint: 23.29GB
Deduplication Footprint Percent: 0%
Temporary Deduplication Footprint: -
Temporary Deduplication Footprint Percent: -
Cross Volume Deduplication Footprint: 72.45MB
Cross Volume Deduplication Footprint Percent: 0%
Cross Volume Temporary Deduplication Footprint: -
Cross Volume Temporary Deduplication Footprint Percent: -
Volume Data Footprint: 4.42TB
Volume Data Footprint Percent: 41%
Flexible Volume Metadata Footprint: 30.98GB
Flexible Volume Metadata Footprint Percent: 0%
Delayed Free Blocks: 40.73GB
Delayed Free Blocks Percent: 0%
SnapMirror Destination Footprint: -
SnapMirror Destination Footprint Percent: -
Volume Guarantee: 0B
Volume Guarantee Percent: 0%
File Operation Metadata: -
File Operation Metadata Percent: -
Total Footprint: 4.52TB
Total Footprint Percent: 42%
Containing Aggregate Size: 10.73TB
Name for bin0: Performance Tier
Volume Footprint for bin0: 4.08TB
Volume Footprint bin0 Percent: 91%
Name for bin1: tm_swift_1210
Volume Footprint for bin1: 393.3GB
Volume Footprint bin1 Percent: 9%

manistorage
4,331 Views

Hi,

Only 9% of the data is in your S3 bucket:

 

Volume Footprint for bin1: 393.3GB
Volume Footprint bin1 Percent: 9%

 

Regards,

Mani

merdos
2,498 Views

Hey Jose Phillips, this is interesting! What was the S3 middleware that you mention above? Or is it part of Swift? And secondly, why not just use Swift directly? Thank you!

dbenadib
2,409 Views

Hi, actually the GET requests could also be due to object fragmentation, probably caused by snapshot rollout.

FabricPool does not delete blocks from attached object stores. Instead, FabricPool deletes entire objects after a certain percentage of the blocks in the object are no longer referenced by ONTAP.

For example, a 4MB object tiered to Amazon S3 contains 1,024 4KB blocks. If a client application deletes or writes to a file that has cold blocks in an object, those cold blocks become unreferenced, but they stay in the object.

This fragmentation slowly builds up until it crosses the unreclaimed space threshold; ONTAP then deletes the object and folds any remaining referenced blocks into a new object. Defragmentation and deletion do not occur until fewer than 205 of the 4KB blocks (20% of 1,024) are still referenced by ONTAP.

When enough (1,024) blocks have zero references, their original 4MB objects are deleted, and a new object is created.
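If that defragmentation traffic turns out to be the cause, the unreclaimed space threshold can be viewed and, if needed, tuned per attached object store. As far as I remember this is an advanced-privilege setting, so please verify the exact parameter names against TR-4598 and the ONTAP docs before changing anything:

set -privilege advanced
storage aggregate object-store show -fields unreclaimed-space-threshold
storage aggregate object-store modify -aggregate aggr-name -object-store-name store-name -unreclaimed-space-threshold 40%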

More details are available in TR-4598.

 

 
