In our StorageGRID grid there are 8 nodes: four SG5712 and four SG5760, all in the same storage pool. Two buckets have been created, one as a FabricPool target and the other as a Commvault backup destination, and EC 2+1 is used for both buckets. In recent days we noticed that one of the SG5712 nodes is reaching 93% used while the others are around 57-70%; the SG5760 nodes are at 30% or a little less. How does StorageGRID balance writes? Why is one node approaching 100% faster than the others?
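As background for the question, a minimal sketch of what EC 2+1 implies for raw capacity: every object is split into two data fragments plus one parity fragment, placed on three different nodes, for a 1.5x storage overhead. (This only illustrates the erasure-coding arithmetic; the actual StorageGRID placement logic that decides *which* nodes receive fragments is more involved than this.)

```python
def ec_stored_bytes(object_bytes: int, data: int = 2, parity: int = 1) -> int:
    """Total raw bytes written for one object under a data+parity EC scheme."""
    fragment = -(-object_bytes // data)      # ceil division: size of each data fragment
    return fragment * (data + parity)        # data fragments + parity fragments, each on its own node

# A 1 GiB object under EC 2+1 is stored as three ~512 MiB fragments,
# consuming 1.5 GiB of raw capacity in total.
one_gib = 1 << 30
print(ec_stored_bytes(one_gib) / one_gib)    # 1.5
```

Because each object needs three nodes, a node that is already fuller than its peers still keeps receiving fragments, which is one reason usage can diverge between nodes of different sizes.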
I have been searching, googling, and checking Hardware Universe (HWU), but cannot find it. A bucket consists of a FlexGroup, which in turn consists of one or more FlexVol volumes. For a FAS8300 running ONTAP 9.14.1, the maximum volume size is 300 TB and the maximum object size is 16 TB. But what is the maximum size of one bucket?
Hi Experts, per StorageGRID TR-4889, when deploying StorageGRID on a bare-metal or virtual host platform there are the following limits per Storage Node:
− Maximum number of object store volumes per Storage Node: 16
− Maximum object store volume size: 36 TiB (39 TB)
That means the maximum capacity per node is 16 volumes × 39 TB = 624 TB. Does anyone have experience deploying higher capacity per node than this limit? Rgds, Jackie
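A quick check of the per-node arithmetic from the TR-4889 figures quoted above (the TR rounds 36 TiB down to 39 TB, which is where 624 TB comes from; the exact conversion gives slightly more):

```python
TIB = 1024 ** 4                                  # one tebibyte in bytes

# 16 object store volumes of 36 TiB each, converted to decimal TB
exact_tb = 16 * 36 * TIB / 1e12
print(f"{exact_tb:.0f} TB")                      # ≈633 TB exact; 16 × 39 TB = 624 TB as rounded in the TR
```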
Is this supported? Or must I first upgrade to 11.7? I looked in the IMT and read the Release Notes for 11.8, but could not find any info on whether this is possible.
Is the default rule applied to ingested objects regardless, or only when they are not matched by any filtering rule? See my attachments for an example: one rule makes 2 copies on one site for objects smaller than 1 MB, and another rule specifies an erasure-coding scheme for objects larger than 1 MB. Both rules also filter on tenant and bucket name. So my question is: are 2 copies made on all nodes in addition to what the filtering rules do, or only when no filtering rule matches, i.e., when the tenant/bucket is not specified?