One follow-up.
Are there any issues if I take the temporary volume offline and delete it after I have vol moved the parent volume to the other aggregate? Any impact on the clone left behind?
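For reference, this is roughly the sequence I have in mind (volume and SVM names are placeholders); I would first confirm that no clone still lists the temporary volume as its parent before deleting it:
::> volume clone show -vserver svm1 -parent-volume temp_vol
::> volume offline -vserver svm1 -volume temp_vol
::> volume delete -vserver svm1 -volume temp_vol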
Thanks!
Is there any difference between volume moving a clone and a regular volume?
I tried to volume move some clones to another aggregate. However, they have all been sitting at 98%, replicating, for a long time. It looks like they are not moving at all.
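For context, this is how I have been checking on them (volume and SVM names are placeholders):
::> volume move show -vserver svm1 -volume clone_vol -instance
::> volume clone show -vserver svm1 -flexclone clone_vol
My understanding is that moving a FlexClone splits it from its parent, so the move has to copy the full volume; I am not sure whether that explains the hang at 98%.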
Can anybody please shed light here? Thanks!
@aborzenkov
You are right. Unfortunately, we didn't do what you suggested. Now the throughput to the LIFs on two specific nodes is much heavier than on the others.
Question: How do I know if the throughput to these LIFs is too heavy and causing performance issues? Or how heavy is too heavy? There seems to be no way to tell the latency on a LIF.
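The best I have found so far is the per-LIF traffic counters, something like:
::> statistics lif show
That shows received/sent data per LIF, but nothing about latency, which is why I am asking.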
Your steps look good.
However, let me share one piece of information: the two new nodes have already been added to the cluster, so there are now 10 nodes, with new LIFs and new everything. Loads are already balanced across all 10 nodes.
Now I just need to remove the two old nodes. I can move all of their LIFs to the other nodes without interruption, but that will unbalance the load, and it will also leave all the old LIFs with the old naming convention (e.g., nfs-lif-node1, nfs-lif-node2, ...) in the cluster forever, even though node1 and node2 will be gone. That is why I am thinking of taking a downtime to remove all the old LIFs...
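Unless I could simply rename and re-home the old LIFs in place, something like this sketch (new node, port, and LIF names are placeholders):
::> network interface rename -vserver svm1 -lif nfs-lif-node1 -newname nfs-lif-node9
::> network interface modify -vserver svm1 -lif nfs-lif-node9 -home-node node9 -home-port e0d
::> network interface revert -vserver svm1 -lif nfs-lif-node9
That would let the old names disappear without unmounting anything, if it works.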
Make sense?
I have a total of 8 nodes in the cluster. I am planning on replacing two of them by adding two new nodes first, then taking the two old ones out. In the end, there will still be 8 nodes.
I understand I can move all the LIFs to the other two nodes (an HA pair) without interruption. However, as I said, moving the LIFs will also move all the connections along with them, including the NFS datastores and file systems, which will put a lot of load on those two nodes and unbalance the cluster.
I am thinking of manually unmounting the NFS exports connected to the two outgoing nodes and remounting them on the two new nodes. That will incur a service downtime.
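The alternative I keep coming back to is migrating each LIF straight to a new node instead, for example (node and port names are placeholders):
::> network interface migrate -vserver svm1 -lif nfs-lif-node1 -destination-node new-node1 -destination-port e0d
::> network interface modify -vserver svm1 -lif nfs-lif-node1 -home-node new-node1 -home-port e0d
That avoids the downtime, and the load lands on the new nodes instead of piling onto the remaining HA pair.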
Make sense?
dbenadib,
There was a concern with that approach: all network traffic would then go to the two nodes I move these LIFs to, which would cause an unbalanced load.
So, in the long run, it is better to clean them up and avoid confusion. Removing them will require a downtime.
Make sense?
We want to decommission two nodes and the LIFs homed on these nodes. The problem is that some VMware NFS datastores and other NFS file systems are using these LIFs/IPs. Is there any way to remove these LIFs/IPs non-disruptively?
My understanding is that we would need a downtime to remount the NFS datastores or file systems.
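To scope the impact, this is how I have been listing what is still homed on, and connected to, the outgoing nodes (node names are placeholders):
::> network interface show -home-node old-node1 -fields address,data-protocol
::> network connections active show -node old-node1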
Thanks!
Thanks for the message!
Follow-ups and explanations:
1. The reason I am thinking of using SnapCreator is the automation requirement. Can the SnapCenter plug-in for Oracle also let me schedule the process?
2. Is SnapCenter for Oracle the replacement for SnapManager for Oracle?
3. Last but not least, I am now thinking that I cannot use SnapCreator for DR, because the primary site would be gone in a DR event, whereas the clone DB would need to be set up on the DR site. So I cannot use SnapCreator in a DR scenario. Am I right?
I am thinking about using Snap Creator to create an Oracle DB from a snapshot-based clone on the remote site, after putting the database in hot backup mode.
Can I use this approach for DR, and why or why not?
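In ONTAP terms, what I picture on the remote site is just a FlexClone off a replicated snapshot, something like this sketch (the SVM, volume, and snapshot names are hypothetical):
::> volume clone create -vserver dr-svm -flexclone oradb_clone -parent-volume oradb_dr_mirror -parent-snapshot hotbackup_snap
Snap Creator would only be orchestrating the hot-backup mode, the snapshot, and this clone.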
Thanks for the clarification.
One more follow-up. Can you tell which LIF on the node the data will pass through? Is it the data LIFs or the admin LIF?
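To make the question concrete, this is how I am listing the candidate LIFs and their roles:
::> network interface show -fields role,home-node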
@AvivDeg
Given that the cold snapshots are currently in the cloud, are you saying that the vol move will migrate data from the cloud directly to StorageGRID? If so, I would assume the data path should at least pass through the performance tier, if not stay on the performance tier for a while, right?
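For what it is worth, this is the move I am planning to run (names are placeholders); my understanding is that the tiering policy on the destination can be set explicitly as part of the move:
::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr-sg -tiering-policy snapshot-only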
@AvivDeg
What if the aggregates are not attached to the same bucket?
As I said, the source aggregate is attached to AWS S3 and the destination is attached to StorageGRID.
Thanks,
There are two aggregates here:
one attached to AWS S3 (aggr-aws), and the other attached to StorageGRID (aggr-sg).
Suppose a volume in aggr-aws has the "snapshot-only" tiering policy enabled and its cold data is already in the cloud tier. When I "volume move" this volume to aggr-sg:
1. Will the cold data be pulled back to the performance tier first?
2. If yes, how long will the cold data stay in the performance tier before it gets moved to aggr-sg?
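This is how I plan to watch where the data actually sits before, during, and after the move (names are placeholders):
::> volume show-footprint -vserver svm1 -volume vol1
My understanding is that the footprint output breaks down how much of the volume is on the performance tier versus the capacity (cloud) tier.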
Can some experts please pinpoint the issue here? It looks like I already have a default gateway for the management LIFs of all SVMs. Please check the "route show" output (.18.x is the mgmt network).
I guess my question should be:
What should such a LIF look like, and how should it be created?
I already have a LIF with the "mgmt" firewall policy for each SVM, with this gateway:
0.0.0.0/0 10.192.82.1, as shown in my previous output. Was this gateway defined incorrectly?
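For reference, my understanding of how such an SVM management LIF is normally created is roughly the following (the address, port, and names here are placeholders, not my actual config):
::> network interface create -vserver svm1 -lif svm1-mgmt -role data -data-protocol none -firewall-policy mgmt -home-node node1 -home-port e0c -address 10.192.82.50 -netmask 255.255.255.0
Is anything missing or wrong in that form?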
::> route show
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
a
                    0.0.0.0/0       x.x.82.1        10
                    0.0.0.0/0       x.x.92.1        40
b
                    0.0.0.0/0       x.x.82.1        20
                    0.0.0.0/0       x.x.90.1        30
                    0.0.0.0/0       x.x.92.1        10
c
                    0.0.0.0/0       x.x.82.1        20
d
                    0.0.0.0/0       x.x.82.1        20
f
                    0.0.0.0/0       x.x.82.1        20
g
                    0.0.0.0/0       x.x.82.1        20
h
                    0.0.0.0/0       x.x.82.1        30
                    0.0.0.0/0       x.x.90.1        20
                    0.0.0.0/0       x.x.91.1        10
i
                    0.0.0.0/0       x.x.76.1        30
                    0.0.0.0/0       x.x.82.1        20
                    0.0.0.0/0       x.x.90.1        10
Again, it is complaining about only one node among the 4 nodes. My understanding is that there is no such thing as a management LIF for an SVM; only a node has the management role.
I am receiving a warning message in Active IQ, shown in the attachment, and only on one of the 4 nodes in the cluster.
What should the right management LIF and its gateway for the SVMs look like, and what is missing here?
I already have an "admin" LIF created for each SVM, with the role type "data". I don't know what other role types I should have here. Also, there is already a gateway configured for this LIF, as shown in "route show".
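This is what I am checking on each SVM (the SVM name is a placeholder):
::> network interface show -vserver svm1 -fields role,firewall-policy,address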
Thanks!
One more follow-up.
To implement NDAS, extra space will be needed for snapshots on the primary storage, since the backup method is snapshot based. Correct?
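In other words, I would expect to budget for the snapshot reserve plus the snapshots themselves, which I would check with something like (volume and SVM names are placeholders):
::> volume show -vserver svm1 -volume vol1 -fields percent-snapshot-space
::> volume snapshot show -vserver svm1 -volume vol1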
In the SSD world, does it make any difference to distinguish performance between:
sequential read, sequential write, random read, and random write?
I would think they should all be the same.
Thanks for your input.
Yes, there is only one IPspace, called "Default".
I assume that since there is only one node, the absence of a "Cluster" IPspace is fine.
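I confirmed with:
::> network ipspace show
and only the "Default" IPspace is listed.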
"node show" seems fine. I am okay with single node cluster.
The problem is that I could not ping any IP's on-premises from this CVO, though I can ping LIF's in CVO from on-premise. Any idea what could be the cause?
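Here is the kind of test I am running from the CVO side (the LIF name, SVM name, and target IP are placeholders):
::> network ping -lif cvo-data-lif -vserver svm1 -destination 10.0.0.10
::> network route show -vserver svm1
The ping fails in this direction, while pinging the CVO LIFs from on-premises works.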
I assume it is a single-node cluster, since there is only one node here.
But:
1. How can I confirm it? And if it is, then the output I pasted should not be an issue?
2. The problem is that I cannot ping any outside IPs from this CVO.
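For question 1, this is what I ran to check:
::> cluster show
I assume a single entry there, with health and eligibility both true, confirms it is a single-node cluster?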