I am running a 6040 pair as a gateway with IBM XIV behind it, on Data ONTAP 7.3.4. I needed to move CIFS and NFS data, with no downtime, from LUNs on the old XIV to a new XIV. I created a new plex with LUNs/volumes from the new XIV and used SyncMirror to sync the data, then split the SyncMirror (leaving the old aggregate behind under a new aggregate name), and everything worked great.

I then needed to reclaim the space on the old XIV for SAN use. I took the volumes in the old aggregate offline and ran that way for a week. When I was ready to finalize the move, I destroyed the old volumes in the original aggregate, took that aggregate offline, and then destroyed the old aggregate. I had assumed (stupid me) that since the CIFS and NFS shares were in use on the new aggregate created by the SyncMirror split, on the new XIV, the CIFS and NFS would be OK. Unfortunately, destroying the old volumes deleted my CIFS shares. We do not have NFS export auto-update turned on, so the NFS exports were safe.

Any idea how I could have done this without deleting the CIFS shares? The new aggregate and volumes themselves were fine, so we just had to recreate the shares, but we did have some downtime while we did that. Thoughts?
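For anyone following along, here is a rough sketch of the general shape of the operation from the 7-mode CLI. This is not necessarily the exact command sequence run here; the aggregate and volume names (aggr_old, aggr_new, vol_data_old) are hypothetical, and the plex number will vary:

    aggr mirror aggr_old -d <LUNs from the new XIV>   # add a second plex backed by the new XIV
    aggr split aggr_old/plex1 aggr_new                # split the synced plex off as a new aggregate

    vol offline vol_data_old                          # a week later: retire the old side
    vol destroy vol_data_old                          #   <-- this is the step that took the CIFS shares with it
    aggr offline aggr_old
    aggr destroy aggr_old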
3 REPLIES
How did you destroy the volumes (CLI vs. HTTP interface)?
Normally there should not have been any shares still pointing at the "old" aggregate. I had the impression that you can't destroy a volume with a CIFS share attached to it without forcing it (-f), but I may be confusing that with vfiler delete.
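If it helps, this is the kind of check I would do from the CLI before a destroy (the share and volume names below are made up). `cifs shares` prints each share with its mount point, so you can see whether any share still resolves to a path on the volume you are about to remove:

    cifs shares                # Name / Mount Point / Description; the mount point is a /vol/<volname>/... path
    vol status vol_data_old    # "Containing aggregate" shows where that volume actually lives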
I did the destroy from the HTTP interface. The thing I find most interesting is that the syslog showed no mention of the CIFS shares being removed. It showed the volumes being destroyed, plus the aggregate being destroyed as well, but not a single word about the CIFS shares. They were just gone.
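If I remember right, 7-mode keeps the persistent share definitions in /etc/cifsconfig_share.cfg (they get replayed at boot), so one way to audit this kind of thing is to read that file and the on-box syslog before and after a destructive change:

    rdfile /etc/cifsconfig_share.cfg   # the persistent "cifs shares -add ..." definitions
    rdfile /etc/messages               # on-box syslog, for the time window of the vol destroy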
I'm guessing the culprit is the HTTP interface; it has probably made some assumptions about the volumes that it shouldn't have. If you can't find an existing bug/fix for it, you might want to open a support case with all of the details.
Otherwise, I guess I would get more familiar with the CLI, hehe.
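And if it ever happens again, recreating a lost share from the CLI is quick; something along these lines, where the share name, path, and group are hypothetical:

    cifs shares -add eng_data /vol/vol_data/eng -comment "Engineering share"
    cifs access eng_data "DOMAIN\engineering" Full Control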
