
Validate that a path is valid for a cifs share creation


I'm writing a script to validate that our DR SVM is recoverable, in terms of having the same CIFS properties, shares, access, and user mappings. That said, CIFS shares can be created on any inode, not just a qtree, and a perfectly legal command to create a share will fail if the folder being shared has been deleted. I'd like to include a check to see that the path I've saved is a valid target for a CIFS share, but I would rather not have to check this by opening the share with NFS or CIFS. Is there a way to do this in the CLI or API?

 

I'm using CDOT 8.2.3.

Re: Validate that a path is valid for a cifs share creation

Upgrade to 8.3.1 and use SVM-DR. And in 8.4 it's going to work much like a 7-Mode vFiler.

 

I wouldn't waste your time writing PowerShell scripts when it's essentially all done for you.

 

What I've done in the past is use identity-preserve mode and then have a custom workflow to add the right LIF and remove the other one.

Re: Validate that a path is valid for a cifs share creation

It'll take a while before that's ready, and in the meantime, I have DR tests to pass. 

Re: Validate that a path is valid for a cifs share creation

Hello basilberntsen,

 

You can have your script check whether the directory exists using the "ls" command from the node shell.

 

Below is an example of what you can run.

 

mohammaj-cluster::*> cifs share show -vserver vs2 -share-name odx -fields path
vserver share-name path
------- ---------- ----
vs2     odx        /odx

mohammaj-cluster::*> vol show -vserver vs2 -volume odx -fields node
vserver volume node
------- ------ -------------------
vs2     odx    mohammaj-cluster-01

 

mohammaj-cluster::*> node run -node mohammaj-cluster-01 "priv set diag; ls /vol/odx"
.
..
dir1
dir11

mohammaj-cluster::*> node run -node mohammaj-cluster-01 "priv set diag; ls /vol/odx/dir1"    <--- you can add something like this to your script.
.
..
odx_2
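A validation script could chain those lookups together: read the share's path, find the node hosting the volume, then run the node-shell "ls". Below is a rough Python sketch of the parsing side. The field-output layout is taken from the examples above; the mapping from a share path to a "/vol/..." path is a simplification (it depends on your junction layout), and the helper names are made up for illustration.

```python
def parse_field_output(output, field):
    """Parse 'show ... -fields X' tabular output (header, '----' separator,
    data rows) and return the requested field from the first data row."""
    lines = [l for l in output.strip().splitlines() if l.strip()]
    header = lines[0].split()
    idx = header.index(field)
    for line in lines[1:]:
        # Skip the dashed separator line; the first data row follows it.
        if set(line.replace(" ", "")) == {"-"}:
            continue
        return line.split()[idx]
    return None

def build_ls_command(node, volume, relative_path):
    """Build the node-shell command that lists a directory inside a volume.
    Simplified: assumes the share path maps directly under /vol/<volume>."""
    target = f"/vol/{volume}{relative_path}".rstrip("/")
    return f'node run -node {node} "priv set diag; ls {target}"'

# Sample outputs copied from the thread above.
share_output = """vserver share-name path
------- ---------- ----
vs2     odx        /odx"""

vol_output = """vserver volume node
------- ------ -------------------
vs2     odx    mohammaj-cluster-01"""

path = parse_field_output(share_output, "path")   # '/odx'
node = parse_field_output(vol_output, "node")     # 'mohammaj-cluster-01'
cmd = build_ls_command(node, "odx", "/dir1")
```

You'd then run `cmd` over SSH (or the API) and treat an empty or error result as "share path no longer exists".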

 

 

Usually the path for your volume will be "/vol/volumename". Note that if you have volumes with the same name living on the same node but belonging to different vservers, one of them may get a different path, for example "/vol/volumename(1)".

 

mohammaj-cluster::*> node run -node mohammaj-cluster-01 "priv set diag; vol status"
Volume   State  Status  Options
vol0     online
vs1_root online
odx      online          <------- odx volume on vs1
odx(1)   online          <------- odx volume on vserver1
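If your script builds "/vol/volumename" paths automatically, it's worth detecting that "(1)" suffix first. A small sketch, assuming the suffix convention shown in the output above (function name is made up):

```python
import re

def duplicate_volume_names(vol_status_output):
    """Scan node-shell 'vol status' output for names like 'odx(1)', which
    indicate a second same-named volume on that node (a different vserver).
    Returns the base names affected, so the script knows not to assume
    the path is simply /vol/<name>."""
    dupes = set()
    for line in vol_status_output.splitlines():
        m = re.match(r"\s*(\S+?)\((\d+)\)\s+online", line)
        if m:
            dupes.add(m.group(1))
    return sorted(dupes)

# Sample output copied from above.
sample = """Volume   State  Status  Options
vol0     online
vs1_root online
odx      online
odx(1)   online"""
```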