
Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?


All of a sudden, a new snapshot schedule appeared under the Snapshot policy associated with one of our volumes. Previously the volume had only one schedule, with a snapshot taken once per day. Now a second schedule has been created that runs once every 8 hours, in addition to the previous one.

 

We have a few people with admin access. My question is: can a customer set up a snapshot policy?

Which log file should I check to find out the details about this change?

Version: 8.2.3

Thank you.

Re: Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?

Not sure what you mean by "customer", but if you mean an end user, then no, not unless they have been granted rights.

 

Check the audit log file:

 

/etc/log/auditlog
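On 7-Mode, that log records each console/ssh command together with the user who ran it, so filtering for snapshot schedule commands narrows things down quickly. A minimal sketch, assuming the log has been copied off the filer (the temp file and the sample entries below are stand-ins so the sketch is runnable; the entry format is illustrative, not an exact 7-Mode transcript):

```shell
# Sketch: filter a 7-Mode audit log for snapshot schedule changes.
# The sample file stands in for /etc/log/auditlog.
AUDITLOG=$(mktemp)
cat > "$AUDITLOG" <<'EOF'
Mon Mar 16 10:00:00 GMT [ssh_0:info]: root: snap sched vol1 0 1 6@8,16
Mon Mar 16 10:05:00 GMT [ssh_0:info]: root: vol status vol1
EOF

# Each entry records who ran what, so matching on "snap sched"
# surfaces schedule changes together with the user that made them.
grep -i "snap sched" "$AUDITLOG"

rm -f "$AUDITLOG"
```

The same filter works directly on the filer with `rdfile` output redirected to an admin host.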

Re: Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?

Could backup software somehow create schedules? I heard that we are using snapshot-based backup with TSM. Could this integrated software set up a snapshot schedule?

Re: Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?

The /etc/log/auditlog file mentioned above applies to 7-Mode systems.

On cDOT, auditing has changed completely.

 

See here for how to check who was doing what on your system:

https://library.netapp.com/ecmdocs/ECMP1636068/html/GUID-279ACA3C-00D2-490C-BEE9-C05625A550B1.html

 

 

---------

'Kudos' is a good way to say "Thank you" :-)

 

Re: Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?

Well, it is weird. I could not find out how the snapshot all of a sudden started being scheduled.

 

I checked auditlog.* under /mroot/etc/log and did not see any related actions; the same goes for command-history.log.*.
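Since both logs rotate, it helps to sweep every rotated copy in one pass rather than checking only the current file. A hedged sketch of such a sweep (the temp directory and sample entries are stand-ins so it is runnable; on a real system you would point it at /mroot/etc/log instead):

```shell
# Sketch: search all rotated audit and command-history logs for
# snapshot-related commands. $LOGDIR stands in for /mroot/etc/log.
LOGDIR=$(mktemp -d)
printf '%s\n' 'admin :: snapshot policy add-schedule -policy default -schedule 8hour' \
    > "$LOGDIR/auditlog.1"
printf '%s\n' 'admin :: volume show' > "$LOGDIR/command-history.log.1"

# -i: case-insensitive; -H: print the file name so you can see
# which rotated log held the matching entry.
grep -iH "snapshot" "$LOGDIR"/auditlog.* "$LOGDIR"/command-history.log.*

rm -rf "$LOGDIR"
```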

 

Can anybody please shed some light here?

Re: Other than admin or who has the privilege on the filer, who else can set up a new SS schedule?

I booked a lab just for you...
cluster1::*> debug log files modify -incl-files mgwd
cluster1::*> debug log show -timestamp >30m
Tue Mar 17 06:33:27 2015 cluster1-01 [kern_mgwd:info:901] ssh :: 192.168.0.5 :: admin :: volume create -vserver svm-exchange -volume test_vol -size 3g -aggregate aggr1_01 :: Pending
cluster1::*> net traceroute -node cluster1-01 -destination 192.168.0.5
  (network traceroute)
traceroute to 192.168.0.5 (192.168.0.5), 64 hops max, 44 byte packets
 1  jumphost (192.168.0.5)  0.350 ms *  0.414 ms
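The mgwd entry in the lab output above is delimited by " :: ", which makes it straightforward to pull out who ran which command from where. A small sketch that splits the sample entry (copied from the lab output) into its fields:

```shell
# Sketch: split an mgwd debug log entry on its " :: " delimiters.
# ENTRY is the sample line from the lab session above.
ENTRY='Tue Mar 17 06:33:27 2015 cluster1-01 [kern_mgwd:info:901] ssh :: 192.168.0.5 :: admin :: volume create -vserver svm-exchange -volume test_vol -size 3g -aggregate aggr1_01 :: Pending'

# Fields after the first delimiter: source IP, user, command, status.
echo "$ENTRY" | awk -F ' :: ' '{
    print "from:    " $2
    print "user:    " $3
    print "command: " $4
    print "status:  " $5
}'
```

Combined with the traceroute, this told us the schedule change came from the jumphost as the admin user.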