We have about 70 TB of files that we want to transfer from our old HPE 3PAR storage to an AFF. Is there a way to transfer these files? Also, can the transfer run while users have these files open? Is there any impact during the transfer, or is this type of transfer not supported at all?
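For a file-level migration like this, one commonly used option is NetApp's XCP tool, which copies a baseline while users keep working and then catches up with incremental syncs before cutover. A hedged sketch, assuming an NFS-style migration (hostnames and export paths below are placeholders, not real systems):

```shell
# Hypothetical XCP migration sketch; hosts and export paths are placeholders.
# Baseline copy from the old 3PAR-backed export to the new AFF export:
xcp copy -newid migr1 source-host:/export/data aff-host:/export/data

# Later, incremental passes to pick up files changed while users worked:
xcp sync -id migr1
```

Files that are open for writing during a pass may be copied in an inconsistent state, which is why the usual pattern is repeated incremental syncs followed by one final sync in a short cutover window with clients disconnected. For SMB/CIFS data, XCP also has an SMB mode with similar copy/sync semantics.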
Hello,
I am running ONTAP 9.7 and I am trying to figure out whether there is a way to connect to a LIF that is in a different VLAN.
For background, our network is set up like this, with the firewall open from workstations to servers within each department:
10.10.20.0/24 = Department A workstations
10.10.25.0/24 = Department A Servers
10.10.30.0/24 = Department B workstations
10.10.35.0/24 = Department B Servers
My networking team is requesting that we mount our CIFS shares on our workstations through IPs in the server subnets, to keep non-workstation traffic off the workstation subnets.
Our setup on the NetApp side currently looks like this:
We have ports e0c and e0d aggregated into a0a on both nodes, and ports e0e and e0f aggregated into a0b on both nodes.
We then have VLANs configured on all four aggregate ports (a0a and a0b on each node), i.e. a0a-20, a0a-25, a0a-30, and a0a-35.
As far as I can tell, the VLANs are supposed to prevent traffic from crossing between them. So is there a way to get a workstation in VLAN 20 to mount a share in VLAN 25?
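For reference, the kind of layout the networking team is asking for would typically mean hosting a data LIF on the server-side VLAN port, so the share is reached via a server-subnet IP (routing/firewall between the subnets, which the post says is already open, handles the rest). A hedged sketch of what that could look like on ONTAP 9.7; the SVM name, node name, and address below are placeholders:

```shell
# Hypothetical example: create a CIFS data LIF on the Department A server
# VLAN port (a0a-25). SVM name, LIF name, node, and IP are placeholders.
network interface create -vserver svm_cifs -lif cifs_dept_a \
    -role data -data-protocol cifs \
    -home-node node-01 -home-port a0a-25 \
    -address 10.10.25.50 -netmask 255.255.255.0

# Verify the LIF came up on the intended VLAN port
network interface show -vserver svm_cifs -lif cifs_dept_a
```

The VLANs themselves don't need to pass traffic between each other for this to work; the workstation in VLAN 20 reaches the 10.10.25.x LIF through the routed/firewalled path, the same way it reaches the department's servers.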
Thank you for any guidance you have!
Hello,
Since this Monday, I've had two disks go into a failed state. One was automatically replaced by one of the spare disks, but the other was not. For information, my aggregate is made up of 7200 RPM disks. I actually had 3 spare disks:
- 1 x 7200 RPM (the one that replaced the failed disk)
- 2 x 15000 RPM (which do not trigger automatic replacement)
I assume the replacement doesn't start automatically because the disks have different specifications (I understand that and I'm fine with it). However, I'd like to force the disk replacement even though the spare disk is faster. Most of the disks have been running for as long as the ones that failed, and I'd like to minimize the chances of ending up with 3 failed disks (and then the drama...).
My failed aggregate:

Position Disk    Pool Type RPM  Size   Size   Status
-------- ------- ---- ---- ---- ------ ------ --------
shared   2.0.8   0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.0.10  0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.0.12  0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.0.14  0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.0.16  0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.1.23  0    FSAS 7200 3.57TB 3.64TB (normal)
shared   2.0.4   0    FSAS 7200 3.57TB 3.64TB (normal)
shared   FAILED  -    -    -    3.57TB -      (failed)
shared   2.0.22  0    FSAS 7200 3.57TB 3.64TB (normal)

My spare disks:

Disk   HA  Shelf Bay Chan Pool  Type Class       RPM   Size   Size   Owner
------ --- ----- --- ---- ----- ---- ----------- ----- ------ ------ ---------------------------
2.0.6  0a  0     6   B    Pool0 SAS  performance 15000 3.63TB 3.64TB SAME_OWNER_AS_THE_AGGREGATE
2.0.17 0a  0     17  B    Pool0 SAS  performance 15000 3.63TB 3.64TB SAME_OWNER_AS_THE_AGGREGATE

Thank you in advance!
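As context for the question above: ONTAP does have a `storage disk replace` command, but it copies a still-healthy disk onto a spare rather than rebuilding an already-failed position, so it is mainly useful for pre-emptively retiring aging disks onto the 15000 RPM spares. A hedged sketch using the disk names from the post (verify everything against your own system first; mixing RPM classes in an aggregate may still be refused by RAID policy):

```shell
# List available spares (the two 15000 RPM spares should appear here)
storage aggregate show-spare-disks

# Hypothetical sketch: pre-emptively copy a healthy-but-aging member disk
# onto one of the faster spares. "storage disk replace" copies the source
# disk's contents to the replacement, then the source becomes a spare.
storage disk replace -disk 2.0.8 -replacement 2.0.6 -action start

# Monitor the target disk while the copy runs
storage disk show -disk 2.0.6
```

Whether RAID will accept a 15000 RPM replacement into a 7200 RPM aggregate depends on the cluster's RAID mixing settings, so this is a sketch of the mechanism rather than a guaranteed fix for the failed position.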
We have an old box that needs to be migrated to a new box. The new box is running ONTAP 9.13; the old one is on 8.3.

REPGENGNA::security login> password -vserver REPGENGNA1_SVM -username XXXXXXX

Enter a new password:
Enter it again:

Error: command failed: Password cannot contain the username.

How can we temporarily disable this password check? We tried the following, but it does not work:

vserver cifs security modify -vserver REPGENGNA1_SVM -is-password-complexity-required false
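One likely reason the attempted command has no effect: `vserver cifs security modify -is-password-complexity-required` governs local SMB/CIFS user accounts, while the error above comes from a `security login` (management) account, whose rules live in the role's password policy. A hedged sketch for inspecting that policy (the role name below is an assumption; whether the "cannot contain the username" rule itself is tunable may depend on the ONTAP release):

```shell
# Inspect the password policy attached to the role the account uses
# (role name "vsadmin" is a placeholder; check with "security login show"):
security login role config show -vserver REPGENGNA1_SVM -role vsadmin

# Individual policy attributes (e.g. minimum length) can be adjusted with
# "security login role config modify" if the release exposes them:
security login role config modify -vserver REPGENGNA1_SVM -role vsadmin -passwd-minlength 3
```

If the username-containment check turns out not to be tunable on 9.13, the pragmatic workaround is simply setting a temporary password that does not embed the username and changing it after the migration.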
Hi,
All my clusters are sending their syslogs via their node_mgmt interfaces, except for one. In this one (a 2-node cluster), one node sends via node_mgmt and the other via cluster_mgmt. Because of how my company validates syslog sources, I need all my syslogs to come from the node_mgmt interfaces, but there seems to be no way to force this.
Why is this one cluster behaving like this? When I migrate the cluster_mgmt interface to the other node, that node stops sending through node_mgmt and cluster_mgmt takes over. I'm confused.
Cheers,
ConfusedParrotfish
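A plausible explanation for the behavior described above, offered as an assumption rather than a confirmed diagnosis: the source LIF for outbound syslog is chosen by the node's routing table, and the node currently hosting cluster_mgmt may prefer that LIF as the egress toward the syslog server (which would also explain why the behavior follows the cluster_mgmt LIF when it migrates). A sketch of what to compare between the two nodes; the fields shown are standard ONTAP CLI:

```shell
# Show the configured syslog forwarding destinations
cluster log-forwarding show

# Compare the routes each node would use to reach the syslog server
network route show

# See where node_mgmt and cluster_mgmt currently live and their addresses
network interface show -role node-mgmt,cluster-mgmt -fields address,curr-node,curr-port
```

If the routes or subnets differ between this cluster and the well-behaved ones (for example, cluster_mgmt sharing a subnet with the syslog server here but not elsewhere), that asymmetry would be the first thing to reconcile.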