Talk with fellow users about the multiple protocols supported by NetApp unified storage including SAN, NAS, CIFS/SMB, NFS, iSCSI, S3 Object, Fibre-Channel, NVMe, and FPolicy.
Hello all,

Long story short: Logstash is duplicating logs after application restarts because the PVCs are being remounted with different minor device numbers, which causes Logstash to treat the same log files as new ones.

With that in mind, I'd like to know whether it's normal for the minor device number of an NFS 4.0 volume to change when it is mounted multiple times. This appears to be a known issue with network file systems, as outlined in the Elastic documentation I found online: NFS can present different minor device numbers across mounts, which Logstash interprets as different file systems, leading to log duplication. https://www.elastic.co/docs/reference/logstash/plugins/plugins-inputs-file#_reading_from_remote_network_volumes

If any of you folks have a second opinion on this, I'd love to hear it. Thank you very much, Joel.
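A minimal way to check whether the device number really does change across remounts (a sketch only; server:/export, /mnt/nfs-logs and app.log are placeholder names, not the real paths):

# Mount the export and record the filesystem device number Logstash would see
mount -t nfs4 server:/export /mnt/nfs-logs
mountpoint -d /mnt/nfs-logs               # prints major:minor of the mounted filesystem
stat -c 'dev=%d inode=%i' /mnt/nfs-logs/app.log

# Unmount, remount and compare; a different minor number here reproduces the behaviour
umount /mnt/nfs-logs
mount -t nfs4 server:/export /mnt/nfs-logs
mountpoint -d /mnt/nfs-logs
stat -c 'dev=%d inode=%i' /mnt/nfs-logs/app.log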
Hi,

I have used robocopy for a CIFS data migration from an old ONTAP system (9.1P2) to a new one (9.11.1P7). After the initial copy and an incremental pass, the volume on the new system is taking almost 3.5 TB of extra space. The number of files in the source volume is also higher than in the destination volume. There are no errors in robocopy and the summary shows everything copied. I have done random checks on folders on both sides and they look good. This is really strange!

----- SOURCE USAGE -----

source::> vol show-space cifs_data

  Vserver: source_cifs
  Volume:  cifs_data

  Feature                            Used       Used%
  --------------------------------   ---------  ------
  User Data                          4.24TB     28%
  Filesystem Metadata                5.68GB     0%
  Inodes                             2.05GB     0%
  Snapshot Reserve                   768GB      5%
  Deduplication                      34.97GB    0%
  Snapshot Spill                     350.4GB    2%
  Performance Metadata               580.3MB    0%
  Total Used                         5.37TB     36%
  Total Physical Used                5.19TB     35%

source::> vol show -vserver source_cifs -volume cifs_data -fields files
  vserver      volume     files
  ------------ ---------- --------
  source_cifs  cifs_data  31876696

----- DESTINATION USAGE -----

dest::> vol show-space -volume cifs_data

  Vserver: dest-cifs
  Volume:  cifs_data

  Feature                            Used       Used%
  --------------------------------   ---------  ------
  User Data                          7.63TB     51%
  Filesystem Metadata                3.01GB     0%
  Inodes                             1.50GB     0%
  Snapshot Reserve                   768GB      5%
  Performance Metadata               42.77GB    0%
  Total Used                         8.42TB     56%
  Total Physical Used                8.23TB     55%

dest::> vol show -vserver dest-cifs -volume cifs_data -fields files
  vserver    volume     files
  ---------- ---------- --------
  dest-cifs  cifs_data  21251126

----- ROBOCOPY COMMAND USED -----

robocopy /e /mir /copyall /r:0 /w:0 /ETA /mt:32 /sec /secfix /dcopy:t \\source_cifs\cifs_data$\ \\dest_cifs\cifs_data$\

Is the storage showing wrong usage, or is there a bug or issue here? Any help is appreciated. Thanks!
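One way to narrow this down is a list-only robocopy pass, which reports what it still considers different without copying or deleting anything (a sketch; the log path is a placeholder):

robocopy \\source_cifs\cifs_data$\ \\dest_cifs\cifs_data$\ /E /L /FP /NP /NJH /LOG:C:\temp\cifs_delta.log

Since the source output above shows a Deduplication row that the destination output does not, it may also be worth comparing storage efficiency settings on both volumes before trusting the raw used-space numbers:

source::> volume efficiency show -vserver source_cifs -volume cifs_data
dest::> volume efficiency show -vserver dest-cifs -volume cifs_data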
Hi! So far I have configured S3 buckets for FabricPool or Veeam environments with self-signed certificates. Given the number of applications my customers are starting to use with S3 repositories, and since they see that NetApp offers this capability, I would like to test a configuration that provides this service using external certificates signed by an external CA. I'm reading the documentation, and as I'm not a great expert in certificate matters, I don't quite understand how such a configuration is done, what requirements I need, and how to implement it. I am asking for help to guide me through this process and confirm whether the steps are correct.

First, I have to generate a certificate signing request with "security certificate generate-csr -common-name myS3.mydomain.com ...". I understand that this is where you indicate the name the S3 URL will have and the domain that signs it. Is there any special purpose of the certificate that needs to be indicated?

Then I have to get back the CA root and intermediate certificates along with the signed certificate for myS3.mydomain.com, and install them on the S3 SVM that was created. The following steps I already know and have no problem with.

Now, when the S3 object store server is created, the "vserver object-store-server" command has to be given -certificate-name. Is the common name of the generated certificate what goes here? Then the bucket is created, the user, and so on.

On the client side, from the machine that will access the S3 service, is it necessary to install a certificate? I understand that validation is done by entering the URL and the user's access key and secret key.

I don't know if there is any limitation to implementing this on-premises solution, or any other issue I'm not taking into account. Thank you very much for your help.
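For reference, a minimal sketch of the sequence being described, assuming a placeholder SVM called svm_s3; names, sizes and ports are illustrative and should be adapted to the environment:

cluster::> security certificate generate-csr -common-name myS3.mydomain.com -size 2048 -hash-function SHA256

(send the CSR to the external CA, then install the signed certificate with its private key, plus the CA root and intermediate certificates, into the S3 SVM)

cluster::> security certificate install -vserver svm_s3 -type server
cluster::> security certificate install -vserver svm_s3 -type server-ca

(the installed certificate gets a unique name in ONTAP; list it with "security certificate show -vserver svm_s3", because that name, not necessarily the common name, is what -certificate-name expects)

cluster::> vserver object-store-server create -vserver svm_s3 -object-store-server myS3.mydomain.com -certificate-name <installed-cert-name> -secure-listener-port 443 -is-http-enabled false -is-https-enabled true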
I have a query regarding compatibility between nfsd and glibc. On our system we have upgraded glibc to version 2.40 and are running nfs-utils 2.1.1 with NFSv3. Previously, with glibc 2.23, everything worked fine and we were not using libtirpc. After the glibc upgrade, however, libtirpc was included and enabled in nfs-utils as well. Now none of the NFS-related services (nfsd, rpc.statd, rpc.mountd, portmap) are running.

When attempting to start nfsd, the following errors occur:

"unable to set any sockets for nfsd"
"writing fd to kernel failed: errno 89 (Destination address required)" or "errno 111 (Connection refused)"

The console logs show:

"svc: failed to register nfsdv3 RPC service (errno 111)."

After the glibc upgrade, the --enable-obsolete-rpc option has been removed from glibc. Can anyone provide guidance on how to debug or resolve this issue?
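Not an answer, but a debugging sketch that may help narrow it down: with libtirpc, the daemons register with rpcbind rather than the legacy portmapper, and errno 111 during registration usually means nothing is listening on the registration side. Service names and paths vary by distro and init system, so treat this as illustrative only:

# Is anything answering RPC registration requests locally?
pidof rpcbind || echo "rpcbind is not running"
rpcinfo -p localhost            # should at least list the portmapper entries
ls -l /run/rpcbind.sock         # local socket many libtirpc builds register through

# Start rpcbind first, then the NFS daemons, and re-check the registrations
rpcbind
rpc.statd
rpc.mountd
rpc.nfsd 8
rpcinfo -p localhost            # mountd, status and nfs v3 should now appear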
I inherited administration of an offline domain environment with a NetApp (ONTAP 9.11) and no dedicated NTP appliance. Because the environment is offline, we have significant issues with time skew from the hardware. While we look for a better solution, at this stage we manage it by manually fixing the time roughly every month.

It appears that this also led the previous administrators to give up on using CIFS shares. The shares exist and there is evidence they tried to make them work, but ultimately the shares are empty and other arrangements through Windows VMs are currently employed, all of which are on old OSes and need to be updated. I've been given the task of trying to fix things, and I really don't want to build new Windows servers and kick the can further down the road.

The immediate issue is the time skew/sync. I want to use the domain controller as the NTP server; at least this way the time skew *should* be the same on all devices. The NetApp was already configured to use the DC as the NTP server, but it wasn't syncing. After a bit of a fight, using information from "ONTAP 9 - time server rejected as unreliable - NetApp Knowledge Base", I did manage to get the NetApp to sync to the domain controller.

One month later, I go to fix the time skew:

Domain controller: time skew as expected
NetApp: time is still correct

I can only surmise that this means the NetApp is not continuously syncing with the domain controller. It got the time once and then just did its own thing. How can I force the NetApp to sync more often?
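For what it's worth, a short sketch of the ONTAP commands involved in checking and re-pointing the NTP configuration; dc01.mydomain.local is a placeholder for the domain controller's name or IP:

cluster::> cluster time-service ntp server show
cluster::> cluster time-service ntp status show          (shows whether each configured server is reachable and currently selected)
cluster::> cluster time-service ntp server delete -server <old entry, if it needs re-adding>
cluster::> cluster time-service ntp server create -server dc01.mydomain.local
cluster::> cluster date show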