Board Activity
Hello, I have a FAS2552 running ONTAP 9.8 with an SVM for NFS. Authentication: UNIX. The hosts have root access (superuser), which I tested (over SSH/ESX) by creating a folder and a file on the NFS datastore. I can copy a VM to the datastore, register the VM, and start it. However, I cannot do a storage migration from another NFS store (QNAP): "Unable to load configuration file *VMX 13 (Permission denied)". I cannot create a clone from another VM (same error), and I cannot create a new VM either, also with the same error. It looks like FPolicy, but this is what I see when I check it:

SCL01::> vserver fpolicy show
This table is currently empty.

Could there be something else blocking it?
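One place worth checking when existing VMs run fine but anything that writes new files fails is the export policy rather than FPolicy. A quick sketch, assuming the SVM is called svm_nfs, the datastore volume is vmware_ds, and 192.168.1.10 is an ESXi host address (all placeholder names, substitute your own):

SCL01::> volume show -vserver svm_nfs -volume vmware_ds -fields policy
SCL01::> vserver export-policy rule show -vserver svm_nfs -policyname default -instance
SCL01::> vserver export-policy check-access -vserver svm_nfs -volume vmware_ds -client-ip 192.168.1.10 -authentication-method sys -protocol nfs3 -access-type read-write

In the rule output, "RW Access Rule" and "Superuser Security Types" should both include sys for the ESXi host addresses; if superuser is none, root gets squashed to the anonymous user and creating new .vmx files can fail with Permission denied even though reads and registrations work.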
By Lennart_Koenraads, Contributor · Network and Storage Protocols · Friday · 85 Views
Hello all, long story short: the issue is that Logstash duplicates logs after application restarts because the PVCs are mounted with different minor device numbers, which causes Logstash to mistakenly treat the same log files as different files. Having said that, I'd like to know if it's normal for the minor device number of an NFS 4.0 volume to change when it's mounted multiple times. This appears to be a known issue when working with network file systems, as outlined in the Elastic documentation I found online: NFS can present different device numbers across mounts, which Logstash interprets as different file systems, leading to log duplication: https://www.elastic.co/docs/reference/logstash/plugins/plugins-inputs-file#_reading_from_remote_network_volumes If any of you folks have a second opinion on this, I'd love to hear it. Thank you very much, Joel.
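For what it's worth, the behavior is easy to observe from a shell (a sketch; filer:/export and /mnt/nfs are placeholder names):

# mount the export and record the device number of the filesystem
mount -t nfs -o vers=4.0 filer:/export /mnt/nfs
stat -c 'st_dev=%d' /mnt/nfs     # e.g. st_dev=43

# unmount, mount again, then compare
umount /mnt/nfs
mount -t nfs -o vers=4.0 filer:/export /mnt/nfs
stat -c 'st_dev=%d' /mnt/nfs     # may print a different value

NFS has no backing block device, so the Linux kernel assigns each mount an anonymous device number (major 0, with the minor allocated at mount time). That allocation is client-side behavior rather than anything the filer controls, which is why remounts, and pod restarts with CSI-managed PVCs, can yield a different minor number.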
By Joedpalma9, Contributor · Network and Storage Protocols · 2 weeks ago · 160 Views
Hi, I have used robocopy for a CIFS data migration from an old ONTAP system (9.1P2) to a new one (9.11.1P7). After the initial copy and an incremental pass, I see the volume on the new system is taking almost 3.5 TB of extra space. Also, the number of files in the source volume is higher than in the destination volume. There are no errors in robocopy and the summary showed everything copied. I have done random checks on folders on both sides and they look good. This is really strange!

SOURCE USAGE

source::> vol show-space cifs_data

Vserver : source_cifs
Volume  : cifs_data

Feature                          Used       Used%
-------------------------------- ---------- ------
User Data                        4.24TB     28%
Filesystem Metadata              5.68GB     0%
Inodes                           2.05GB     0%
Snapshot Reserve                 768GB      5%
Deduplication                    34.97GB    0%
Snapshot Spill                   350.4GB    2%
Performance Metadata             580.3MB    0%

Total Used                       5.37TB     36%
Total Physical Used              5.19TB     35%

source::> vol show -vserver source_cifs -volume cifs_data -fields files
vserver      volume     files
------------ ---------- --------
source_cifs  cifs_data  31876696

DESTINATION USAGE

dest::> vol show-space -volume cifs_data

Vserver : dest-cifs
Volume  : cifs_data

Feature                          Used       Used%
-------------------------------- ---------- ------
User Data                        7.63TB     51%
Filesystem Metadata              3.01GB     0%
Inodes                           1.50GB     0%
Snapshot Reserve                 768GB      5%
Performance Metadata             42.77GB    0%

Total Used                       8.42TB     56%
Total Physical Used              8.23TB     55%

dest::> vol show -vserver dest-cifs -volume cifs_data -fields files
vserver    volume     files
---------- ---------- --------
dest-cifs  cifs_data  21251126

ROBOCOPY command used

robocopy /e /mir /copyall /r:0 /w:0 /ETA /mt:32 /sec /secfix /dcopy:t \\source_cifs\cifs_data$\ \\dest_cifs\cifs_data$\

Is the storage showing wrong usage, or is there a bug or issue with this? Any help is appreciated. Thanks!
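Two things that might explain part of the gap (a sketch, not a diagnosis; the vserver and volume names below are taken from the post). First, robocopy writes fully rehydrated data, so if storage efficiency hasn't been enabled or hasn't processed the already-copied data on the destination, User Data will sit larger than on a deduplicated source; hard-linked files are also copied as independent full copies, which inflates space. Worth checking:

dest::> volume efficiency show -vserver dest-cifs -volume cifs_data -fields state,policy,progress
dest::> volume efficiency start -vserver dest-cifs -volume cifs_data -scan-old-data true

Second, a list-only robocopy pass logs what robocopy itself still considers different, without copying anything, which helps separate a real file-count mismatch from a space-accounting one:

robocopy \\source_cifs\cifs_data$\ \\dest_cifs\cifs_data$\ /e /l /ns /ndl /fp /njh /log:c:\robocopy_diff.log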
By kalki1, Contributor · Network and Storage Protocols · 2 weeks ago · 641 Views
Hi! So far I have configured S3 buckets for FabricPool or Veeam environments with self-signed certificates. The thing is that, given the multiple applications my customers are starting to use with S3 repositories, they see that NetApp offers this possibility, and I would like to test a configuration that provides this service but with external certificates signed by an external CA. I'm reading the documentation, and as I'm not a great expert in certificate matters, I don't quite understand how such a configuration would be done, what requirements I need, and how to implement it. I'm asking for help, please, to guide me through this process and confirm whether these steps are correct. A sketch of the sequence as I understand it follows below.

First, I have to request a certificate signing request with "security certificate generate-csr -common-name myS3.mydomain.com ...". I understand that here you indicate the name that the S3 URL will have and the production domain that signs it. Is there any special purpose for the certificate that must be indicated?

Then I have to take the CA root and intermediate certificates, along with the signed certificate for myS3.mydomain.com, and install them on the S3 SVM I created. The steps after that I know, and I have no problem with them.

Now, when the S3 object store server is created, the vserver object-store-server command takes -certificate-name. Is the common name of the generated certificate indicated here? Then the bucket is created, the user, and so on.

On the client authentication side, from the machine that wants to access the S3 service, is it necessary to install a certificate? I understand that validation is done by entering the URL and the user's access key and secret key.

I don't know if there is any limitation to implementing this on-premises solution, or any other issue I'm not taking into account. Thank you very much for your help.
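To make the question concrete, this is the command sequence as I understand it from the docs (a sketch only; svm_s3 and myS3_cert are placeholder names I made up, and I'm not certain the parameters are complete):

1. Generate the CSR; the common name should be the FQDN clients will use for the S3 endpoint:

cluster::> security certificate generate-csr -common-name myS3.mydomain.com -size 2048 -hash-function sha256

2. Once the CA returns the signed certificate, install it as a server certificate on the S3 SVM (you are prompted to paste the certificate and private key), then install the root/intermediate chain as server-ca:

cluster::> security certificate install -vserver svm_s3 -type server -cert-name myS3_cert
cluster::> security certificate install -vserver svm_s3 -type server-ca

3. Reference the installed certificate by its certificate name (not its common name) when creating the object store server:

cluster::> vserver object-store-server create -vserver svm_s3 -object-store-server myS3.mydomain.com -certificate-name myS3_cert -is-https-enabled true

As far as I can tell, nothing has to be installed on the client as long as it already trusts the issuing CA; the client then authenticates over HTTPS with the access key and secret key.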
By Kiko, Contributor · Network and Storage Protocols · S3 · 4 weeks ago · 484 Views
I have a query regarding the compatibility between nfsd and glibc. In our system, we've upgraded glibc to version 2.40 and are using nfs-utils 2.1.1 serving NFSv3. Previously, with glibc 2.23, everything was working fine and we weren't using libtirpc. However, the glibc upgrade removed the --enable-obsolete-rpc option (and with it glibc's built-in SunRPC code), so libtirpc was included and NFS was built against it. Now none of the NFS-related services (nfsd, rpc.statd, rpc.mountd, portmap) are running. When attempting to start nfsd, the following errors occur:

"unable to set any sockets for nfsd"
"writing fd to kernel failed: errno 89 (Destination address required)"
or "errno 111 (Connection refused)"

Console logs show:

"svc: failed to register nfsdv3 RPC service (errno 111)."

Can anyone provide guidance on how to debug or resolve this issue?
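errno 111 while registering an RPC service usually means nothing is listening on the portmapper port, and libtirpc talks to rpcbind; the legacy portmap daemon only implements the old protocol version and may not be sufficient. A few checks that may narrow it down (a sketch; paths and startup mechanics vary by distro):

# libtirpc resolves its transports from /etc/netconfig; it must exist
ls -l /etc/netconfig

# make sure rpcbind (not the old portmap) is running before any NFS daemon
pgrep -a rpcbind || rpcbind
rpcinfo -p localhost          # should list "portmapper" on port 111

# confirm the rebuilt nfs-utils binaries actually link against libtirpc
ldd "$(command -v rpc.mountd)" | grep -i tirpc

# then bring the daemons up in order and re-check the registrations
rpc.statd
rpc.mountd
rpc.nfsd 8
rpcinfo -p localhost          # mountd, status and nfs should now appear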
By Melsa, Contributor · Network and Storage Protocols · 2025-03-05 01:27 AM · 3,177 Views