Hi, has anyone tried, or does anyone know whether it's possible, to FabricPool Mirror between a remote cluster tier (StorageGRID) and a local cluster tier (ONTAP S3)? The StorageGRID bucket is connected via the intercluster LIFs, and the local cluster tier would traverse the backend cluster LIFs. Is this supported? I'm just thinking through options for migrating FabricPool object store data from StorageGRID to a FAS.
Hello, I applied an fpolicy named sm_fpolicy_136 on the storage VM SVMNAS, excluding 2 volumes for testing, and that works. But now I want to apply the same fpolicy to the two previously excluded volumes, and I can't find the command... 😞 I think it starts with 'vserver fpolicy', but what comes after? Thanks a lot
In ONTAP ARP, the maximum number of Snapshot copies is 8. What happens if arw.surge.snap.interval.days is set to 100 days and there are multiple attacks, say 100 attacks on the same volume? How many ARP snapshots will be retained? If the answer is still 8, then arw.surge.snap.interval.days seems meaningless. https://docs.netapp.com/us-en/ontap/anti-ransomware/modify-automatic-snapshot-options-task.html
With over half the world's data stored in ONTAP, it's no wonder so many customers say, "I need a catalog for my data." However, different types of catalogs serve different ends. Data engineers, scientists, and analysts need to identify schemas, tables, columns, and data types to accelerate exploratory data analysis. There are two types of catalogs: a business catalog and a technical catalog. For this example we used Open Metadata, an open-source business catalog built with the data mesh in mind.

Some commonalities apply to all configurations, whichever cataloging system you use: first enable S3 in ONTAP and create your buckets, then create an S3-compatible connection in your catalog, and begin exploring your data!

Some considerations:
- An S3 server is configured per SVM. You cannot have multiple S3 servers per SVM, nor is that necessary.
- ONTAP S3 serves two functions: tiering data and serving S3 applications. We are configuring for S3 applications, which is a highly valuable ONTAP service.
- There are two types of S3 buckets. NAS buckets are multi-protocol buckets applied to a NAS (NFS and/or SMB) volume; S3 buckets are S3-only, with no attachment to a NAS volume. S3 buckets automatically provision a FlexGroup and manage the capacity for you, which is really helpful.

The first step is to enable S3 in ONTAP by creating the S3 server. The easiest way to do this is via the command-line interface (CLI):

vserver object-store-server create -vserver <svm_name> -object-store-server <name_can_be_different_from_svm_name> -is-https-enabled {true|false}

You can also use ONTAP's System Manager graphical user interface (GUI). I am using Cloud Volumes ONTAP (CVO) in Microsoft Azure.
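As a sketch, the CLI path above can be carried end to end, through bucket creation, roughly as follows. The SVM, server, and bucket names are hypothetical placeholders; verify the options against your ONTAP version's command reference:

```shell
# Create the S3 object-store server on the SVM (names are hypothetical)
vserver object-store-server create -vserver svm_data -object-store-server s3server1 -is-https-enabled true

# Create an S3-only bucket; ONTAP provisions a FlexGroup behind it automatically
vserver object-store-server bucket create -vserver svm_data -bucket catalog-demo

# Confirm the server and bucket exist
vserver object-store-server show -vserver svm_data
vserver object-store-server bucket show -vserver svm_data
```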
In your CVO instance, click "Switch to advanced view" to enable System Manager, then go to Storage > Storage VMs > select your SVM > Settings. Enable and configure your S3 server, and connect to AD/LDAP or create local S3 accounts. Support for external directories requires ONTAP 9.14.1. If you already have SMB or multi-protocol volumes, you are likely already connected to AD, but you must configure LDAPS for ONTAP S3 with AD/LDAP to work:
- The SVM must have S3 enabled and a bucket created.
- AD/LDAP requires DNS to be configured in ONTAP.
- A root certification authority certificate for the LDAP server must be installed on the SVM.
- An LDAP client must be configured with TLS enabled on the SVM and associated with the SVM.

Test access using the AWS CLI. In your AWS config file:
- Access key: prepend "NTAPFASTBIND" to base64-encode(ldapuser:password).
- Secret key: 0123456789123456 (any random 16-digit key).
- Location: make sure to include the region in AWS format. IT WILL FAIL WITHOUT THIS.

Then run the following to list your test bucket:

aws s3 ls --endpoint-url https://<ip_address_of_data_lif>

Now configure your NAS buckets. You must be running at least ONTAP 9.12 to deploy NAS buckets; if support for external directories is required, you must be on 9.14.1. It's recommended to configure S3 NAS buckets AFTER you've deployed your NAS volume with NFS or SMB. In System Manager, go to Storage > Buckets > Add Bucket, select "More Options" to expand the configuration options, then "Browse" to the "folder", which is the NFS and/or SMB volume you will map to the S3 bucket. Assign permissions as needed and save. Run your AWS CLI test again to see your new S3 NAS bucket:

aws s3 ls --endpoint-url https://<ip_address_of_data_lif>

The second step is to define the connection in your catalog. We are using Open Metadata, but this pattern is common to all other catalogs. To define the connection, in Open Metadata go to Settings > Services > Databases (NOT Storages; yes, that's counterintuitive!)
> Add New Service, and choose the DataLake service. Again, this is counterintuitive, but trust me, it works! In the drop-down, choose S3 Config. We are configuring an S3-compatible data lakehouse, which is why this is defined as a database in this catalog. Configure your AWS access key, secret, and region, as well as your S3 endpoint, which is https://<ip_address_of_S3_data_lif>. The region is necessary, or it will fail! This is the region of our CVO instance. Click "Test connection".

Now start exploring your data, making sure to add the relevant ownership and descriptions that are searchable by your data customers. Go to Settings > Databases to see all the databases configured, or search for data assets in the search bar. You will see the supported files listed as tables. Click a file to see the schema, sample data, data lineage, data quality and profile, set the domain ownership, and a host of other important information:
- Tables listed in the bucket.
- Sample data. To get this without a catalog, a data engineer would have to connect to the database and run a series of SQL SELECT statements, or jq statements for semi-structured data, or head statements for unstructured data. Having this data at one's fingertips saves enormous amounts of exploratory time.
- Schema of the data. Understanding the schema is required for all data engineering, science, and analytics.
- Data profile. This helps a data engineer or scientist understand the volume of the data, as well as how much, and what kind of, wrangling work will be necessary.

By enabling S3 on NAS volumes in ONTAP, we have now unleashed our data in a useful format to all data consumers. ONTAP becomes a key data source in the data mesh, enabled for self-serve exploratory data analysis, and a vast resource to train models and serve the needs of analytics consumers.
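The access-key construction used in the AWS CLI test above ("NTAPFASTBIND" prepended to the base64 of user:password) can be sketched in a couple of shell lines. The credentials here are hypothetical placeholders; substitute your own LDAP user and password:

```shell
# Build the ONTAP LDAP fast-bind access key: "NTAPFASTBIND" + base64("user:password")
# "ldapuser:password" is a hypothetical placeholder credential pair
CREDS="ldapuser:password"
ACCESS_KEY="NTAPFASTBIND$(printf '%s' "$CREDS" | base64)"
echo "$ACCESS_KEY"
# → NTAPFASTBINDbGRhcHVzZXI6cGFzc3dvcmQ=
```

Note the use of printf rather than echo when piping to base64, so no trailing newline is encoded into the key.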
Hi All, a customer's new Windows 11 24H2 clients with AD 2022 fail to access ONTAP 9.2, and the event log shows the client using "NTLMv2" auth, but all existing Windows clients, including Windows 11 23H2, have no problem, and the event log shows those clients using "Kerberos".

We tested Windows 11 24H2 with AD 2019 against ONTAP simulators 9.2, 9.7, 9.9.1, 9.10.1, 9.11.1, 9.12.1, and 9.13.1, accessing the SVM by FQDN, as below:
- Windows 11 24H2 and SVM with default settings > access to ONTAP 9.2, 9.7, 9.9.1, 9.10.1, 9.11.1, and 9.12.1 failed, and the ONTAP event log shows the client using "NTLMv2" auth, the same as the customer's issue; access to 9.13.1 succeeded, and cifs session show reports the client using Kerberos.
- All ONTAP SVMs set "cifs security modify -is-aes-encryption-enabled true" (except 9.13.1, where AES encryption is enabled by default) > access to ONTAP 9.2, 9.7, 9.9.1, 9.10.1, 9.11.1, and 9.12.1 failed, but access to 9.13.1 succeeded.
- Windows 11 24H2 set "Set-SmbClientConfiguration -BlockNTLM $true" > access to ONTAP 9.2, 9.7, 9.9.1, and 9.10.1 failed, but access to 9.11.1, 9.12.1, and 9.13.1 succeeded.
- Windows 11 24H2 set "Set-SmbClientConfiguration -EnableInsecureGuestLogons $true" > access to ONTAP 9.2, 9.7, 9.9.1, and 9.10.1 failed, but access to 9.11.1, 9.12.1, and 9.13.1 succeeded.
- Windows 11 24H2 set "Set-SmbClientConfiguration -RequireSecuritySignature $false" > access to ONTAP 9.2, 9.7, 9.9.1, and 9.10.1 failed, but access to 9.11.1, 9.12.1, and 9.13.1 succeeded.

We cannot find any compatibility issue between Windows 11 24H2 and older ONTAP releases documented on Google or the NetApp KB site. Any ideas? Best regards