The Active IQ Unified Manager documentation says the following about the local account used for discovery:

------------------------------------
This account must have the admin role with Application access set to ontapi, ssh, and http.
------------------------------------

My question: is this really accurate? I don't see why it requires the admin role, since AIUM is just reading data to populate charts and tables. Does anyone have insight here? We are trying to reduce service accounts to the least privileges needed, and this looks like a good candidate for reduction, but the documentation says otherwise.
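For context, the kind of reduced-privilege account I have in mind would use the built-in readonly role instead of admin; whether discovery actually succeeds with it is exactly my question. The account name um_monitor is a placeholder, not an existing account:

cluster::> security login create -user-or-group-name um_monitor -application ontapi -authentication-method password -role readonly
cluster::> security login create -user-or-group-name um_monitor -application ssh -authentication-method password -role readonly
cluster::> security login create -user-or-group-name um_monitor -application http -authentication-method password -role readonly

If discovery fails with an account like this, the error should at least show which operations need more than read access.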
I need to clarify whether ONTAP can deduplicate and compress data blocks locked by a Snapshot copy. Does the command below deduplicate and compress data blocks locked by a Snapshot copy?

vol efficiency start -volume vol1 -vserver vs1 -scan-all -shared-blocks -snapshot-blocks

If not, does that mean Snapshot-locked blocks cannot be deduplicated or compressed?

Regards,
Chun Chiang
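To check the result afterwards, I would look at the scanner state and last-operation details, e.g. (using the same vs1/vol1 names; on my understanding the -shared-blocks and -snapshot-blocks parameters sit at the advanced privilege level, hence the prompt below, though that is worth confirming on your release):

cluster::> set -privilege advanced
cluster::*> volume efficiency show -vserver vs1 -volume vol1 -instance

This only shows what the scanner reported; it does not by itself answer whether Snapshot-locked blocks were processed.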
Just replaced a drive, but one of our aggregates is still showing failed disks. How can we get the status back to normal? We have plenty of spares.
RAID Group /aggr2_sas_clp_lcl_fas8020b/plex0/rg1 (double degraded, block checksums, raid_dp)
Usable Physical
Position Disk Pool Type RPM Size Size Status
-------- --------------------------- ---- ----- ------ -------- -------- ----------
dparity 3.33.7 0 SAS 15000 546.9GB 547.7GB (normal)
parity 3.32.8 0 SAS 15000 546.9GB 547.1GB (normal)
data 3.33.8 0 SAS 15000 546.9GB 547.7GB (normal)
data 3.32.9 0 SAS 15000 546.9GB 547.1GB (normal)
data 3.33.9 0 SAS 15000 546.9GB 547.7GB (normal)
data FAILED - - - 546.9GB - (failed)
data 3.33.10 0 SAS 15000 546.9GB 547.7GB (normal)
data 3.32.11 0 SAS 15000 546.9GB 547.1GB (normal)
data 3.33.11 0 SAS 15000 546.9GB 547.7GB (normal)
data FAILED - - - 546.9GB - (failed)
data 3.33.12 0 SAS 15000 546.9GB 547.7GB (normal)
data 3.32.13 0 SAS 15000 546.9GB 547.1GB (normal)
data 3.33.13 0 SAS 15000 546.9GB 547.7GB (normal)
data 3.32.14 0 SAS 15000 546.9GB 547.1GB (normal)
data 3.33.14 0 SAS 15000 546.9GB 547.7GB (normal)
Pool0
Spare Pool
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ------ ----------- ------ -------------- -------- -------- --------
2.22.17 SAS performance 10000 block 836.9GB 838.4GB zeroed
2.22.19 SAS performance 10000 block 836.9GB 838.4GB zeroed
2.23.9 SAS performance 10000 block 836.9GB 838.4GB zeroed
3.30.22 SAS performance 15000 block 546.9GB 547.1GB zeroed
3.31.2 SAS performance 15000 block 546.9GB 547.7GB zeroed
3.32.12 SAS performance 15000 block 546.9GB 547.7GB zeroed
Original Owner: clp-lcl-fas8020b
Pool0
Spare Pool
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ------ ----------- ------ -------------- -------- -------- --------
2.20.18 SAS performance 10000 block 836.9GB 838.4GB zeroed
2.20.23 SAS performance 10000 block 836.9GB 838.4GB zeroed
2.21.17 SAS performance 10000 block 836.9GB 838.4GB zeroed
3.32.5 SAS performance 15000 block 546.9GB 547.1GB zeroed
3.32.7 SAS performance 15000 block 546.9GB 547.1GB zeroed
3.32.10 SAS performance 15000 block 546.9GB 547.7GB zeroed
3.33.23 SAS performance 15000 block 546.9GB 547.7GB zeroed
1.10.9 SSD solid-state - block 186.1GB 186.3GB zeroed
14 entries were displayed.
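If it helps, these are the read-only checks I know of for narrowing this down: whether the failed disks are still visible to the system, what RAID status the aggregate reports, and whether matching spares are owned by the node that owns the aggregate. A sketch using the aggregate and node names from the output above:

cluster::> storage disk show -container-type broken
cluster::> storage aggregate show -aggregate aggr2_sas_clp_lcl_fas8020b -fields raidstatus, state
cluster::> storage aggregate show-spare-disks -original-owner clp-lcl-fas8020b

My understanding is that if the matching spares are owned by the other node, reconstruction generally will not start until a suitable spare is available to the owning node.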
Hello everyone,

I am trying to collect SnapMirror fields using the endpoint api/private/cli/snapmirror. According to the documentation, I should receive the following fields (among others):

destination_path => destination_location
relationship_id => relationship_id
cg_item_mappings => cg_item_mappings
destination_volume => destination_volume
destination_volume_node => destination_node
destination_vserver => destination_vserver
healthy => healthy
last_transfer_type => last_transfer_type
policy_type => policy_type
relationship_group_type => group_type
relationship_type => relationship_type
schedule => schedule
source_path => source_location
source_volume => source_volume
source_vserver => source_vserver
status => relationship_status
unhealthy_reason => unhealthy_reason
break_failed_count => break_failed_count
break_successful_count => break_successful_count
lag_time(duration) => lag_time
last_transfer_duration(duration) => last_transfer_duration
last_transfer_end_timestamp(timestamp) => last_transfer_end_timestamp
last_transfer_size => last_transfer_size
newest_snapshot_timestamp(timestamp) => newest_snapshot_timestamp
resync_failed_count => resync_failed_count
resync_successful_count => resync_successful_count
total_transfer_bytes => total_transfer_bytes
total_transfer_time_secs => total_transfer_time_secs
update_failed_count => update_failed_count
update_successful_count => update_successful_count

However, the response I am receiving contains only the following fields:

"records": [
{
"source_path": "#:#####",
"source_vserver": "#",
"source_volume": "#",
"destination_path": "#:#",
"destination_vserver": "#",
"destination_volume": "#"
}
]

Does anyone have any insight into what might be going wrong? BTW, I did not have any issues with the aggregation and nic_common endpoints/fields before. Thanks in advance for your help!

PS: The poller in question uses the REST collector, and the cluster is running ONTAP 9.12.1.

Best,
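For what it's worth, I can also test the passthrough endpoint directly with curl to rule out the cluster side, something like this (the hostname and field list are placeholders, not taken from my poller config):

# hostname and fields below are placeholders for illustration
curl -sk -u admin "https://cluster-mgmt.example.com/api/private/cli/snapmirror?fields=healthy,status,lag_time,last_transfer_size"

If the fields come back here but not from Harvest, I assume the gap would be in the collector template or exporter rather than in ONTAP.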
Hello, I am a new administrator on a legacy NetApp system running ONTAP 9.8. I have some experience using Netapps. Q1: Is is able to create shares inside NFS Volumes? I can do this with CIFS volumes but cannot figure a way to do this NFS, Linux access only (not mixed) Q2: I need to migrate shares from an old LDAP server to the corporate AD server. The NFS has an LDAP configuration setting. Will the LDAP override the AD setting if I am authenticating to the AD ? I need to migrate PCs one at a time. My problem is I am mounting the share on a Ubuntu 22 VM that authenticates to the AD as root, but it is not recognizing the AD groups as write permissions. If I create a single user folder and chown the folders, I am able to write to the folder. Maybe 2 different unrelated issues or I am not using NFS correctly Thank you for replies.
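For Q1, here is the kind of thing I am trying to achieve on the NFS side, as far as I understand it: there is no NFS "share" object as such, clients mount the volume's junction path (or a qtree below it), and access is controlled by export policies. The SVM name vs1, volume vol_nfs, and subnet below are illustrative assumptions, not taken from this system:

cluster::> volume qtree create -vserver vs1 -volume vol_nfs -qtree projects -security-style unix
cluster::> vserver export-policy create -vserver vs1 -policyname projects_pol
cluster::> vserver export-policy rule create -vserver vs1 -policyname projects_pol -clientmatch 10.0.0.0/24 -rorule sys -rwrule sys -superuser sys
cluster::> volume qtree modify -vserver vs1 -volume vol_nfs -qtree projects -export-policy projects_pol

A Linux client would then mount it with something like: mount -t nfs <data-lif>:/vol_nfs/projects /mnt/projects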