This error sounds more like a problem with the host the cluster is trying to connect to than a problem with the cluster itself. What command did you use to try to grab the new qual_devices.zip?
Hi PeterMekes, I just confirmed in Fusion that the partitioning you described is as expected from ADP R-D-D. Because of this, you should be able to adjust MIN_SPARES to 0; I didn't have access to an 8-drive system, so I was unable to test it myself. As you can see from the error message in my previous reply, setting it to 0 on systems with more than 16 drives isn't supported, but you are well under that threshold.
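If you want to try it yourself, this is roughly what I'd run from the clustershell (a sketch only; the node name is a placeholder and you should double-check the exact option syntax on your ONTAP version):
storage raid-options show -node <node_name>
storage raid-options modify -node <node_name> -name raid.min_spare_count -value 0
storage aggregate show-spare-disks
The last command is just to confirm what spares (or spare partitions) are left afterwards.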
I believe setting that value to 0 would eliminate the SPARES_LOW warning, but the real question here should be "should I have zero hot spares available?", to which my answer would be no. Now, if you have cold spares on site, that would make me less nervous, but I'd much rather see a hot spare be available. EDIT: Just tried this and got the following: option raid.min_spare_count: Cannot be set to 0 if there are more than 16 drives or any RAID4 groups configured.
I don't have access to an E-Series system to try this on at the moment, but I did find the documentation for doing this in the System Manager GUI here.
I've just had a look at this in a lab environment and can confirm what I stated earlier; have a look: I was unable to grab a screenshot with the two times perfectly synced, as the GUI constantly refreshes on its own in an attempt to provide near-realtime status.
Looking at your two examples, they appear to be ISO 8601 duration strings, and this is how I interpret them: P28DT3H59M55S → 28 days, 3 hours, 59 minutes, 55 seconds (I think). PT8H35M42S → 8 hours, 35 minutes, 42 seconds. Perhaps you could issue the API call at roughly the same time as the CLI command: snapmirror show -fields lag-time. Comparing the two may provide you with the necessary insight to decode it; feel free to post the output of both here and I'll have a look.
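If it helps, here's roughly how I'd capture both at the same moment, assuming the REST endpoint is /api/snapmirror/relationships and the field is called lag_time (both from memory, so verify against the API documentation on your cluster; the management IP is a placeholder):
ssh admin@<cluster_mgmt_ip> "snapmirror show -fields lag-time"
curl -k -u admin "https://<cluster_mgmt_ip>/api/snapmirror/relationships?fields=lag_time"
Running the two back to back should make it obvious whether the API value is simply the CLI lag time expressed as a duration string.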
Hi Roy, I think you may have underestimated what is required to administer a fibre channel SAN, even a simple one. I think your best course of action is to talk to your local NetApp or partner sales team, seeking guidance from the SE assigned to you and potentially purchasing some training credits. Do you know whom your company bought this array from? You could also contact support and have them guide you through the process over the phone. All the information you need is in the documentation I've already linked to; I'm sorry I couldn't be of more assistance via this message board.
According to the VSC docs: The 9.7.1 and later releases of virtual appliance for VSC, VASA Provider, and SRA support ONTAP 9.8 and vSphere 7.0U1. So do you still need it?
Hi Roy, The Interoperability Matrix Tool (IMT) isn't something you download; it's a website you use to ensure all of the components in your solution are supported together. Access it here; a valid support account is required, but I'm assuming you have one of those. You should probably read the Windows Unified Host Utilities 7.1 Installation guide. Basically, if you're presenting a LUN through multiple paths, you need something that will tell Windows it's actually the same device and to treat it accordingly. When I put a quick configuration into the IMT for Windows 2016, it says: "For W2K16 MSDSM Configuration, Refer "Windows Host Utilities 7.1 Installation guide"." And I haven't done a Windows FC install in a very long time. Best of luck.
Hi Roy, If this is indeed a direct attached Fibre Channel configuration, why are you looking into iSCSI? While they are both block protocols and your NetApp array can present the same LUN either way, their transport medium is different: Fibre Channel is delivered over a Fibre Channel connection using HBAs on both the initiator and target sides, while iSCSI is encapsulated within Ethernet packets and therefore connected via Ethernet. Assuming you've connected the new host to the storage using the appropriate transceivers and fibre, the LUN must then be mapped to the new initiator and the storage discovered by the host via a rescan. Assuming FC is your intention and not iSCSI, have a look at the FC Express guide here. If what you want is iSCSI, however, there's a similar but different guide for that here. As an aside, I have no idea if your attached images shed any light on the situation as I am unable to open them. Good luck, and kindly update this thread with your results.
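For reference, the storage side of presenting that LUN to the new host looks roughly like this; every name below is a placeholder and the flags are from memory, so treat the express guide as the authority rather than this sketch:
lun igroup create -vserver <svm> -igroup <new_host_ig> -protocol fcp -ostype <ostype> -initiator <host_wwpn>
lun mapping create -vserver <svm> -path /vol/<volume>/<lun> -igroup <new_host_ig>
lun mapping show -vserver <svm> -path /vol/<volume>/<lun>
After that, rescan the disks on the host and the LUN should appear.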
You're likely looking at partitioned drives, so those 10 devices are slices of larger devices. ADP is the most efficient way to carve up your SSDs.
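If you want to confirm that for yourself, something like the following from the clustershell should show it (syntax from memory, so forgive any small differences):
storage disk show -fields container-type
storage disk show -partition-ownership
Partitioned drives show a container type of "shared", and the second command lists which node owns the root and data partitions on each drive.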
I believe the problem here is that your root volume has filled up, as it usually does in the simulator. I just had a look at the deployment instructions for the 9.7 sim and I see they've removed this bit. Go have a look at pages 40 and 41 of the 9.5 guide here and perform those instructions. You'll probably have to reboot as well.
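From memory, the gist of those pages is simply freeing up space in vol0, something along these lines (the node name is a placeholder and the guide itself is the authority):
system node run -node <node_name> snap delete -a -f vol0
system node run -node <node_name> snap sched vol0 0 0 0
system node run -node <node_name> snap reserve vol0 0
system node run -node <node_name> df vol0
The last command just confirms you've actually gotten some free space back.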
Sure looks right to me, and while the intent of the KB is to get you from whole drives to ADP, I don't see why it wouldn't work going from ADP to ADP. Your first response has one grave error in it, however: "The drives the AFF8040 came with were the X439_S16351T6ATD. They are end of support on 10/31/2021 however our support on the array is good until 6/30/2022. That means Netapp will support the old drives." This is not so; support for your X439s will end on October 31, 2021 regardless of the support contract that exists on the controller. I highly recommend you get clarification here before it becomes a problem. I'm going through this with a bunch of my clients running 3TB drives right now.
The following is taken from the technical FAQ: "What notable key features does SaaS Backup for Office 365 provide? Answer: SaaS Backup for Office 365 provides the following key features that are unique compared to a traditional data protection solution:
<trimmed for length>
Search capabilities
Complete job status and monitoring
Detailed user activity logging
Simple, wizard-driven interface"
I'm not sure if that's what you're looking for, but if not, since the data is stored in AWS S3/Azure Blob, you can always build something with Lambda/Functions to do your bidding as well. Start with the free trial and find out.
Hi there, Depending which 1.6TB SSDs you have, EOS dates have already been released:
X365 & X366 - January 31, 2023
X439 & X576 - October 31, 2021
I'm willing to wager a fair bit of money you have the X36x drives though, so you have some time before you need to retire the 1.6s, unless you're retiring them for other, internal reasons. As an aside, the A300 goes EOS on November 30, 2026.
The root volume relocation is pretty easy actually, and the procedure can be found here. I would be more concerned about ensuring that you're still able to take advantage of ADP on the new half shelf. ONTAP has been getting "smarter" with respect to aggregate creation, but even if it does get created in a way you are not happy with, tearing it down and recreating it isn't so bad, since zeroing SSDs takes seconds, not minutes/hours (I'm looking at you, 8TB+ drives). Also, depending which version of ONTAP you deployed the A300 with, it may have the Rapid Zeroing feature, which came with ONTAP 9.4.
I don't believe the new drives will be partitioned automatically, since you won't be adding them to the same RAID group or aggregate. If you went with the automatic approach, this would mean dedicating 3 SSDs per node for root and 1 spare per node, losing 8 of those precious SSDs. If this were my system, I'd be looking into manually partitioning the drives to the same sizes as an A300 with 12x 3.8TB SSDs would ship from the factory, and only then migrating root. If any of this makes you uncomfortable, it's probably time to talk to your local SE or the support centre on how to proceed.
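For what it's worth, the relocation itself boils down to a single advanced-privilege command these days. A sketch only: the node name and disk list are placeholders, and you should follow the linked procedure for the real steps on your version.
set -privilege advanced
system node migrate-root -node <node_name> -disklist <disk1>,<disk2>,<disk3> -raid-type raid_dp
Expect the node to reboot as part of the process, so plan a window per node.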
Hi there, I wrote this post for you. I gather from your username that you're in Germany, and the time difference makes troubleshooting onerous. Also, I will be unavailable for a few days now. Kindly let me know if it helps.
The IP space you're using conflicts with existing IP allocations in your environment; the GUI being presented is the out-of-band management interface of some Cisco server. Also, I see .11, the cluster_mgmt LIF, is hosted on e0a, whereas .10, node_mgmt, is on e0c. Typically under Fusion, both e0a and e0b are used for ClusterNet and the Fusion setting would be "host-only", while e0c/e0d are used for client access and by default set to "NAT" in Fusion. The CLI you're logging into is effectively the serial port of the simulator, so IP addresses don't come into play there. You likely can't ssh to either IP either, because they're probably both assigned to external servers. I'd recommend changing IP spaces completely; try moving to 10.0.0.0/24. In the console, type:
net int modify -vserver ontap-test -lif cluster_mgmt -address 10.0.0.50 -netmask 255.255.255.0
net int modify -vserver ontap-test -lif ontap-test-01_mgmt1 -address 10.0.0.51 -netmask 255.255.255.0
Then in Fusion, make sure e0a/e0b is set to host-only and e0c/e0d is set to NAT (I use Bridged here so that I can access the simulator from other hosts as well, not just my laptop). Try connecting via https to either of the new IP addresses, as well as ssh. I'm not sure why your cluster_mgmt is on e0a, but I assume it's because you set it up as a single-node cluster; I don't do that, specifically because I want the ClusterNet interfaces to still exist for better screenshots. Then you should be able to hit https://10.0.0.50/ and log in.
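Once that's done, a quick sanity check from both sides never hurts; roughly:
network interface show -vserver ontap-test -fields address,curr-port,status-oper
network port show
And from your Mac:
ping 10.0.0.50
ssh admin@10.0.0.50
If the ping works but https doesn't, post what curr-port and status-oper show for the two LIFs.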
Sorry for misunderstanding, but you literally started your original post with "cannot access the GUI". Looking at your screenshot, that's a Cisco web page, not the ONTAP Cluster login. Please provide the previously requested output, or find out what IP the cluster_mgmt LIF is on, and try that instead.