<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Downgrade or revert new AFF A300 from 9.6 to 9.5 in ONTAP Discussions</title>
    <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152526#M33968</link>
    <description>&lt;P&gt;If these are new systems, it is faster to simply install another version. Use special boot menu option 9a to remove existing partitions, then option 7 to install the desired version (you need an HTTP server to download ONTAP from), and then option 4 or 9b to initialize. After that, join the nodes to the existing cluster.&lt;/P&gt;</description>
    <pubDate>Sat, 23 Nov 2019 20:48:23 GMT</pubDate>
    <dc:creator>aborzenkov</dc:creator>
    <dc:date>2019-11-23T20:48:23Z</dc:date>
    <item>
      <title>Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152525#M33967</link>
      <description>&lt;P&gt;We have 2 new AFF A300 systems that came preinstalled with ONTAP 9.6P2. Our existing clusters of FAS8040 are running 9.5P5. We'd like to downgrade the A300s to ONTAP 9.5. What are the proper procedures for downgrading the nodes? Our A300s are connected to the cluster switches (not part of the cluster), but should we connect the A300 nodes together when downgrading ONTAP? Our goal is to downgrade the A300s to ONTAP 9.5 and then initialize with ADPv2.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 12:07:59 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152525#M33967</guid>
      <dc:creator>kelwin</dc:creator>
      <dc:date>2025-06-04T12:07:59Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152526#M33968</link>
      <description>&lt;P&gt;If these are new systems, it is faster to simply install another version. Use special boot menu option 9a to remove existing partitions, then option 7 to install the desired version (you need an HTTP server to download ONTAP from), and then option 4 or 9b to initialize. After that, join the nodes to the existing cluster.&lt;/P&gt;</description>
      <pubDate>Sat, 23 Nov 2019 20:48:23 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152526#M33968</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2019-11-23T20:48:23Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152586#M33981</link>
      <description>&lt;P&gt;You could do this:&lt;/P&gt;
&lt;P&gt;1. Boot to Special Boot Menu&lt;/P&gt;
&lt;P&gt;2. Option #7&lt;/P&gt;
&lt;P&gt;3. Choose your interface (like e0M). TMAC's hack: DO NOT say yes to the reboot; choose N&lt;/P&gt;
&lt;P&gt;4. Option&amp;nbsp;#7 (NEED a web server)&lt;/P&gt;
&lt;P&gt;5. e0M is now selected (without a reboot)&lt;/P&gt;
&lt;P&gt;6. Enter info (IP of interface, netmask, gateway if needed)&lt;/P&gt;
&lt;P&gt;7. Enter URL of ONTAP package (same package you use to upgrade ONTAP)&lt;/P&gt;
&lt;P&gt;8. Let it install and do its thing.&lt;/P&gt;
&lt;P&gt;9. During reboot, be sure to catch the Special Boot Menu&lt;/P&gt;
&lt;P&gt;10. Choose Option 9&lt;/P&gt;
&lt;P&gt;11. Get BOTH controllers to this point&lt;/P&gt;
&lt;P&gt;12. Choose Option 9a on Node 1. Let it finish&lt;/P&gt;
&lt;P&gt;13. Choose Option 9a on Node 2. Let it finish&lt;/P&gt;
&lt;P&gt;14. Choose Option 9a on Node 1. Let it finish (you should see all drives listed)&lt;/P&gt;
&lt;P&gt;15. Choose Option 9a on Node 2. Let it finish (you should see all drives listed)&lt;/P&gt;
&lt;P&gt;16. Choose Option 9b on Node 1. Let it reboot, partition and start up ONTAP cluster setup.&lt;/P&gt;
&lt;P&gt;17. Choose Option 9b on Node 2. It will reboot, partition and get to the ONTAP cluster Setup.&lt;/P&gt;
</description>
      <pubDate>Tue, 26 Nov 2019 13:29:24 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152586#M33981</guid>
      <dc:creator>TMACMD</dc:creator>
      <dc:date>2019-11-26T13:29:24Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152589#M33982</link>
      <description>&lt;P&gt;Although you can downgrade your new A300s to 9.5, my recommendation would be to upgrade your 8040s to 9.6, which is already up to P4. There are over 10,000 filers in the field running 9.6, and it's a stable and supported release. (Even-numbered releases used to be supported for only one year, but that changed, so they now have the same support lifetime as odd-numbered releases.)&lt;/P&gt;</description>
      <pubDate>Tue, 26 Nov 2019 14:04:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152589#M33982</guid>
      <dc:creator>EWILTS_SAS</dc:creator>
      <dc:date>2019-11-26T14:04:56Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152605#M33986</link>
      <description>&lt;P&gt;We've downgraded successfully, but now we cannot join the cluster because of two different ONTAP images. The current cluster has image1 9.5P5 and image2 9.4P4; the new A300 nodes have image1 9.5P5 and image2 9.6P2. Can we set both images the same across all nodes in the cluster without causing issues?&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 02:41:38 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152605#M33986</guid>
      <dc:creator>kelwin</dc:creator>
      <dc:date>2019-11-27T02:41:38Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152608#M33989</link>
      <description>&lt;P&gt;I doubt very much that the problem is the different images. What version is currently active on the AFF? Please show the actual console log of the join attempt, including the error you get.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 03:12:23 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152608#M33989</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2019-11-27T03:12:23Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152611#M33990</link>
      <description>&lt;P&gt;Enter the IP address of an interface on the private cluster network from the&lt;/P&gt;
&lt;P&gt;cluster you want to join: xxx.xxx.xxx.xxx&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Joining cluster at address xxx.xxx.xxx.xxx&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;System checks ... Error: Cluster join operation cannot be performed at this time: All nodes in cluster must be at the same ONTAP version before node can be joined.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Resolve the issue, then try the command again.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Restarting Cluster Setup&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&amp;nbsp;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;CURRENT CLUSTER&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Last login time: 11/26/2019 11:04:53&lt;/P&gt;
&lt;P&gt;cluster1::&amp;gt; version&lt;/P&gt;
&lt;P&gt;NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;A300 NODE A&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;::&amp;gt; version&lt;/P&gt;
&lt;P&gt;NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice: Showing the version for the local node; the cluster-wide version could&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; not be determined.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 11:33:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152611#M33990</guid>
      <dc:creator>kelwin</dc:creator>
      <dc:date>2019-11-27T11:33:56Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152618#M33992</link>
      <description>&lt;P&gt;On the cluster, what does this show:&lt;/P&gt;
&lt;P&gt;"system image show"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;How about showing the full output of the serial session up to the failure on the A300?&lt;/P&gt;
&lt;P&gt;Maybe there is something that was missed?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;How did you "downgrade" to 9.5 on the A300s?&lt;/P&gt;
&lt;P&gt;If you did not wipe (like I indicated earlier), that may be causing issues.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 17:05:20 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152618#M33992</guid>
      <dc:creator>TMACMD</dc:creator>
      <dc:date>2019-11-27T17:05:20Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152619#M33993</link>
      <description>&lt;P&gt;Wiped per instructions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;cluster1::*&amp;gt; system image show&lt;/P&gt;
&lt;PRE&gt;                 Is      Is                                Install
Node     Image   Default Current Version                   Date
-------- ------- ------- ------- ------------------------- -------------------
node-01
         image1  true    true    9.5P5                     7/9/2019 21:08:16
         image2  false   false   9.5P4                     6/12/2019 01:43:04
node-02
         image1  true    true    9.5P5                     7/9/2019 21:08:24
         image2  false   false   9.5P4                     6/12/2019 01:43:14
4 entries were displayed.&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;====================================================&lt;/P&gt;
&lt;P&gt;Selection (9a-9e)?: 9b&lt;BR /&gt;9b&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;########## WARNING ##########&lt;/P&gt;
&lt;P&gt;All configuration data will be deleted and the node will be&lt;BR /&gt;initialized with partitioned disks. Existing disk partitions must&lt;BR /&gt;be removed from all disks (9a) attached to this node and&lt;BR /&gt;its HA partner (and DR/DR-AUX partner nodes if applicable).&lt;BR /&gt;The HA partner (and DR/DR-AUX partner nodes if applicable) must&lt;BR /&gt;be waiting at the boot menu or already initialized with partitioned&lt;BR /&gt;disks (9b).&lt;BR /&gt;Do you still want to continue (yes/no)? yes&lt;BR /&gt;yes&lt;BR /&gt;AdpInit: This system will now reboot to perform wipeclean.&lt;BR /&gt;bootarg.bootmenu.selection is |wipeconfig|&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorReadingOwnership:notice]: error 16 (disk does no t exist) while reading ownership on disk 0a.00.21 (S/N S3SGNA0M801601)&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorDuringIO:error]: error 16 (disk does not exist) on disk 0a.00.23 (S/N S3SGNA0M801789) while reading individual disk ownership area&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorReadingOwnership:notice]: error 16 (disk does no t exist) while reading ownership on disk 0a.00.19 (S/N S3SGNA0M704669)&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorReadingOwnership:notice]: error 16 (disk does no t exist) while reading ownership on disk 0d.00.22 (S/N S3SGNA0M802615)&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorDuringIO:error]: error 16 (disk does not exist) on disk 0a.00.19 (S/N S3SGNA0M704669) while reading individual disk ownership area&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorDuringIO:error]: error 16 (disk does not exist) on disk 0d.00.22 (S/N S3SGNA0M802615) while reading individual disk ownership area&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorReadingOwnership:notice]: error 16 (disk does no t exist) while reading ownership on disk 0d.00.20 (S/N S3SGNA0M802540)&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorDuringIO:error]: error 16 (disk does not exist) on disk 
0d.00.20 (S/N S3SGNA0M802540) while reading individual disk ownership area&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorReadingOwnership:notice]: error 16 (disk does no t exist) while reading ownership on disk 0d.00.18 (S/N S3SGNA0M704671)&lt;BR /&gt;Nov 27 10:29:46 [localhost:diskown.errorDuringIO:error]: error 16 (disk does not exist) on disk 0d.00.18 (S/N S3SGNA0M704671) while reading individual disk ownership area&lt;BR /&gt;.&lt;BR /&gt;Terminated&lt;BR /&gt;Skipped backing up /var file system to boot device.&lt;BR /&gt;Uptime: 15m0s&lt;BR /&gt;System rebooting...&lt;BR /&gt;BIOS Version: 11.5&lt;BR /&gt;Portions Copyright (C) 2014-2018 NetApp, Inc. All Rights Reserved.&lt;/P&gt;
&lt;P&gt;Initializing System Memory ...&lt;BR /&gt;Loading Device Drivers ...&lt;BR /&gt;Configuring Devices ...&lt;/P&gt;
&lt;P&gt;CPU = 1 Processor(s) Detected.&lt;BR /&gt;Intel(R) Xeon(R) CPU D-1587 @ 1.70GHz (CPU 0)&lt;BR /&gt;CPUID: 0x00050664. Cores per Processor = 16&lt;BR /&gt;131072 MB System RAM Installed.&lt;BR /&gt;SATA (AHCI) Device: ATP SATA III mSATA AF120GSMHI-NT2&lt;/P&gt;
&lt;P&gt;Boot Loader version 6.0.6&lt;BR /&gt;Copyright (C) 2000-2003 Broadcom Corporation.&lt;BR /&gt;Portions Copyright (C) 2002-2018 NetApp, Inc. All Rights Reserved.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Starting AUTOBOOT press Ctrl-C to abort...&lt;BR /&gt;Loading X86_64/freebsd/image2/kernel:0x200000/15719336 0x10fdba8/13881768 Entry at 0xfff fffff802dc4b0&lt;BR /&gt;Loading X86_64/freebsd/image2/platform.ko:0x1e3b000/4076848 0x221e530/586840&lt;BR /&gt;Starting program at 0xffffffff802dc4b0&lt;BR /&gt;NetApp Data ONTAP 9.5P5&lt;BR /&gt;IPsec: Initialized Security Association Processing.&lt;BR /&gt;Copyright (C) 1992-2019 NetApp.&lt;BR /&gt;All rights reserved.&lt;BR /&gt;*******************************&lt;BR /&gt;* *&lt;BR /&gt;* Press Ctrl-C for Boot Menu. *&lt;BR /&gt;* *&lt;BR /&gt;*******************************&lt;BR /&gt;cryptomod_fips: Executing Crypto FIPS Self Tests.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'CPU COMPATIBILITY' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 ECB, AES-256 ECB' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 CBC, AES-256 CBC' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 GCM, AES-256 GCM' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 CCM' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'CTR_DRBG' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'SHA1, SHA256, SHA512' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'HMAC-SHA1, HMAC-SHA256, HMAC-SHA512' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'PBKDF2' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-XTS 128, AES-XTS 256' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'Self-integrity' passed.&lt;BR /&gt;Wed Nov 27 10:30:35 2019 [nv2flash.restage.progress:NOTICE]: ReStage is not needed becau se the flash has no data.&lt;BR /&gt;Wipe filer procedure requested.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Nov 27 10:30:53 Power outage protection flash de-staging: 19 cycles&lt;BR /&gt;***OS2SP configured successfully***&lt;BR /&gt;sk_allocate_memory: large allocation, bzero 7782 MB in 987 ms&lt;BR /&gt;cryptomod_fips: Executing Crypto FIPS Self Tests.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'CPU COMPATIBILITY' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 ECB, AES-256 ECB' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 CBC, AES-256 CBC' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 GCM, AES-256 GCM' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-128 CCM' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'CTR_DRBG' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'SHA1, SHA256, SHA512' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'HMAC-SHA1, HMAC-SHA256, HMAC-SHA512' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'PBKDF2' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'AES-XTS 128, AES-XTS 256' passed.&lt;BR /&gt;cryptomod_fips: Crypto FIPS self-test: 'Self-integrity' passed.&lt;BR /&gt;9b&lt;BR /&gt;AdpInit: Root will be created with 6 disks with configuration as (3d+2p+1s) using disks of type (SSD).&lt;BR /&gt;bootarg.bootmenu.selection is |4a|&lt;BR /&gt;AdpInit: System will now perform initialization using option 4a&lt;BR /&gt;BOOTMGR: The system has 0 disks assigned whereas it needs 6 to boot, will try to assign the required number.&lt;BR /&gt;sanown_assign_X_disks: init boot assign with half shelf policy&lt;BR /&gt;Nov 27 10:31:53 [localhost:diskown.hlfShlf.assignStatus:notice]: Half shelf based automa tic disk assignment is "enabled".&lt;BR /&gt;sanown_split_shelf_lock_disk_op: msg success op: RESERVE lock disk: 5002538B:09759850:00 0000:00000000:00000000:00000000 status: 0&lt;BR /&gt;sanown_split_shelf_lock_disk_op: msg success op: RELEASE lock disk: 5002538B:09759850:00 
000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000 status: 0&lt;BR /&gt;sanown_dump_split_shelf_info: Time: 30502 Shelf count:1&lt;BR /&gt;sanown_dump_split_shelf_info: Shelf: 0 is_local: 1 is_internal: 0 flags 2c max_slot: 24 type: 0&lt;BR /&gt;sanown_dump_split_shelf_info: Shelf: 0 section: 0 owner_id: 538117741 state: 1&lt;BR /&gt;sanown_dump_split_shelf_info: Shelf: 0 section: 1 owner_id: 538117925 state: 1&lt;BR /&gt;sanown_dump_split_shelf_info: Shelf: 0 Lock index: 0 Lock valid: 1 Lock slot: 0 Lock dis k: 5002538B:09759850:00000000:00000000:00000000:00000000:00000000:00000000:00000000:0000 0000&lt;BR /&gt;sanown_dump_split_shelf_info: Shelf: 0 Lock index: 1 Lock valid: 1 Lock slot: 1 Lock dis k: 5002538B:0983CB20:00000000:00000000:00000000:00000000:00000000:00000000:00000000:0000 0000&lt;BR /&gt;sanown_assign_X_disks: assign disks from my unowned local site pool0 loop&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0a.00.2&lt;BR /&gt;Cannot do remote rescan. Use 'run local disk show' on the console of ?? for it to scan t he newly assigned disks&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0a.00.4&lt;BR /&gt;Nov 27 10:31:53 [localhost:diskown.RescanMessageFailed:error]: Could not send rescan mes sage to ??.&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0d.00.1&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0a.00.0&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0d.00.3&lt;BR /&gt;sanown_assign_disk_helper: Assigned disk 0d.00.5&lt;BR /&gt;BOOTMGR: already_assigned=0, min_to_boot=6, num_assigned=6&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0a.00.4 Shelf 0 Bay 4 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801738] UID [5002538B:098358D0:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde50996f7252d0).&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0a.00.2 Shelf 0 Bay 2 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801787] UID [5002538B:09835BE0:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde50993a9b0ed9).&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0d.00.5 Shelf 0 Bay 5 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802536] UID [5002538B:0983C540:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde509919ebe69e).&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0d.00.3 Shelf 0 Bay 3 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802516] UID [5002538B:0983C400:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde50994e719c5d).&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0a.00.0 Shelf 0 Bay 0 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M704674] UID [5002538B:09759850:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde509904d0c5e7).&lt;BR /&gt;Nov 27 10:31:53 [localhost:raid.disk.fast.zero.done:notice]: Disk 0d.00.1 Shelf 0 Bay 1 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802630] UID [5002538B:0983CB20:00000000:00 000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0 x5dde50992670c329).&lt;BR /&gt;Nov 27 10:31:54 [localhost:raid.autoPart.start:notice]: System has started auto-partitio ning 6 disks.&lt;BR /&gt;Nov 27 10:31:55 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 
0a.00.0 Shelf 0 Bay 0 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M704674] UID [50025 38B:09759850:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:31:56 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 0d.00.1 Shelf 0 Bay 1 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802630] UID [50025 38B:0983CB20:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:31:58 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 0a.00.2 Shelf 0 Bay 2 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801787] UID [50025 38B:09835BE0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:31:59 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 0d.00.3 Shelf 0 Bay 3 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802516] UID [50025 38B:0983C400:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:32:01 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 0a.00.4 Shelf 0 Bay 4 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801738] UID [50025 38B:098358D0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.partition.disk:notice]: Disk partition successful on Dis k 0d.00.5 Shelf 0 Bay 5 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802536] UID [50025 38B:0983C540:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], p artitions 
created 3, partition sizes specified 1, partition spec summary [3]=37660224.&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.autoPart.done:notice]: Successfully auto-partitioned 6 o f 6 disks.&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0 /rg0/0a.00.4P3 Shelf 0 Bay 4 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801738NP003] UID [6002538B:098358D0:500A0981:00000003:00000000:00000000:00000000:00000000:00000000:00 000000] to aggregate aggr0 has completed successfully&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0 /rg0/0d.00.3P3 Shelf 0 Bay 3 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802516NP003] UID [6002538B:0983C400:500A0981:00000003:00000000:00000000:00000000:00000000:00000000:00 000000] to aggregate aggr0 has completed successfully&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0 /rg0/0a.00.2P3 Shelf 0 Bay 2 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M801787NP003] UID [6002538B:09835BE0:500A0981:00000003:00000000:00000000:00000000:00000000:00000000:00 000000] to aggregate aggr0 has completed successfully&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0 /rg0/0d.00.1P3 Shelf 0 Bay 1 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M802630NP003] UID [6002538B:0983CB20:500A0981:00000003:00000000:00000000:00000000:00000000:00000000:00 000000] to aggregate aggr0 has completed successfully&lt;BR /&gt;Nov 27 10:32:02 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0 /rg0/0a.00.0P3 Shelf 0 Bay 0 [NETAPP X357_S16433T8ATE NA53] S/N [S3SGNA0M704674NP003] UID [6002538B:09759850:500A0981:00000003:00000000:00000000:00000000:00000000:00000000:00 000000] to aggregate aggr0 has completed successfully&lt;BR /&gt;Nov 27 10:32:02 [localhost:wafl.data.compaction.event:notice]: WAFL volume data compacti on state changed in aggregate "aggr0" to "enabled".&lt;BR /&gt;Nov 
27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=1,0 volumes with a total of tot al=35 incoming=10 dirty buffers took 12ms with longest CP phases being CP_P2_FLUSH=7, CP _P1_CLEAN=1, CP_PRE_P0=1 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=1,0 volumes with a total of tot al=29 incoming=0 dirty buffers took 14ms with longest CP phases being CP_P2_FLUSH=11, CP _P5_FINISH=0, CP_P4_FINISH=0 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=1,0 volumes with a total of tot al=51 incoming=10 dirty buffers took 15ms with longest CP phases being CP_P2_FLUSH=10, C P_P1_CLEAN=1, CP_PRE_P0=1 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=2,1 volumes with a total of tot al=105 incoming=21 dirty buffers took 15ms with longest CP phases being CP_P2_FLUSH=5, C P_PRE_P0=2, CP_P3A_VOLINFO=1 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=2,1 volumes with a total of tot al=90 incoming=5 dirty buffers took 20ms with longest CP phases being CP_P2_FLUSH=15, CP _P5_FINISH=0, CP_P4_FINISH=0 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=2,1 volumes with a total of tot al=89 incoming=3 dirty buffers took 8ms with longest CP phases being CP_P2_FLUSH=4, CP_P 5_FINISH=0, CP_P4_FINISH=0 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:03 
[localhost:cf.fm.notkoverClusterDisable:error]: Failover monitor: takeov er disabled (restart)&lt;BR /&gt;Nov 27 10:32:03 [localhost:tar.csum.notFound:notice]: Stored checksum file does not exis t, extracting local://mnt/prestage/mroot.tgz.&lt;BR /&gt;Nov 27 10:32:03 [localhost:tar.csum.mismatch:notice]: Stored checksum 0 does not match c alculated checksum 3629085172, extracting local://mnt/prestage/mroot.tgz.&lt;BR /&gt;Nov 27 10:32:03 [localhost:cf.fsm.takeoverOfPartnerDisabled:error]: Failover monitor: ta keover of partner disabled (Controller Failover takeover disabled).&lt;BR /&gt;Nov 27 10:32:04 [localhost:wafl.transition.cp.completed:notice]: Transition CP with reas on none, 00000000 for replaying=0,0 unmounting=0,0 total=2,1 volumes with a total of tot al=2527 incoming=2382 dirty buffers took 37ms with longest CP phases being CP_P1_CLEAN=1 5, CP_P2_FLUSH=3, CP_P2V_INO=2 on aggregate aggr0.&lt;BR /&gt;Nov 27 10:32:05 [localhost:tar.csum.notFound:notice]: Stored checksum file does not exis t, extracting local://mnt/prestage/pmroot.tgz.&lt;BR /&gt;Nov 27 10:32:05 [localhost:tar.csum.mismatch:notice]: Stored checksum 0 does not match c alculated checksum 1569079177, extracting local://mnt/prestage/pmroot.tgz.&lt;BR /&gt;Nov 27 10:32:06 [localhost:kern.syslog.msg:notice]: Registry is being upgraded to improv e storing of local changes.&lt;BR /&gt;Nov 27 10:32:06 [localhost:kern.syslog.msg:notice]: Registry upgrade successful.&lt;BR /&gt;Nov 27 10:32:06 [localhost:kern.syslog.msg:notice]: domain xing mode: off, domain xing i nterrupt: false&lt;BR /&gt;Nov 27 10:32:06 [localhost:clam.invalid.config:error]: Local node (name=unknown, id=0) i s in an invalid configuration for providing CLAM functionality. 
CLAM cannot determine th e identity of the HA partner.&lt;BR /&gt;Kernel thread "perfmon poller thre" (pid 4711) exited prematurely.&lt;BR /&gt;System initialization has completed successfully.&lt;BR /&gt;Nov 27 10:32:07 [localhost:scsitarget.hwpfct.linkUp:notice]: Link up on Fibre Channel ta rget adapter 1b.&lt;BR /&gt;Nov 27 10:32:07 [localhost:scsitarget.hwpfct.linkUp:notice]: Link up on Fibre Channel ta rget adapter 1a.&lt;BR /&gt;Nov 27 10:32:07 [localhost:scsitarget.hwpfct.linkUp:notice]: Link up on Fibre Channel ta rget adapter 1d.&lt;BR /&gt;Nov 27 10:32:07 [localhost:scsitarget.hwpfct.linkUp:notice]: Link up on Fibre Channel ta rget adapter 1c.&lt;BR /&gt;Occupied cpu socket mask is 0x1&lt;BR /&gt;wrote key file "/tmp/rndc.key"&lt;BR /&gt;Nov 27 10:33:00 [localhost:monitor.globalStatus.critical:EMERGENCY]: Controller failover partner unknown. Controller failover not possible.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Welcome to the cluster setup wizard.&lt;/P&gt;
&lt;P&gt;You can enter the following commands at any time:&lt;BR /&gt;"help" or "?" - if you want to have a question clarified,&lt;BR /&gt;"back" - if you want to change previously answered questions, and&lt;BR /&gt;"exit" or "quit" - if you want to quit the cluster setup wizard.&lt;BR /&gt;Any changes you made before quitting will be saved.&lt;/P&gt;
&lt;P&gt;You can return to cluster setup at any time by typing "cluster setup".&lt;BR /&gt;To accept a default or omit a question, do not enter a value.&lt;/P&gt;
&lt;P&gt;This system will send event messages and periodic reports to NetApp Technical&lt;BR /&gt;Support. To disable this feature, enter&lt;BR /&gt;autosupport modify -support disable&lt;BR /&gt;within 24 hours.&lt;/P&gt;
&lt;P&gt;Enabling AutoSupport can significantly speed problem determination and&lt;BR /&gt;resolution should a problem occur on your system.&lt;BR /&gt;For further information on AutoSupport, see:&lt;BR /&gt;&lt;A href="http://support.netapp.com/autosupport/" target="_blank"&gt;http://support.netapp.com/autosupport/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Type yes to confirm and continue {yes}: yes&lt;/P&gt;
&lt;P&gt;Enter the node management interface port [e0M]:&lt;BR /&gt;Enter the node management interface IP address: xxx.xxx.xxx.xxx&lt;/P&gt;
&lt;P&gt;Enter the node management interface netmask: xxx.xxx.xxx.xxx&lt;BR /&gt;Enter the node management interface default gateway: xxx.xxx.xxx.xxx&lt;BR /&gt;A node management interface on port e0M with IP address xxx.xxx.xxx.xxx has been created.&lt;/P&gt;
&lt;P&gt;Use your web browser to complete cluster setup by accessing &lt;A href="https://xxx.xxx.xxx.xxx" target="_blank"&gt;https://xxx.xxx.xxx.xxx&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Otherwise, press Enter to complete cluster setup using the command line&lt;BR /&gt;interface:&lt;BR /&gt;Exiting the cluster setup wizard. Any changes you made have been saved.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;The cluster administrator's account (username "admin") password is set to the system default.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Warning: You have exited the cluster setup wizard before completing all&lt;BR /&gt;of the tasks. The cluster is not configured. You can complete cluster setup by typing&lt;BR /&gt;"cluster setup" in the command line interface.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Wed Nov 27 10:38:54 UTC 2019&lt;BR /&gt;login: admin&lt;BR /&gt;******************************************************&lt;BR /&gt;* This is a serial console session. Output from this *&lt;BR /&gt;* session is mirrored on the SP console session. *&lt;BR /&gt;******************************************************&lt;BR /&gt;::&amp;gt; hostname&lt;BR /&gt;localhost&lt;/P&gt;
&lt;P&gt;::&amp;gt; exit&lt;BR /&gt;Goodbye&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;login:&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 17:25:15 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152619#M33993</guid>
      <dc:creator>kelwin</dc:creator>
      <dc:date>2019-11-27T17:25:15Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152620#M33994</link>
      <description>&lt;P&gt;Thats a great start.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What about the rest?&lt;/P&gt;
&lt;P&gt;Run the cluster setup from the CLI and show that output also.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am wondering if the cluster is not communicating properly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From the cluster (these are all auto-assigned 169 addresses, no real need to mask):&lt;/P&gt;
&lt;P&gt;net port show -ipspace Cluster&lt;/P&gt;
&lt;P&gt;net int show -vserver Cluster&lt;/P&gt;
&lt;P&gt;net device-discovery show -ipspace Cluster&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From the A300s CLI:&lt;/P&gt;
&lt;P&gt;net port show -port e0a|e0b&lt;/P&gt;
&lt;P&gt;net device-discovery show -port e0a|e0b&lt;/P&gt;
&lt;P&gt;net i&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 17:38:05 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152620#M33994</guid>
      <dc:creator>TMACMD</dc:creator>
      <dc:date>2019-11-27T17:38:05Z</dc:date>
    </item>
    <item>
      <title>Re: Downgrade or revert new AFF A300 from 9.6 to 9.5</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152741#M34012</link>
      <description>&lt;P&gt;FYI to those interested, we contacted support and after some digging they fouind a stale smf table entry blocking new nodes trying to joing the cluster.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Although Nodes 3 and 4 had the same version of ONTAP as the existing cluster, they would not join because there was a stale entry in the smf tables.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;xxxxxxx::*&amp;gt; debug smdb table cluster_version_replicated show&lt;BR /&gt;uuid generation major minor version-string date ontapi-major ontapi-minor is-image-same state&lt;BR /&gt;------------------------------------ ---------- ----- ----- -------------------------------------------------- ------------------------ ------------ ------------ ------------- -----&lt;BR /&gt;3101b6df-7cec-11e5-8e37-00a0985f3fc6 8 3 0 NetApp Release 8.3P1: Tue Apr 07 16:05:35 PDT 2015 Tue Apr 07 12:05:35 2015 1 30 true none&lt;BR /&gt;57c64277-7cec-11e5-8e37-00a0985f3fc6 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none&lt;BR /&gt;833939e5-7cd5-11e5-b363-396932647d67 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none&lt;BR /&gt;f4270dd4-7cd3-11e5-a735-a570cc7c464a 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none&lt;BR /&gt;4 entries were displayed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There should be one entry for each node in the cluster plus one entry for the cluster itself.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We removed the 8.3P1 entry:&lt;/P&gt;
&lt;P&gt;::*&amp;gt; debug smdb table cluster_version_replicated delete -uuid 3101b6df-7cec-11e5-8e37-00a0985f3fc6&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note: Do not edit the smf tables without guidance from NetApp Support.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Dec 2019 18:36:11 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Downgrade-or-revert-new-AFF-A300-from-9-6-to-9-5/m-p/152741#M34012</guid>
      <dc:creator>kelwin</dc:creator>
      <dc:date>2019-12-03T18:36:11Z</dc:date>
    </item>
  </channel>
</rss>

