Data Backup and Recovery

SDU issues on RHEL 7.2

bDOT

I am running RHEL 7.2 and have installed Unified Host Utilities 7.1 and SDU 5.3.1, as well as sg3_utils and sg3_utils_libs. We are using iSCSI to present a LUN from an 8.1.4 7-mode filer. The LUN is presented and multipath has been configured.
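For completeness, the iSCSI and multipath setup came down to roughly the following (the filer's portal address is left as a placeholder):

```
[root@rheltest ~]# iscsiadm -m discovery -t sendtargets -p <filer-portal-ip>
[root@rheltest ~]# iscsiadm -m node -l
[root@rheltest ~]# mpathconf --enable --with_multipathd y
```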

 

[root@rheltest ~]# sanlun lun show
controller(7mode/E-Series)/                                  device          host                  lun
vserver(cDOT/FlashRay)        lun-pathname                   filename        adapter    protocol   size    product
---------------------------------------------------------------------------------------------------------------
bradtest-01                   /vol/testvol/lun               /dev/sde        host5      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdd        host6      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdb        host3      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdc        host4      iSCSI      5g      7DOT

[root@rheltest ~]# multipath -ll
360a98000427045777a244a2d555a4535 dm-0 NETAPP  ,LUN
size=5.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 3:0:0:0 sdb 8:16 active ready running
  |- 4:0:0:0 sdc 8:32 active ready running
  |- 5:0:0:0 sde 8:64 active ready running
  `- 6:0:0:0 sdd 8:48 active ready running
[root@rheltest ~]# cat /etc/multipath.conf
# All data under blacklist must be specific to your system.
blacklist {
    wwid 36000c294abf87d45ac7e123f864a9c5b
}

[root@rheltest ~]# rpm -qa | egrep "netapp|sg3|iscsi|multipath"
iscsi-initiator-utils-iscsiuio-6.2.0.873-35.el7.x86_64
netapp_linux_unified_host_utilities-7-1.x86_64
sg3_utils-libs-1.37-9.el7.x86_64
iscsi-initiator-utils-6.2.0.873-35.el7.x86_64
device-mapper-multipath-libs-0.4.9-99.el7.x86_64
device-mapper-multipath-0.4.9-99.el7.x86_64
netapp.snapdrive-5.3.1-1.x86_64
sg3_utils-1.37-9.el7.x86_64

So far so good. SnapDrive has been configured as follows:

 

[root@rheltest ~]# grep "^[^#;]" /opt/NetApp/snapdrive/snapdrive.conf
default-transport="iscsi" #Transport type to use for storage provisioning, when a decision is needed
fstype="ext3" #File system to use when more than one file system is available
multipathing-type="NativeMPIO" #Multipathing software to use when more than one multipathing solution is available. Possible values are 'NativeMPIO' or 'DMP' or 'none'
rbac-method="native" #Role Based Access Control(RBAC) methods
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
vmtype="lvm" #Volume manager to use when more than one volume manager is available

[root@rheltest ~]# snapdrive config list
username    appliance name   appliance type
----------------------------------------------
root        bradtest-01      StorageSystem

[root@rheltest ~]# snapdrive storage show -all

 WARNING: This operation can take several minutes
          based on the configuration.
0001-185 Command error: storage show failed: no NETAPP devices to show or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use http for storage system communication and restarting snapdrive daemon.
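For what it's worth, the http fallback the error message suggests amounts to flipping one setting in snapdrive.conf and restarting the daemon. A minimal sketch, wrapped as a function here, assuming the default install path /opt/NetApp/snapdrive/snapdrive.conf:

```shell
# Flip SDU from https to http for storage system communication.
# The snapdrive daemon must be restarted afterwards
# (service snapdrived restart) for the change to take effect.
use_http_to_filer() {
    conf="$1"
    sed -i 's/use-https-to-filer="on"/use-https-to-filer="off"/' "$conf"
}
```

Usage would be `use_http_to_filer /opt/NetApp/snapdrive/snapdrive.conf` followed by `service snapdrived restart`.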

I get the following on the filer log when I run "snapdrive storage show -all" from the host:

Fri Dec  2 10:44:25 MST [bradtest-01:app.log.info:info]: rheltest.local: snapdrive 5.3.1 for UNIX: (3) general: Connected Luns=0, DGs=0, HVs=0, FS=0, OS_Name=Linux, Platform=Red Hat Enterprise Linux Server 7.2 (Maipo), Kernel_Version=RHCK 3.10.0-327.el7.x86_64, Protocol=iscsi, File_System=ext3, Multipath_Type=none, Host_VolumeManager=lvm, Host_Cluster=no, Host_Virtualization=yes, Virtualization_Flavor=VMware, RBAC_Method=native, Protection_Usage=none

 

When starting SDU, I get the all-too-common storage stack error.

 

[root@rheltest ~]# service snapdrived restart
Stopping snapdrive daemon: Successfully stopped daemon

Starting snapdrive daemon: WARNING!!! Unable to find a SAN storage stack. Please verify that the appropriate transport protocol, volume manager, file system and multipathing type are installed and configured in the system. If NFS is being used, this warning message can be ignored.
Successfully started daemon

And get the following when running sdconfcheck:

 

[root@rheltest ~]# sdconfcheck import -file /tmp/confcheck_data.tar.gz

The data files have been successfully imported from the specified source.
[root@rheltest ~]# sdconfcheck check

NOTE: SnapDrive Configuration Checker is using the data file version  v12052013
  Please make sure that you are using the latest version.
  Refer to the SnapDrive for Unix Installation and Administration Guide for more details.

Detected Intel/AMD x64 Architecture
Detected Linux OS
Detected Software iSCSI on Linux
Detected   Ext3 File System

Did not find any supported Volume managers.
Detected   Linux Native MPIO

Did not find any supported cluster solutions.

Did not find any supported HU tool kits.
sdconfcheck: ../../../messageinfo/src/messages.cpp:145: void messageinfo::messageParse::parseRecord(std::string&, std::string&): Assertion `false' failed.
Aborted

Any thoughts? I've been beating my head against the wall on this for hours. Everything aligns with the IMT, and this is a brand-new minimal RHEL install, so it seems this SDU install should be textbook.

1 ACCEPTED SOLUTION

bDOT

Well, I realized I had installed version 7.1 of the Unified Host Utilities, while the IMT calls for 7.0 with RHEL 7.2. I uninstalled 7.1 and installed 7.0, but still had issues. I then decided to start over with a fresh RHEL install, as I was no longer confident in this one given all the different RPMs I'd installed and uninstalled during troubleshooting. Lo and behold, everything is working perfectly on the new RHEL install. I don't recall doing anything differently this time around, but whatever the case, it's now working as expected.
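For anyone who hits the same thing, the version swap itself was just the usual rpm dance (the 7.0 package file name below is assumed; check it against your actual download):

```
[root@rheltest ~]# rpm -e netapp_linux_unified_host_utilities
[root@rheltest ~]# rpm -ivh netapp_linux_unified_host_utilities-7-0.x86_64.rpm
```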


3 REPLIES

ekashpureff

 

Brad -

 

Looks like an authentication issue.

 

I noted :

Command error: storage show failed: no NETAPP devices to show or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use http for storage system communication and restarting snapdrive daemon.

 

The LUN wasn't created with SnapDrive to start with?

 


I hope this response has been helpful to you.

 

At your service,

 

Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, FastLane US http://www.fastlaneus.com/
(P.S. I appreciate 'kudos' on any helpful posts.)

bDOT

Eugene, I'm thinking that's a generic error. I removed the credentials via "snapdrive config delete" and re-added them. I first tried with invalid credentials, to see if I would get any error. No error, so I removed and re-added with valid credentials and get the following printed to the console on the filer:

 

Fri Dec  2 17:02:40 MST [bradtest-01:app.log.notice:notice]: rheltest.local: snapdrive 5.3.1 for UNIX: (3) bradtest-01 configured: OS_Name=Linux, Platform=Red Hat Enterprise Linux Server 7.2 (Maipo), Kernel_Version=RHCK 3.10.0-327.el7.x86_64, Protocol=iscsi, File_System=ext3, Multipath_Type=none, Host_VolumeManager=lvm, Host_Cluster=no, Host_Virtualization=yes, Virtualization_Flavor=VMware, RBAC_Method=native, Protection_Usage=none

I'm guessing that confirms the credentials are valid. I've also tried switching from HTTPS to HTTP, and adding the RHEL host to trusted.hosts, both to no avail.

 

The LUN that's currently mounted was created before SnapDrive was installed. I doubt that SnapDrive will create a LUN in its current state, but I'll try here in a bit and report back.
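For reference, the credential remove/re-add cycle was just the following (snapdrive prompts for the password on config set):

```
[root@rheltest ~]# snapdrive config delete bradtest-01
[root@rheltest ~]# snapdrive config set root bradtest-01
[root@rheltest ~]# snapdrive config list
```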

