
Icehouse Cluster mode Drivers

I am using a 3-node setup, and Cinder is installed on the controller.

The problem occurs only when creating a volume; the volume ends up in an error state in both the GUI and the CLI.

2014-05-13 10:07:10.890 886 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'84ad078c3d9443788fc87fad6c134005', 'tenant': u'4b82bfee2c3a4f1f906c91fc2ebe00aa', 'user_identity': u'84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -'}

2014-05-13 10:07:10.957 886 ERROR cinder.scheduler.filters.capacity_filter [req-abfd3652-ba17-48c9-aa8c-b2915c57f7a0 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Free capacity not set: volume node info collection broken.

2014-05-13 10:07:10.963 886 ERROR cinder.scheduler.flows.create_volume [req-abfd3652-ba17-48c9-aa8c-b2915c57f7a0 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Failed to schedule_create_volume: No valid host was found.

2014-05-13 10:10:18.702 886 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting

2014-05-13 10:13:59.057 1072 AUDIT cinder.service [-] Starting cinder-scheduler node (version 2014.1)

2014-05-13 10:13:59.188 1072 ERROR oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] AMQP server on controller:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.

2014-05-13 10:14:00.190 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Reconnecting to AMQP server on controller:5672

2014-05-13 10:14:00.191 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Delaying reconnect for 1.0 seconds...

2014-05-13 10:14:01.202 1072 ERROR oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] AMQP server on controller:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 3 seconds.

2014-05-13 10:14:04.202 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Reconnecting to AMQP server on controller:5672

2014-05-13 10:14:04.203 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Delaying reconnect for 1.0 seconds...

2014-05-13 10:14:05.213 1072 ERROR oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] AMQP server on controller:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 5 seconds.

2014-05-13 10:14:10.217 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Reconnecting to AMQP server on controller:5672

2014-05-13 10:14:10.217 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Delaying reconnect for 1.0 seconds...

2014-05-13 10:14:11.230 1072 INFO oslo.messaging._drivers.impl_rabbit [req-8b5d1942-11a3-403d-b4bf-0f86c1823bb5 - - - - -] Connected to AMQP server on controller:5672

2014-05-13 10:14:11.506 1072 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

2014-05-13 10:43:35.668 1072 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'84ad078c3d9443788fc87fad6c134005', 'tenant': u'4b82bfee2c3a4f1f906c91fc2ebe00aa', 'user_identity': u'84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -'}

2014-05-13 10:43:35.734 1072 ERROR cinder.scheduler.filters.capacity_filter [req-9ecef6c8-9b4f-46d7-881f-4d51ba92aa37 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Free capacity not set: volume node info collection broken.

2014-05-13 10:43:35.741 1072 ERROR cinder.scheduler.flows.create_volume [req-9ecef6c8-9b4f-46d7-881f-4d51ba92aa37 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Failed to schedule_create_volume: No valid host was found.

2014-05-13 10:48:07.162 1072 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting

2014-05-13 10:48:07.697 3031 AUDIT cinder.service [-] Starting cinder-scheduler node (version 2014.1)

2014-05-13 10:48:07.716 3031 INFO oslo.messaging._drivers.impl_rabbit [req-c72cfc0a-594a-42ff-8c70-5fa953617060 - - - - -] Connected to AMQP server on controller:5672

2014-05-13 10:48:07.998 3031 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

2014-05-13 10:50:53.103 3031 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting

2014-05-13 10:50:53.614 3231 AUDIT cinder.service [-] Starting cinder-scheduler node (version 2014.1)

2014-05-13 10:50:53.628 3231 INFO oslo.messaging._drivers.impl_rabbit [req-f6ffb37e-a890-4bd5-990c-f65353f4c48d - - - - -] Connected to AMQP server on controller:5672

2014-05-13 10:50:53.915 3231 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

2014-05-13 10:54:00.178 3231 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'84ad078c3d9443788fc87fad6c134005', 'tenant': u'4b82bfee2c3a4f1f906c91fc2ebe00aa', 'user_identity': u'84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -'}

2014-05-13 10:54:00.246 3231 ERROR cinder.scheduler.filters.capacity_filter [req-adb57242-336b-4a4e-b269-4fa53998bf97 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Free capacity not set: volume node info collection broken.

2014-05-13 10:54:00.251 3231 ERROR cinder.scheduler.flows.create_volume [req-adb57242-336b-4a4e-b269-4fa53998bf97 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Failed to schedule_create_volume: No valid host was found.

volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver

netapp_server_hostname=data lif ip address

netapp_server_port=80

netapp_storage_protocol=nfs

netapp_storage_family=ontap_cluster

netapp_vserver=open

netapp_login=admin

netapp_password=tttttt1!

nfs_shares_config=/etc/cinder/netapp.conf

Re: Icehouse Cluster mode Drivers

Adding more details:

If I uncomment the configuration lines below in cinder.conf, I get this error:

rootwrap_config = /etc/cinder/rootwrap.conf

iscsi_helper = tgtadm

volume_name_template = volume-%s

volume_group = cinder-volumes

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes

2014-05-13 11:18:37.712 4241 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)

2014-05-13 11:18:37.714 4241 INFO cinder.volume.manager [req-cb093ffc-9e69-49c0-872b-2b5ae19e83cd - - - - -] Starting volume driver NetAppDriver (1.0.0)

2014-05-13 11:18:37.781 4241 ERROR cinder.volume.manager [req-cb093ffc-9e69-49c0-872b-2b5ae19e83cd - - - - -] Error encountered during initialization of driver: NetAppDriver

2014-05-13 11:18:37.781 4241 ERROR cinder.volume.manager [req-cb093ffc-9e69-49c0-872b-2b5ae19e83cd - - - - -] NetApp api failed. Reason - Unexpected error:<urlopen error [Errno 111] ECONNREFUSED>

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager Traceback (most recent call last):

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 242, in init_host

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     self.driver.do_setup(ctxt)

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/netapp/nfs.py", line 654, in do_setup

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     self._do_custom_setup(self._client)

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/netapp/nfs.py", line 734, in _do_custom_setup

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     (major, minor) = self._get_ontapi_version()

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/netapp/nfs.py", line 694, in _get_ontapi_version

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     res = self._client.invoke_successfully(ontapi_version, False)

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/netapp/api.py", line 213, in invoke_successfully

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     result = self.invoke_elem(na_element, enable_tunneling)

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/netapp/api.py", line 201, in invoke_elem

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager     raise NaApiError('Unexpected error', e)

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager NaApiError: NetApp api failed. Reason - Unexpected error:<urlopen error [Errno 111] ECONNREFUSED>

2014-05-13 11:18:37.781 4241 TRACE cinder.volume.manager

2014-05-13 11:18:38.088 4241 INFO oslo.messaging._drivers.impl_rabbit [req-cb093ffc-9e69-49c0-872b-2b5ae19e83cd - - - - -] Connected to AMQP server on controller:5672
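For what it's worth, "[Errno 111] ECONNREFUSED" in the traceback above means the driver could not open a plain TCP connection to netapp_server_hostname on netapp_server_port. A quick way to rule out basic reachability before digging into driver config, sketched in Python (the tcp_reachable helper is just an illustration, not part of Cinder):

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The address and port below are the ones from this thread's setup;
# substitute your own management LIF and netapp_server_port value.
print(tcp_reachable("10.238.229.42", 80))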

Re: Icehouse Cluster mode Drivers

Hi Pravinp,

I think the problem lies in your cinder.conf file.  You stated you have the following line:

netapp_server_hostname=data lif ip address

However, netapp_server_hostname needs to be the IP or hostname of the management interface, not the data interface.  Preferably it should be the cluster address, though a vserver management address can be used (with some limits on functionality).
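As a sketch, if your cluster management LIF were 10.238.229.42, the relevant cinder.conf lines would read (example address; substitute your own):

```ini
# cinder.conf (NetApp driver options): point at a management LIF,
# not a data LIF
netapp_server_hostname=10.238.229.42
netapp_server_port=80
```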

Re: Icehouse Cluster mode Drivers

Thanks..

I tried both. Below are the IP address details. When I change it to the cluster_mgmt address I get errors in cinder-scheduler.log, and creating a volume fails with the same error status in the GUI and CLI.

open1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
open
            data         up/up    10.238.229.43/25   open1-01      e0a     true
open1
            cluster_mgmt up/up    10.238.229.42/25   open1-01      e0a     true
open1-01
            mgmt1        up/up    10.238.229.41/25   open1-01      e0d     true
3 entries were displayed.

cinder-scheduler.log

2014-05-13 17:00:36.335 6395 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
2014-05-13 17:01:00.234 6395 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'84ad078c3d9443788fc87fad6c134005', 'tenant': u'4b82bfee2c3a4f1f906c91fc2ebe00aa', 'user_identity': u'84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -'}
2014-05-13 17:01:00.317 6395 ERROR cinder.scheduler.filters.capacity_filter [req-f000d37a-7475-48d4-b7f5-2cca6d1f26af 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Free capacity not set: volume node info collection broken.
2014-05-13 17:01:00.323 6395 ERROR cinder.scheduler.flows.create_volume [req-f000d37a-7475-48d4-b7f5-2cca6d1f26af 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] Failed to schedule_create_volume: No valid host was found.

volume log

2014-05-13 17:00:36.058 6249 INFO cinder.volume.manager [req-d4079530-34af-42f1-9801-7262023ca825 - - - - -] Updating volume status
2014-05-13 17:00:36.059 6249 WARNING cinder.volume.manager [req-d4079530-34af-42f1-9801-7262023ca825 - - - - -] Unable to update stats, NetAppDriver -1.0.0  driver is uninitialized.
2014-05-13 17:00:36.070 6249 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
2014-05-13 17:00:36.867 6249 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2014-05-13 17:00:36.876 6229 INFO cinder.openstack.common.service [-] Caught SIGTERM, stopping children
2014-05-13 17:00:36.876 6229 INFO cinder.openstack.common.service [-] Waiting on 1 children to exit
2014-05-13 17:00:36.876 6229 INFO cinder.openstack.common.service [-] Child 6249 exited with status 1
2014-05-13 17:00:37.353 6435 INFO cinder.volume.drivers.netapp.common [-] Requested unified config: ontap_cluster and nfs
2014-05-13 17:00:37.373 6435 INFO cinder.volume.drivers.netapp.common [-] NetApp driver of family ontap_cluster and protocol nfs loaded
2014-05-13 17:00:37.374 6435 INFO cinder.openstack.common.service [-] Starting 1 workers
2014-05-13 17:00:37.377 6435 INFO cinder.openstack.common.service [-] Started child 6441
2014-05-13 17:00:37.382 6441 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
2014-05-13 17:00:37.384 6441 INFO cinder.volume.manager [req-81e335ba-121d-4a4c-9adc-33326cab6a7d - - - - -] Starting volume driver NetAppDriver (1.0.0)
2014-05-13 17:00:37.500 6441 INFO cinder.volume.drivers.netapp.nfs [req-81e335ba-121d-4a4c-9adc-33326cab6a7d - - - - -] Shares on vserver open will only be used for provisioning.
2014-05-13 17:00:41.487 6441 INFO cinder.volume.manager [req-81e335ba-121d-4a4c-9adc-33326cab6a7d - - - - -] Updating volume status

api log

2014-05-13 17:01:01.347 6421 AUDIT cinder.api.v1.volumes [req-62a0efdb-47b6-4833-9d2d-4ce40d57ab55 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] vol={'migration_status': None, 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2014, 5, 13, 11, 26, 50), 'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'1207cd7d-d614-4266-90e8-bf08260c8e88', 'size': 2L, 'user_id': u'84ad078c3d9443788fc87fad6c134005', 'attach_time': None, 'attached_host': None, 'display_description': u'', 'volume_admin_metadata': [], 'encryption_key_id': None, 'project_id': u'4b82bfee2c3a4f1f906c91fc2ebe00aa', 'launched_at': None, 'scheduled_at': None, 'status': u'error', 'volume_type_id': u'6397b985-4b8f-4ce1-809b-cd5feb4a7dba', 'deleted': False, 'provider_location': None, 'host': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'test', 'instance_uuid': None, 'bootable': False, 'created_at': datetime.datetime(2014, 5, 13, 11, 26, 50), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x7fcf8364a5d0>, '_name_id': None, 'volume_metadata': []}

2014-05-13 17:01:01.404 6421 INFO cinder.api.openstack.wsgi [req-62a0efdb-47b6-4833-9d2d-4ce40d57ab55 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] http://controller:8776/v1/4b82bfee2c3a4f1f906c91fc2ebe00aa/volumes/detail returned with HTTP 200

2014-05-13 17:01:01.405 6421 INFO eventlet.wsgi.server [req-62a0efdb-47b6-4833-9d2d-4ce40d57ab55 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] 10.238.229.11 - - [13/May/2014 17:01:01] "GET /v1/4b82bfee2c3a4f1f906c91fc2ebe00aa/volumes/detail HTTP/1.1" 200 1301 0.076219

2014-05-13 17:01:01.408 6421 INFO eventlet.wsgi.server [-] (6421) accepted ('10.238.229.11', 42365)

2014-05-13 17:01:01.411 6421 INFO cinder.api.openstack.wsgi [req-29a99544-2b39-44f1-ae1a-3c19d11e433f 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] GET http://controller:8776/v1/4b82bfee2c3a4f1f906c91fc2ebe00aa/snapshots/detail

2014-05-13 17:01:01.424 6421 INFO cinder.api.openstack.wsgi [req-29a99544-2b39-44f1-ae1a-3c19d11e433f 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] http://controller:8776/v1/4b82bfee2c3a4f1f906c91fc2ebe00aa/snapshots/detail returned with HTTP 200

2014-05-13 17:01:01.424 6421 INFO eventlet.wsgi.server [req-29a99544-2b39-44f1-ae1a-3c19d11e433f 84ad078c3d9443788fc87fad6c134005 4b82bfee2c3a4f1f906c91fc2ebe00aa - - -] 10.238.229.11 - - [13/May/2014 17:01:01] "GET /v1/4b82bfee2c3a4f1f906c91fc2ebe00aa/snapshots/detail HTTP/1.1" 200 255 0.015170

Re: Icehouse Cluster mode Drivers

Are you restarting the cinder-volume and cinder-scheduler processes after you change the cinder.conf file?

Can you capture the cinder-volume.log output during the startup phase?

Re: Icehouse Cluster mode Drivers

Sorry, ignore the request for the cinder-volume.log startup output; I see it in the post above. Can you provide the contents of your NFS shares file (/etc/cinder/netapp.conf)?

Re: Icehouse Cluster mode Drivers

I changed it once I had changed the address in cinder.conf. It contains:

10.238.229.42:/open_test

Re: Icehouse Cluster mode Drivers

It looks like your NFS export in the netapp.conf is pointing to your cluster mgmt interface. This one should point to the data interface.

The driver config should point to a management interface, while the shares config should point to a data interface.
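Putting the two together, using the LIFs from the network interface show output earlier in the thread (a sketch; substitute your own addresses), cinder.conf carries the cluster management LIF and the shares file lists the export via the data LIF:

```ini
# /etc/cinder/cinder.conf
netapp_server_hostname=10.238.229.42

# /etc/cinder/netapp.conf (one NFS export per line, via the data LIF)
10.238.229.43:/open_test
```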

Re: Icehouse Cluster mode Drivers

Thanks! That seems to have been the problem.